OpenAI’s popular AI system, ChatGPT, faced a public crisis between Feb. 20 and 21, leaving users bewildered by its nonsensical and pseudo-Shakespearean responses. As of 8:14 Pacific Standard Time on Feb. 21, however, the issue appears to have been fixed: OpenAI’s status page now states that “ChatGPT is operating normally,” roughly 18 hours after the problem was first reported.
The exact cause of the problem remains unknown, as OpenAI has not yet responded to inquiries. A preliminary analysis suggests that ChatGPT may have encountered difficulties with tokenization, the process by which a model breaks text into smaller units called tokens before processing it, and stitches tokens back into text when generating a reply. Given the complexity of GPT-based language models, OpenAI’s scientists may find it difficult to pinpoint the exact nature of the issue; the team may instead focus on preventive measures, such as safeguards against generating long strings of nonsensical output.
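For readers unfamiliar with the term, here is a minimal sketch of what tokenization looks like in practice, using OpenAI’s open-source tiktoken library (the encoding name is real; the sample text is ours):

```python
# Tokenization sketch: text is encoded into integer token IDs and
# decoded back into text. A fault anywhere on the decoding side could
# plausibly turn a coherent token stream into garbled output.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

text = "ChatGPT is operating normally."
token_ids = enc.encode(text)
print(token_ids)              # a list of integer token IDs
print(enc.decode(token_ids))  # round-trips back to the original text
```

As for safeguards, one plausible (and purely hypothetical) approach is a lightweight monitor that flags a response whose token stream degenerates into repetition, a common signature of runaway generation:

```python
from collections import Counter

def looks_degenerate(token_ids: list[int], window: int = 200,
                     threshold: float = 0.5) -> bool:
    """Hypothetical heuristic: flag output if, within the most recent
    window of tokens, a handful of tokens dominates -- a common
    signature of repetitive, runaway generation."""
    recent = token_ids[-window:]
    if len(recent) < window:
        return False  # not enough output yet to judge
    top_five = Counter(recent).most_common(5)
    dominant_share = sum(count for _, count in top_five) / len(recent)
    return dominant_share > threshold
```

Neither snippet reflects OpenAI’s actual internals, which the company has not disclosed; they only illustrate the concepts in play.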
Initial feedback on social media suggests the damage was largely limited to wasted time, with users waiting on coherent answers to their queries that never arrived.
Nevertheless, this incident highlights the potential for generative AI systems to produce unexpected, hallucinated, or inconsistent output, and such responses can have real consequences, as Air Canada recently learned. A court ordered the airline to partially refund a customer who had been given inaccurate information about booking policies by a customer service chatbot, rejecting the argument that the company was not responsible for what its algorithm said.
In the world of cryptocurrencies, investors are increasingly relying on automated systems built on large language models (LLMs) and GPT technology to manage portfolios and execute trades. ChatGPT’s recent failure serves as a reminder that even the most robust models can encounter unforeseen issues, at any scale.
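Anyone wiring an LLM into an automated pipeline may therefore want a validation layer between the model and any irreversible action. A hypothetical sketch (the field names and limits are ours, not any real trading API):

```python
import json

def parse_trade_instruction(raw_model_output: str) -> dict | None:
    """Hypothetical guardrail: act only on model output that parses as
    strictly structured JSON with expected fields; anything else --
    including fluent-sounding gibberish -- is rejected."""
    try:
        instruction = json.loads(raw_model_output)
    except json.JSONDecodeError:
        return None
    if instruction.get("action") not in {"buy", "sell", "hold"}:
        return None
    amount = instruction.get("amount")
    if not isinstance(amount, (int, float)) or not 0 < amount <= 1000:
        return None
    return instruction

# A well-formed instruction passes; freeform prose fails to parse,
# so the system holds its position instead of trading on noise.
print(parse_trade_instruction('{"action": "buy", "amount": 100}'))
print(parse_trade_instruction("Deliciously yours, your humble servant"))
```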