Nvidia’s CEO, Jensen Huang, recently expressed his belief that human-level artificial intelligence (AI) will likely be achieved within the next five years. He also stated that solving the challenge of hallucinations, a major issue in the field, will be relatively easy.
Huang shared these thoughts during a keynote at the Nvidia GTC developer conference in San Jose, California, on March 20, where he discussed the concept of artificial general intelligence (AGI). Huang framed the arrival of AGI as a matter of benchmarking: if AGI is defined by the ability to pass a specific set of tests, its arrival can be predicted, though he did not say which tests he had in mind. In the context of AI, “general” typically means a system that can perform any task an average human could, given enough resources.
The CEO also touched on the issue of hallucinations, which occur when generative AI systems built on large language models produce plausible-sounding but incorrect information that is not grounded in their training data. Huang believes solving this problem is relatively straightforward: add a rule requiring the AI to look up supporting information for every answer it provides. Rather than simply generating a response, the AI would first conduct research to determine the best answer.
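Huang’s proposed fix resembles what practitioners call retrieval-augmented generation (RAG), in which a system retrieves supporting material before it answers. The sketch below illustrates that retrieval-first pattern in miniature; the toy document store, keyword scorer, and abstention rule are illustrative assumptions, not any particular vendor’s API.

```python
# Minimal sketch of a retrieval-first ("look up the answer before responding")
# pipeline. The tiny document store, keyword scorer, and answer stub below are
# illustrative assumptions, not any specific vendor's API.

DOCUMENTS = [
    "Nvidia GTC 2024 took place in San Jose, California.",
    "Large language models can hallucinate unsupported claims.",
    "Limit orders execute only at a specified price or better.",
]

def retrieve(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(query_terms & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def answer(query: str) -> str:
    """Only answer when a supporting document is found; otherwise abstain."""
    sources = retrieve(query, DOCUMENTS)
    query_terms = set(query.lower().split())
    if not sources or not query_terms & set(sources[0].lower().split()):
        return "No supporting source found; declining to answer."
    # A real system would pass the retrieved sources to an LLM as grounding
    # context; here we simply surface the source itself.
    return f"Answer grounded in source: {sources[0]}"

print(answer("Where was Nvidia GTC held?"))
```

A production system would replace the keyword scorer with vector search and feed the retrieved sources to the model as grounding context, but the core discipline is the same: no source, no answer.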
Some generative AI systems, such as Microsoft’s Copilot, Google’s Gemini, OpenAI’s ChatGPT, and Anthropic’s Claude 3, already offer the ability to cite internet sources for their outputs. Even so, completely resolving the issue of hallucinations could have a transformative impact on various industries, including finance and cryptocurrency.
Currently, the creators of these AI systems caution users to exercise care when relying on generative AI for tasks requiring accuracy. For example, the user interface for ChatGPT warns users that the system may make mistakes and advises them to verify important information independently.
In finance and cryptocurrency, accuracy is critical to success, which means that, under current conditions, generative AI systems have limited utility for professionals in these fields.
While there are experiments involving trading bots powered by generative AI, these bots are typically constrained by strict rules that prevent unrestricted autonomous execution. In other words, they can only execute trades within predefined bounds, similar to placing limit orders.
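As a rough illustration of that guardrail pattern, the sketch below lets a model propose trades while hard-coded rules decide whether anything is executed. The proposal fields, symbol whitelist, and size cap are hypothetical examples, not a real exchange API.

```python
# Illustrative sketch of the "strict rules" guardrail pattern: a model may
# propose trades, but hard-coded limits decide whether anything executes.
# The proposal fields and thresholds here are hypothetical examples.

from dataclasses import dataclass

@dataclass
class TradeProposal:
    symbol: str
    side: str           # "buy" or "sell"
    quantity: float
    limit_price: float  # never a market order: the price is always capped

MAX_QUANTITY = 1.0                      # hard cap per order
ALLOWED_SYMBOLS = {"BTC-USD", "ETH-USD"}

def validate(p: TradeProposal) -> bool:
    """Reject any proposal that falls outside the predefined rule set."""
    return (
        p.symbol in ALLOWED_SYMBOLS
        and p.side in {"buy", "sell"}
        and 0 < p.quantity <= MAX_QUANTITY
        and p.limit_price > 0
    )

def execute(p: TradeProposal) -> str:
    if not validate(p):
        return f"Rejected: {p} violates the rule set."
    # In practice this would place a limit order through an exchange API.
    return f"Placed limit order: {p.side} {p.quantity} {p.symbol} @ {p.limit_price}"

print(execute(TradeProposal("BTC-USD", "buy", 0.5, 60000.0)))
print(execute(TradeProposal("DOGE-USD", "buy", 100.0, 0.1)))
```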
If generative AI models could eliminate hallucinations and produce consistently accurate outputs, they could potentially conduct trades and make financial recommendations and decisions without human intervention, turning fully automated trading into a reality.