Large language models (LLMs) like ChatGPT and Claude are not expected to bring about human-level artificial intelligence (AI) anytime soon, according to Yann LeCun, Meta’s chief AI scientist.
LeCun recently spoke with Time magazine about artificial general intelligence (AGI), a theoretical AI system capable of performing any task a human can, given the right resources. While there is no scientific consensus on what AGI would require, Meta founder and CEO Mark Zuckerberg made headlines when he announced that the company would focus on AGI development.
“We’ve reached the conclusion that, in order to create the products we envision, we need to aim for general intelligence,” said Zuckerberg in a recent interview with The Verge.
LeCun, however, seems to disagree with Zuckerberg, at least semantically. In his discussion with Time, LeCun said he dislikes the term “AGI” and prefers “human-level AI,” pointing out that humans themselves are not general intelligences.
Regarding LLMs, a category of AI that includes Meta’s Llama 2, OpenAI’s ChatGPT, and Google’s Gemini, LeCun believes they are far from matching the intelligence of a cat, let alone that of a human.
LeCun also addressed the ongoing debate over the potential threat posed by open-source AI systems like Meta’s Llama 2, outright dismissing the notion that AI poses a significant danger. When asked whether a human could program a goal of dominance into an AI, LeCun suggested that if such a “bad AI” existed, “smarter, good AIs” would counteract it.
In related news, Bitcoin is on track to surpass Meta in total market value as the cryptocurrency market continues to climb.