Google’s recently unveiled artificial intelligence feature, “AI Overview,” has come under fire for providing inaccurate and potentially harmful summaries in response to user searches. The search giant has disabled certain queries for the feature after reports emerged of erroneous outputs.
One example involved a user asking the AI system how to keep cheese from sliding off pizza, to which it reportedly suggested using glue. In another incident, the AI claimed that two dogs owned hotels, even pointing to a non-existent dog statue as evidence.
While some of these inaccuracies may seem amusing, the core concern is that the consumer-facing model responsible for generating “AI Overview” content delivers both accurate and inaccurate results with equal confidence.
Google’s response to the issue, according to spokesperson Meghann Farnsworth, has been to remove queries triggering inaccurate results as they arise. Essentially, the company is playing a metaphorical game of whack-a-mole with its AI problem.
Adding to the confusion, Google appears to place some blame on users, suggesting the errors stem from “uncommon queries.” It remains unclear how users are expected to avoid such queries, especially given that Google’s AI system often provides different answers to the same question when asked multiple times.
Cointelegraph attempted to reach out to Google for further clarification but did not receive an immediate response.
Despite the current flaws in Google’s AI system, Elon Musk, founder of rival AI company xAI, believes that machines will surpass human capabilities by the end of 2025. Musk recently expressed his belief that xAI could catch up to OpenAI and Google DeepMind by the end of 2024.
In conclusion, while Google’s AI system requires further development to iron out its issues, Elon Musk remains optimistic about the future of AI and its potential to outperform humans.