Google continues to struggle with the accuracy of its AI Overviews in search results, as viral memes and user reports keep highlighting flaws. Google has also been scaling the feature back: the share of queries showing an AI Overview has dropped sharply, from 84% in mid-April to less than 15%, a decline that coincides with the feature's U.S. launch, a rollout plagued by numerous incorrect and potentially dangerous AI-generated answers.
Viral Memes and Incorrect AI Answers
A notable example is the viral meme about adding glue to pizza, popularized by a widely shared incident in which Katie Notopoulos actually made (and ate) a glue-topped pizza. Despite the absurdity of the query, Google's AI still serves incorrect information, citing coverage of the meme itself as if it were a valid answer. The Verge confirmed the issue by replicating the query and receiving the same erroneous result.
Comparison with Other AI Systems
Other AI systems, such as Perplexity.AI and ChatGPT, do not recommend adding glue to pizza, highlighting the gap in accuracy between platforms. Perplexity.AI explicitly advises against using glue because it is toxic, and ChatGPT likewise warns against it on health grounds.
Broader Implications and Other Errors
The problem extends beyond humorous queries. For instance, Google's AI struggles to provide accurate answers about its own products. Verge editor Richard Lawler found that Google's AI gave incorrect instructions on how to take screenshots in Chrome's Incognito mode, suggesting methods that either do not work or are not applicable.
Conclusion
These ongoing issues with AI Overviews not only undermine the usefulness of Google's search results but also perpetuate misinformation. Worse, reporting on the errors can create a feedback loop: articles about the mistakes become fresh source material for the AI, which may then cite or train on that flawed data and entrench the errors further.