Social media users have become unofficial "red teams" for AI features, highlighting and mocking their errors. For instance, Google's AI search feature erroneously suggested that running with scissors has health benefits, pulling the claim from a comedy blog. Mistakes like these spread widely online, effectively crowdsourcing the identification of AI flaws.
Key Points:
- Red Teaming: In cybersecurity, "red teams" are ethical hackers who test products for vulnerabilities. Social media users are now playing a similar role for AI products by exposing their errors.
- Google's AI Errors: Despite extensive testing, Google's AI has made notable mistakes, such as suggesting glue to keep cheese from sliding off pizza and giving dangerous advice on treating rattlesnake bites.
- Meme Culture: The failures of AI products have become memes, serving as both entertainment and feedback for developers.
- Company Responses: Tech companies often downplay these errors, saying they stem from uncommon queries and do not reflect typical user experiences.
- Data Licensing Deals: The errors also raise questions about AI content deals. Google has a reported $60 million content-licensing agreement with Reddit, and OpenAI and other companies have struck similar deals.
- Viral Mistakes: Viral AI errors can further confuse AI models as the incorrect information circulates online and gets absorbed back into their data. For example, after Google's AI claimed a dog had played in the NHL, subsequent queries on the topic surfaced news articles about that very mistake.
Examples of AI Mistakes:
- Google's AI suggesting glue to keep cheese on pizza.
- Incorrect advice on handling rattlesnake bites.
- Misidentifying a poisonous mushroom as a common white button mushroom.
- AI mistakenly claiming a dog played in the NHL.
These incidents underscore the challenges of training AI models on internet data, where misinformation is prevalent. As the saying goes, "garbage in, garbage out."
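To make that feedback loop concrete, here is a deliberately simplified sketch in Python. It does not reflect how Google's or anyone else's real systems work; the frequency-based "answer engine," the corpus, and the claims are all invented for illustration. It shows how coverage of an AI error can feed the wrong claim back into the very data the system draws on.

```python
from collections import Counter

# Hypothetical toy "answer engine" (not any real product): it returns
# whichever claim about a topic appears most often in its corpus of
# scraped web documents.
def most_common_claim(corpus: list[str], topic: str) -> str:
    matching = [doc for doc in corpus if topic in doc]
    return Counter(matching).most_common(1)[0][0]

corpus = [
    "no dog has ever played in the nhl",   # accurate
    "a dog once played in the nhl",        # joke post, scraped as if factual
]

# Before the error goes viral: the accurate claim wins the tie,
# since Counter.most_common breaks ties by first insertion order.
print(most_common_claim(corpus, "nhl"))

# After the error goes viral: articles covering the AI's mistake repeat
# the wrong claim verbatim, and each one is scraped into the corpus.
for _ in range(3):
    corpus.append("a dog once played in the nhl")

# The wrong claim now outnumbers the correct one, so a frequency-based
# system confidently serves the error it helped popularize.
print(most_common_claim(corpus, "nhl"))
```

Real pipelines are vastly more sophisticated, but the underlying dynamic, repetition being mistaken for reliability, is the same "garbage in, garbage out" problem described above.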