Generative AI is significantly amplifying ad fraud across online ad platforms, including connected TV (CTV) and streaming audio. According to DoubleVerify, generative AI drove a 23% increase in new fraud schemes in 2023 and a 58% rise in ad fraud on streaming platforms. The report, which analyzed one trillion impressions for 2,000 brands across 100 markets, also found a 269% increase in existing bot fraud schemes. Fraud types included bot fraud, site fraud, app fraud, hijacked devices, non-human data center traffic, and injected ad events.
Industry Changes and Trends
The advertising industry is rapidly changing, with more information for advertisers to manage. DoubleVerify’s Global Insights: 2024 Trends Report highlights:
- The widespread use of Artificial Intelligence (AI), accelerating media and advertising transformation.
- Increasing popularity of attention metrics, with 47% of media buyers planning to use them in 2024.
- Rapid growth of Made for Advertising (MFA) content.
- Increasingly popular Retail Media Networks (RMNs) offering specialized inventory.
- Sustainability measurement showing higher media quality leads to lower carbon emissions.
Examples of AI-Driven Scams
Two notable scams, FM Scam and CycloneBot, were highlighted:
- CycloneBot: Targets CTV, simulating longer viewing sessions on fake devices and generating four times the traffic volume of older scams.
- FM Scam: Targets streaming audio, generating fake audio traffic that appears legitimate across various devices. This scam spoofed 500,000 devices in March 2024, marking the first instance of ad fraud targeting smart speakers.
Financial Impact
These scams are costly for advertisers:
- FM Scam: Part of a larger scheme called BeatSting, siphoning more than $1 million a month from advertisers.
- CycloneBot: Fakes up to 250 million ad requests daily, costing advertisers up to $7.5 million monthly.
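As a back-of-envelope check, the CycloneBot figures above are internally consistent with an effective CPM of roughly $1. The sketch below assumes a 30-day month and that every faked request is billed as one impression; both are simplifying assumptions, not details from the report:

```python
# Back-of-envelope check of the reported CycloneBot figures.
# Assumptions (not from the report): a 30-day month, and that every
# faked ad request is billed as a single impression.
fake_requests_per_day = 250_000_000  # up to 250 million faked ad requests daily
monthly_cost_usd = 7_500_000         # up to $7.5 million in advertiser spend monthly

monthly_impressions = fake_requests_per_day * 30
implied_cpm = monthly_cost_usd / monthly_impressions * 1000  # cost per 1,000 impressions

print(f"Implied CPM: ${implied_cpm:.2f}")  # prints: Implied CPM: $1.00
```

A CPM around $1 is plausible for low-quality programmatic inventory, which is consistent with the report's two headline numbers describing the same scheme.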
Broader Concerns and Future Implications
Generative AI is also increasing the prevalence of MFA content, which has risen nearly 20% online. This has degraded media quality: 57% of surveyed global advertisers view AI-generated content as a challenge. AI-generated reviews can also make harmful apps appear legitimate, complicating fraud investigations.
Misinformation and Ad Fraud
Generative AI is also fueling misinformation. A study by NewsGuard, Stanford University, and Carnegie Mellon found that nearly 75% of misinformation websites relied on advertising. Between 46% and 82% of common advertisers inadvertently had ads served on these sites. Researchers recommend increasing ad transparency to address this issue.
Challenges in Combatting Ad Fraud
DoubleVerify uses AI to detect fraud schemes, but there is doubt that any fix is surefire. Measurement firms face scrutiny over potential conflicts of interest, and some critics view the promise of fighting harmful AI with "good AI" as irresponsible, arguing that more effort should go into improving detection products than into sales pitches.
In summary, the adoption of generative AI is significantly escalating ad fraud, making it a growing concern for both advertisers and publishers. The financial and operational impacts are substantial, and the broader implications for misinformation and media quality are alarming.