YouTube has updated its policy to allow individuals to request the removal of AI-generated or synthetic content that simulates their face or voice. This change, implemented in June, expands YouTube's privacy request process and is part of its responsible AI agenda introduced in November.
Key Points
- Privacy Violation Requests: Affected parties can request the removal of AI-generated content as a privacy violation rather than for being misleading.
- First-Party Claims: Requests must come from the affected individual, with exceptions for minors, deceased individuals, and people without computer access.
- Judgment Criteria: YouTube will evaluate complaints based on factors like disclosure of synthetic content, unique identification of a person, and whether the content is parody, satire, or in the public interest.
- Public Figures: Special consideration is given to content involving public figures, especially if it shows them in sensitive situations like endorsing products or political candidates.
- Uploader Notification: Content uploaders have 48 hours to address the complaint before YouTube initiates a review.
- Complete Removal: If a takedown is approved, the video must be removed entirely, or any personal information must be stripped from its title, description, and tags. Blurring faces is an acceptable option, but simply making the video private is not sufficient.
Additional Measures
- Disclosure Tools: In March, YouTube introduced a tool in Creator Studio for creators to disclose when content is made with synthetic media, including generative AI.
- Crowdsourced Notes: A feature is being tested to allow users to add notes providing context on videos, such as indicating if they are parodies or misleading.
- Generative AI Use: YouTube continues to experiment with generative AI, including tools for summarizing comments and answering questions about videos.
Community Guidelines
- Labeling AI Content: Simply labeling content as AI-generated does not exempt it from removal; it must still comply with YouTube’s Community Guidelines.
- Privacy Complaints: Receiving a privacy complaint does not automatically result in a Community Guidelines strike.
This policy change reflects YouTube's ongoing efforts to manage the impact of AI-generated content on its platform while balancing privacy concerns with content creators' rights.