OpenAI has introduced two initiatives to enhance transparency in online content and help users distinguish between real and AI-generated content.
The first initiative is joining the Steering Committee of the Coalition for Content Provenance and Authenticity (C2PA). C2PA aims to establish a uniform standard for certifying digital content, to be adopted by a range of entities including software companies, camera manufacturers, and online platforms. For AI-generated content, the goal is a web standard that records the creation source in the content's embedded metadata.
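To illustrate the idea behind provenance metadata, here is a minimal sketch of a C2PA-style record that binds a content hash to its declared source. This is illustrative only: the field names and functions are invented for this example, and the real C2PA specification defines a much richer, cryptographically signed manifest format.

```python
import hashlib

def make_manifest(content: bytes, generator: str) -> dict:
    """Build a minimal provenance record (illustrative, not the C2PA spec):
    it stores which tool produced the content and a hash of the bytes."""
    return {
        "claim_generator": generator,  # e.g. the AI model that created the content
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }

def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Check that the content still matches the hash recorded at creation."""
    return hashlib.sha256(content).hexdigest() == manifest["content_sha256"]

image_bytes = b"...pixel data..."
manifest = make_manifest(image_bytes, "ExampleImageModel/1.0")
print(verify_manifest(image_bytes, manifest))         # True: content unchanged
print(verify_manifest(image_bytes + b"x", manifest))  # False: content was edited
```

In the real standard, the manifest is cryptographically signed so that the provenance claim itself cannot be forged or silently altered.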
The second initiative involves developing new provenance methods to enhance the integrity of digital content. This includes implementing tamper-resistant watermarking and detection classifiers. Tamper-resistant watermarking involves marking digital content like audio with an invisible signal that is hard to remove. Detection classifiers are tools that use artificial intelligence to assess the likelihood that content originated from generative models.
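A detection classifier of this kind outputs a probability rather than a yes/no verdict. The toy sketch below shows the general shape of such a scorer; the feature names and weights are invented for illustration, and OpenAI's actual classifier is a trained neural model, not a hand-weighted formula.

```python
import math

def detection_score(features: dict, weights: dict, bias: float) -> float:
    """Toy logistic scorer: returns the estimated probability that a piece
    of content came from a generative model. Features and weights here are
    hypothetical; a real classifier learns them from training data."""
    z = bias + sum(weights[k] * features.get(k, 0.0) for k in weights)
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid maps the score into (0, 1)

# Hypothetical image features suggesting AI origin
score = detection_score(
    {"noise_uniformity": 0.9, "jpeg_artifacts": 0.1},
    {"noise_uniformity": 3.0, "jpeg_artifacts": -2.0},
    bias=-1.0,
)
print(f"P(AI-generated) = {score:.2f}")
```

The probabilistic output matters in practice: a platform can set different thresholds for labeling, down-ranking, or removing content depending on how costly a false positive is.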
OpenAI is currently testing these new approaches with external researchers to gauge how well its systems perform at detecting AI-generated images. The organization believes that improved visual detection methods will bring greater transparency to AI imagery.
This is a crucial concern given the rising use of AI-generated images and the coming expansion of AI-generated video. As the technology improves, distinguishing what is real will only get harder, making robust digital watermarking an essential consideration. OpenAI's initiatives carry particular weight given its prominence in the current AI space.