OpenAI has developed a tool for detecting text generated by ChatGPT, which could, for example, identify students who used the chatbot to complete assignments. However, the company is cautious about releasing it because of the complexities involved and the potential impact on the broader AI ecosystem. The tool relies on text watermarking: it subtly alters ChatGPT's word choices to embed an invisible watermark that a companion detector can later recognize. This approach differs from earlier, largely ineffective AI text detectors, including a tool OpenAI itself discontinued.
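OpenAI has not disclosed how its watermark works. A common scheme in the research literature, however, biases the model's word choices toward a pseudorandom "green" subset of the vocabulary keyed off the preceding word; a detector then checks whether that subset is statistically over-represented. The sketch below is a hypothetical illustration of that general idea, not OpenAI's actual method, and all function names and parameters are invented:

```python
import hashlib
import random


def green_list(prev_token: str, vocab: list[str], fraction: float = 0.5) -> set[str]:
    """Deterministically partition the vocabulary based on the previous token."""
    # Seed a PRNG from a hash of the previous token so the same partition
    # is reproducible by both the generator and the detector.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16) % (2**32)
    rng = random.Random(seed)
    return set(rng.sample(vocab, int(len(vocab) * fraction)))


def green_fraction(tokens: list[str], vocab: list[str]) -> float:
    """Fraction of tokens drawn from their predecessor's green list."""
    # Watermarked text over-represents green tokens; unmarked text
    # should land near the baseline fraction (0.5 here).
    hits = sum(
        1
        for prev, cur in zip(tokens, tokens[1:])
        if cur in green_list(prev, vocab)
    )
    return hits / max(len(tokens) - 1, 1)
```

In such schemes, ordinary text scores near the baseline while watermarked text scores far higher, and a statistical test over many tokens keeps false positives low. The sketch also suggests why globalized tampering defeats detection: translating or rewording the text replaces the words themselves, erasing the statistical bias.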
Key Points
- Text Watermarking Method: This method is technically promising but poses risks, such as susceptibility to circumvention and a potential disproportionate impact on non-native English speakers.
- Detection Focus: The tool would only detect text generated by ChatGPT, not other AI models.
- Challenges: The watermarking method is highly resistant to localized tampering, such as light paraphrasing, but less effective against globalized tampering, such as translating the text or rewording it with another AI model.
- Ethical Considerations: OpenAI is concerned about the potential stigmatization of AI as a writing tool for non-native English speakers and the broader implications of releasing such a tool.
Additional Information
- Previous Efforts: OpenAI previously shut down an AI text detector due to its low accuracy.
- Research Updates: OpenAI updated a blog post to reflect ongoing research and the challenges faced with the watermarking method.
Overall, OpenAI is taking a cautious and deliberate approach to releasing this tool, weighing its technical promise against the potential risks and broader impacts.