OpenAI has said it is building a tool to detect images created by its text-to-image generator DALL-E 3, with early testing showing 98% accuracy.
Internal testing of the detection tool showed “high accuracy” in distinguishing between non-AI images and those generated by DALL-E 3, the company said in a blog post.
The company said it is now releasing the tool to a first group of outside testers, including research labs and research-oriented journalism nonprofits.
“The classifier handles common modifications like compression, cropping, and saturation changes with minimal impact on its performance,” OpenAI said.
However, other modifications “can reduce performance,” and the tool was less accurate at identifying images generated by other AI models.
“Understanding when and where a classifier may underperform is critical for those making decisions based on its results,” OpenAI said.
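OpenAI has not published an API for the classifier, but the kind of robustness check it describes is straightforward to picture. Below is a minimal sketch in Python, assuming a hypothetical detect_ai_probability function as a stand-in for the unreleased tool; the image modifications themselves (JPEG compression, cropping, saturation changes) use the Pillow library.

```python
# Sketch of probing a detection classifier's robustness to common edits.
# `detect_ai_probability` is a hypothetical stand-in for OpenAI's
# unreleased classifier; the transforms use Pillow.
import io
from PIL import Image, ImageEnhance

def detect_ai_probability(img: Image.Image) -> float:
    # Placeholder: swap in a call to the real detection tool here.
    return 0.5

def jpeg_compress(img: Image.Image, quality: int = 50) -> Image.Image:
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    return Image.open(buf)

def center_crop(img: Image.Image, fraction: float = 0.8) -> Image.Image:
    w, h = img.size
    cw, ch = int(w * fraction), int(h * fraction)
    left, top = (w - cw) // 2, (h - ch) // 2
    return img.crop((left, top, left + cw, top + ch))

def boost_saturation(img: Image.Image, factor: float = 1.5) -> Image.Image:
    return ImageEnhance.Color(img.convert("RGB")).enhance(factor)

def probe(img: Image.Image) -> dict[str, float]:
    # Score the original and each modified copy; a large gap between the
    # original's score and a variant's score shows where the classifier
    # underperforms.
    variants = {
        "original": img,
        "jpeg_q50": jpeg_compress(img),
        "crop_80pct": center_crop(img),
        "saturation_1.5x": boost_saturation(img),
    }
    return {name: detect_ai_probability(v) for name, v in variants.items()}
```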
Amid concerns about the impact that AI-generated content could have on this year’s global elections, the Microsoft-backed company also said it would start adding tamper-resistant watermarks to digital content such as audio and photos.
“These tools aim to be more resistant to attempts at removing signals about the origin of content,” OpenAI said.
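OpenAI has not disclosed how its watermarks work, but the basic idea behind signal-level watermarking can be shown with a textbook spread-spectrum sketch: embed a low-amplitude pseudorandom signal keyed by a secret seed, then detect it by correlation. The NumPy toy below is an illustration of that general technique, not OpenAI’s scheme; a genuinely tamper-resistant design would be far more elaborate.

```python
# Toy spread-spectrum audio watermark: not OpenAI's method, just an
# illustration of keyed embedding and correlation-based detection.
import numpy as np

STRENGTH = 0.005  # watermark amplitude, kept small to stay inaudible

def embed_watermark(audio: np.ndarray, seed: int) -> np.ndarray:
    # Add a low-amplitude pseudorandom signal derived from a secret seed.
    mark = np.random.default_rng(seed).standard_normal(audio.shape)
    return audio + STRENGTH * mark

def detect_watermark(audio: np.ndarray, seed: int) -> bool:
    # Correlate with the keyed signal: the score averages ~STRENGTH when
    # the watermark is present and ~0 when it is absent.
    mark = np.random.default_rng(seed).standard_normal(audio.shape)
    score = float(np.dot(audio, mark)) / audio.size
    return score > STRENGTH / 2

# Usage: a one-second 440 Hz tone sampled at 48 kHz.
tone = 0.1 * np.sin(2 * np.pi * 440 * np.arange(48_000) / 48_000)
marked = embed_watermark(tone, seed=42)
assert detect_watermark(marked, seed=42)
assert not detect_watermark(tone, seed=42)
```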
Additionally, the company said it was joining Microsoft to launch a $2 million “societal resilience” fund to support AI education and understanding.
Earlier this year, OpenAI and Microsoft, along with Google and Meta, were among at least 20 big tech companies that signed an agreement aimed at preventing the distribution of deceptive AI content during the 2024 global election cycle.
More than four billion people across more than 40 countries are set to vote in elections this year, and generative AI is already being used to influence politics and even to convince people not to vote.