OpenAI is facing an increasingly pressing problem: counterfeit images generated by its own technology.
The AI company announced on Tuesday that it is launching a tool to detect content created by its DALL-E 3 text-to-image generator. The company also said it is opening applications to the first batch of testers for its image detection classifier, which “predicts the probability of an image being generated” by DALL-E 3.
“Our goal is to enable independent research that evaluates the effectiveness of the classifier, analyzes its real-world application, presents relevant considerations for such use, and explores the characteristics of AI-generated content,” OpenAI said in a statement.
OpenAI said that internal testing of an early version of its classifier showed high accuracy in distinguishing non-AI-generated images from content created by DALL-E 3. The tool correctly detected 98% of images generated by DALL-E 3, while less than 0.5% of non-AI-generated images were incorrectly flagged as AI-generated.
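To put those two figures in perspective, here is a minimal sketch of how a 98% true-positive rate and a 0.5% false-positive rate combine with the share of AI images in a collection to determine how trustworthy a positive flag is. The 10% base rate used below is a hypothetical assumption for illustration, not a figure from OpenAI.

```python
# Illustrative precision calculation using the reported rates:
# 98% true-positive rate and a <0.5% false-positive rate (0.5% taken
# as the worst case). The base rate of AI images is hypothetical.
def detector_precision(true_positive_rate, false_positive_rate, ai_fraction):
    """Fraction of flagged images that really are AI-generated."""
    flagged_ai = true_positive_rate * ai_fraction        # correctly flagged
    flagged_real = false_positive_rate * (1 - ai_fraction)  # false alarms
    return flagged_ai / (flagged_ai + flagged_real)

# If 10% of images in a collection were made by DALL-E 3:
precision = detector_precision(0.98, 0.005, 0.10)
print(f"{precision:.1%}")  # ~95.6% of flagged images are truly AI-generated
```

The sketch shows why both numbers matter: even a very low false-positive rate erodes precision when AI-generated images are rare in the pool being scanned.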
According to OpenAI, image modifications such as compression, cropping and saturation have “minimal impact” on the tool’s performance. However, the company said other types of modifications “may reduce performance.” The company also discovered that its tool was not very good at distinguishing images generated by DALL-E 3 from content generated by other AI models.
“Election concerns absolutely drive all this work,” David Robinson, OpenAI’s head of policy planning, told The Wall Street Journal. “This is the number one context of concern that we hear from policymakers.”
A survey of U.S. voters conducted in February by the AI Policy Institute found that 77% of respondents said that when it comes to AI video generators — such as OpenAI’s Sora, which is not yet publicly available — it is more important to put guardrails and safeguards in place to prevent misuse than to make the models more widely available. More than two-thirds of respondents said that AI model developers should be held legally responsible for any illegal activities.
“It really shows how seriously society takes this technology,” said Daniel Colson, founder and executive director of AIPI. “They think it’s powerful. They have seen how technology companies deploy these models, algorithms and technologies, leading to results that completely change society.”
In addition to launching the detection tool, OpenAI announced on Tuesday that it is joining the steering committee of C2PA, the Coalition for Content Provenance and Authenticity, an organization behind a widely used standard for certifying digital content. The company said it began adding C2PA metadata to images generated and edited by DALL-E 3 in ChatGPT and its API earlier this year, and that C2PA metadata will be integrated into Sora when it is made widely available.