OpenAI is finally making it easier to tell whether a photo was generated with DALL-E 3. The company shared the news this week, noting that it will soon begin adding two types of watermarks to all images generated by DALL-E 3, in line with standards set by the C2PA (Coalition for Content Provenance and Authenticity). The change already applies to images generated via the website and the API; mobile users will start receiving watermarked images on February 12.
The first of the two watermarks appears only in the image metadata; you can inspect an image's creation data using the Verify content credentials tool and similar websites. The second is a visible CR symbol placed in the upper-left corner of the image.
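As a rough illustration of what that metadata watermark is: C2PA content credentials are embedded in the file as a JUMBF box whose description box carries the label "c2pa". The sketch below (helper names are my own, and this is only a crude presence check, not a substitute for a real verifier, which also validates the cryptographic signatures) scans raw image bytes for those markers:

```python
# Crude heuristic for spotting an embedded C2PA manifest.
# C2PA stores content credentials in JUMBF boxes (box type "jumb")
# whose description box is labeled "c2pa"; scanning the raw bytes
# for both strings is a quick presence check, nothing more.

def has_c2pa_manifest(data: bytes) -> bool:
    """Heuristically detect a C2PA JUMBF manifest in raw image bytes."""
    return b"jumb" in data and b"c2pa" in data

def check_file(path: str) -> bool:
    """Read a file from disk and run the heuristic on its bytes."""
    with open(path, "rb") as f:
        return has_c2pa_manifest(f.read())
```

A screenshot of a watermarked image would fail this check, since re-rendering the pixels discards the embedded manifest entirely.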
Source: OpenAI
This is a good change that moves DALL-E 3 in the right direction and makes it easier to correctly identify when something has been created with AI. Other AI systems embed similar watermarks in metadata, and Google has implemented its own watermark to help identify images created by the image generation model that recently came to Google Bard.
At the time of writing, only still images will be watermarked; video and text remain watermark-free. OpenAI says the watermark added to the metadata should not introduce any latency or affect the quality of image generation, though it will slightly increase file sizes in some cases.
If this is the first time you're hearing about it: C2PA is a coalition of companies including Microsoft, Sony, and Adobe. These companies have continued to push for the inclusion of content credential watermarks to help identify whether images were created by artificial intelligence systems. In fact, the content credentials symbol that OpenAI adds to DALL-E 3 images was created by Adobe.
While watermarking helps, it is not a surefire way to prevent misinformation from spreading via AI-generated content. Metadata can still be stripped simply by taking a screenshot, and most visible watermarks can be cropped out of photos. Nevertheless, OpenAI believes these measures will encourage users to recognize that such "signals are key to increasing the trustworthiness of digital information" and will lead to a reduction in abuse of shared systems.