UPDATED 11:20 EST / FEBRUARY 07 2024


OpenAI will now add labels to AI-generated images following Meta

OpenAI said Tuesday that it will begin labeling images generated by its artificial intelligence image generator DALL-E 3, which is embedded within ChatGPT, allowing users and other companies to identify artwork created by the tool.

The move follows a similar announcement the same day by Meta Platforms Inc., which said it would add metadata labels to AI-generated images uploaded to Facebook, Instagram and Threads.

OpenAI said it would use C2PA, a technical standard that allows content publishers to attach metadata to content in order to verify its origins. Any image generated by DALL-E 3, including through ChatGPT, which uses the art generator, will now be labeled with this metadata.

The company said that the labels will be rolled out to all mobile users by Feb. 12.

Sites such as Content Credentials Verify can be used to inspect the metadata attached to images created by OpenAI's tools; the site will confirm that an image was created by DALL-E under an "AI tool used" section.
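For a rough sense of how such checks work at the byte level: C2PA manifests in JPEG files are carried in JUMBF boxes embedded in the file, identified by a "c2pa" label. The sketch below (the function name and file paths are illustrative, not from OpenAI or the C2PA tooling) simply scans a file's raw bytes for that label. It is a heuristic only; real verification, as performed by Content Credentials Verify or the official C2PA SDKs, validates cryptographic signatures and content hashes.

```python
# Naive sketch: does a file appear to carry a C2PA manifest?
# C2PA embeds its manifest store in JUMBF boxes whose label is "c2pa";
# scanning the raw bytes for that label is a quick presence check, not
# real verification (no signature or hash validation is done here).

def has_c2pa_marker(path: str) -> bool:
    """Return True if the file's bytes contain the 'c2pa' JUMBF label."""
    with open(path, "rb") as f:
        data = f.read()
    return b"c2pa" in data
```

As the article notes below, the absence of this marker proves nothing: re-encoding, screenshotting or deliberate stripping all remove the metadata while leaving the image intact.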

The company warned, however, that metadata is not a perfect solution for verifying or detecting AI-generated content. "For example, most social media platforms today remove metadata from uploaded images, and actions like taking a screenshot can also remove it," the company said. "Therefore, an image lacking this metadata may or may not have been generated with ChatGPT or our API."

Metadata can also be erased or edited by users on their own computers or mobile devices. But the company said this shouldn't be a reason to dismiss it as one tool for identifying AI-generated content on the internet. "We believe that adopting these methods for establishing provenance and encouraging users to recognize these signals are key to increasing the trustworthiness of digital information," OpenAI added in its announcement.

The rise of AI-generated images has affected both creators and the way people view content on the internet. Prominent examples include "deepfakes," lifelike images of real people or events that can confuse or mislead. Some have been as whimsical as last year's image of the pope in a puffy white coat, while others have been more problematic, such as fake AI-generated images of former President Donald Trump being arrested, which sparked widespread debate.

By labeling images with C2PA metadata, Meta, OpenAI and other companies will be able to more readily detect and disclose AI-generated content uploaded to their sites. That should help blunt the proliferation of deepfakes and aid fact-checkers.

Metadata labeling differs from another type of content labeling called "watermarking," which embeds an invisible digital code into the image itself. Google DeepMind, Alphabet Inc.'s artificial intelligence research lab, created SynthID, a tool that watermarks images created by Google's Imagen AI image generator model. Unlike metadata, watermarks are much harder to remove.

