Meta will label AI-generated images across Facebook, Instagram and Threads
Meta Platforms Inc. today announced plans to roll out labels that will indicate if images uploaded to Facebook, Instagram and Threads were generated using artificial intelligence tools.
The company will start adding the labels to users’ posts in the coming months. According to Meta, the update is part of a broader effort to better manage the AI-generated content uploaded to its platforms. The initiative will also see the company build automated software tools for detecting such content.
The development effort is set to focus on two open technical standards, C2PA and IPTC. They make it possible to equip an image with metadata, or contextual information, that describes when it was created and related details. That contextual information can also be used to flag whether the image was generated by an AI tool.
According to Meta, its engineers are building classifiers that can detect if an image’s C2PA or IPTC metadata indicates it was created with AI. The company says the classifiers will enable it to spot files generated by a variety of popular image generators. The goal is to “label images from Google, OpenAI, Microsoft, Adobe, Midjourney and Shutterstock as they implement their plans for adding metadata to images created by their tools,” Meta President of Global Affairs Nick Clegg detailed in a blog post.
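The metadata check described above can be illustrated with a minimal sketch. The marker strings below are real identifiers (the IPTC Digital Source Type value for AI-generated media and the C2PA manifest label), but the simple byte scan is an assumption for illustration; a production classifier would parse the metadata containers properly rather than search raw bytes.

```python
def has_ai_provenance(data: bytes) -> bool:
    """Return True if raw image-file bytes contain a known AI-provenance marker.

    This is a toy heuristic, not Meta's actual classifier: real detectors
    parse C2PA manifests and IPTC/XMP metadata structures rather than
    scanning bytes for substrings.
    """
    markers = (
        # IPTC Digital Source Type value for fully AI-generated media
        b"trainedAlgorithmicMedia",
        # Label used for C2PA content-credential manifest boxes
        b"c2pa",
    )
    return any(marker in data for marker in markers)
```

For example, a file whose XMP block declares `trainedAlgorithmicMedia` would be flagged, while an ordinary photo with no such metadata would pass through unlabeled.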
Some AI image generators, such as Meta’s recently introduced Imagine with Meta AI service, embed not only metadata but also invisible watermarks in the files they output. The company detailed today that it’s working on tools designed to make such watermarks more difficult to remove and alter. Additionally, Meta is building software that will be capable of spotting AI-generated content even if it doesn’t contain invisible markers.
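To see why Meta is working to harden watermarks, consider the simplest possible invisible watermark: hiding bits in the least-significant bit of each pixel value. This toy scheme (not Meta's or any vendor's actual method) is imperceptible to viewers but trivially destroyed by editing or recompression, which is exactly the weakness more robust schemes aim to fix.

```python
def embed_watermark(pixels: list[int], bits: list[int]) -> list[int]:
    """Hide watermark bits in the least-significant bit of each pixel value."""
    out = pixels[:]
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return out

def extract_watermark(pixels: list[int], n_bits: int) -> list[int]:
    """Read the hidden bits back out of the first n_bits pixel values."""
    return [p & 1 for p in pixels[:n_bits]]
```

Embedding `[1, 0, 1, 1]` into pixel values `[10, 20, 30, 40]` changes them imperceptibly to `[11, 20, 31, 41]`, and extraction recovers the bits; but any lossy compression that perturbs pixel values by even one unit erases the mark, which is why production watermarks spread the signal across the image in ways that survive such edits.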
The company’s efforts to detect AI-generated content encompass not only images but also audio and video files. According to Meta, AI developers have not yet configured their models to include markers in audio and video output. To address that limitation, Meta will require users to indicate when they upload such content to its platforms.
“We’ll require people to use this disclosure and label tool when they post organic content with a photorealistic video or realistic-sounding audio that was digitally created or altered, and we may apply penalties if they fail to do so,” Clegg wrote. “If we determine that digitally created or altered image, video or audio content creates a particularly high risk of materially deceiving the public on a matter of importance, we may add a more prominent label.”
Other tech giants are also developing tools designed to make AI-generated content easier to detect. Last August, Google LLC’s DeepMind unit debuted SynthID, a machine learning system that can embed an invisible watermark into AI-generated images. The watermark remains intact even if the image is edited or compressed.
Image: Meta