UPDATED 12:45 EDT / AUGUST 30 2023

Google DeepMind unveils tool to watermark and detect AI-generated images

Google DeepMind, Alphabet Inc.’s artificial intelligence research lab, is teaming up with Google Cloud to launch a watermarking tool for AI-generated images that will allow users to identify whether artwork or graphics were produced by an AI model.

In an announcement published Tuesday, DeepMind researchers unveiled SynthID, a beta tool that embeds a digital “watermark” directly into AI-generated images produced by Google’s Imagen generative AI model in Vertex AI, the company’s platform for building AI. In this way, a limited number of users testing the new tool can mark their images as AI-generated.

The system uses two different AI models to do its work. One model generates a carefully designed mark that modifies the original image in a way that is imperceptible to the human eye, but that can be “seen” by another AI model.
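
Conceptually, the embedding step can be pictured as adding a tiny learned residual to the pixel data. The sketch below is purely illustrative: the embedder function stands in for DeepMind’s watermarking network, which has not been published.

import numpy as np

def embedder(image: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in for SynthID's learned embedder network, which
    # would produce a content-aware, imperceptible perturbation.
    rng = np.random.default_rng(seed=0)
    return rng.normal(scale=0.5, size=image.shape)

def embed_watermark(image: np.ndarray) -> np.ndarray:
    # Add the residual to the pixels and clamp back to the valid range;
    # the change is far too subtle for the human eye to notice.
    marked = image.astype(np.float64) + embedder(image)
    return np.clip(marked, 0, 255).astype(np.uint8)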

At the same time, the modification is not easily destroyed by filters – such as those used by Instagram or image editing software – or by resizing or cropping the image.

When users want to see if a particular image was produced and watermarked by Imagen, they can pass it through the tool and it will provide them with a confidence level. There are three levels: the first indicates the image was almost certainly generated by Imagen, the second that no watermark was detected, and the third that the tool possibly detected a watermark and the image should be treated as suspect.
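
In code, that three-way output might look like the following sketch. The score thresholds here are invented for illustration and are not SynthID’s actual values.

def classify_watermark(score: float) -> str:
    # Thresholds of 0.9 and 0.1 are illustrative assumptions only.
    if score >= 0.9:
        return "watermark detected"        # almost certainly produced by Imagen
    if score <= 0.1:
        return "no watermark detected"
    return "watermark possibly detected"   # treat the image as suspect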

Watermarks are among a number of different ways to identify the origin and authenticity of images. Another is metadata: additional data attached to the image, often by software or by a camera. However, metadata can be removed or modified, making it an unreliable way to establish an image’s authenticity.
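
A short Pillow example shows just how fragile metadata is: re-saving a JPEG without explicitly passing its EXIF block back silently discards it.

from PIL import Image

img = Image.open("photo.jpg")
print(dict(img.getexif()))  # camera/software metadata, if any is present

# Pillow does not carry EXIF over to a new file unless it is passed back
# explicitly, so a plain re-save strips the provenance data entirely.
img.save("photo_stripped.jpg")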

Although generative AI has unlocked tremendous creative potential and become a source of widespread delight, it also has the potential for misuse and harm. Image generation models such as Imagen can produce photorealistic, lifelike images as easily as they create fanciful artwork.

More and more fake AI-generated images of political figures have begun to circulate on social media platforms, and it is harder than ever to determine whether they are real. In March, an AI-generated image of Pope Francis wearing a white puffy jacket, created with Midjourney, made the rounds and caused confusion; though largely harmless, it helped reveal the power of these systems.

“Being able to identify AI-generated content is critical to empowering people with knowledge of when they’re interacting with generated media, and for helping prevent the spread of misinformation,” Google DeepMind researchers Sven Gowal and Pushmeet Kohli wrote in the announcement.

The researchers said the two models have been tested against a wide variety of image types to prepare them for real-world use, and optimized for a range of scenarios, including correctly identifying the watermark after an image has been modified and aligning the mark with the original content so that it remains as imperceptible as possible.
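
A robustness check of that kind could be scripted along these lines, where detect is a hypothetical scoring function standing in for SynthID’s detection model.

from PIL import Image, ImageFilter

def survives(detect, img: Image.Image, threshold: float = 0.9) -> bool:
    # `detect` is a hypothetical stand-in for the detection model.
    return detect(img) >= threshold

original = Image.open("watermarked.png")
variants = {
    "resized": original.resize((original.width // 2, original.height // 2)),
    "cropped": original.crop((16, 16, original.width - 16, original.height - 16)),
    "filtered": original.filter(ImageFilter.GaussianBlur(radius=2)),
}
for name, variant in variants.items():
    # Swap the placeholder lambda for a real detector to test survival.
    print(name, survives(lambda im: 1.0, variant))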

“This is a significant announcement by Google,” Arun Chandrasekaran, distinguished vice president analyst at Gartner, told SiliconANGLE. “Clients that use Google’s text to image diffusion model, Imagen, now have a choice of adding watermark. Given the rise of deepfakes and increasing regulations across the globe, watermarking is an important step in combating deepfakes.”

Google is one of seven large tech firms that made voluntary commitments to AI safety as part of a White House initiative in July. The company also joined Microsoft Corp., ChatGPT developer OpenAI LP and artificial intelligence research startup Anthropic in founding the Frontier Model Forum, an industry body dedicated to the safe and responsible development of AI models. The technology also comes as the European Union prepares to formalize its “AI Act” legislative framework to ensure safer use of AI across its member countries.

Chandrasekaran said it’s still a “wait and see” situation when it comes to the robustness of the watermark technology DeepMind has produced, which the researchers themselves warned is not foolproof against all types of image manipulation. “Also, the watermark is specific to Google’s model and hopefully the technology companies will collaborate on standards that work across AI models,” Chandrasekaran added.

“We hope our SynthID technology can work together with a broad range of solutions for creators and users across society, and we’re continuing to evolve SynthID by gathering feedback from users, enhancing its capabilities, and exploring new features,” the researchers said.

Image: Google DeepMind
