Google joins C2PA’s steering committee to boost transparency of AI-generated content
Google LLC has revealed that it’s joining the Coalition for Content Provenance and Authenticity, otherwise known as C2PA.
The company is signing on as a member of its steering committee, where it will work with other members on ways to improve the transparency of digital content.
One of Google’s primary aims is to advance the C2PA’s technical standard for digital content provenance, alongside fellow steering committee members, including technology firms such as Adobe Inc., Intel Corp., Meta Platforms Inc., Microsoft Corp. and Sony Corp., plus media organizations like the BBC and Publicis Groupe.
The C2PA is developing a system called “Content Credentials,” a tamper-evident metadata standard that can be attached to digital content to record its creation and modification history.
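To give a rough sense of what that metadata looks like in practice, here is a minimal Python sketch that reads a Content Credentials manifest store after it has been exported to JSON (for instance with the coalition's open-source c2patool) and prints the claim generator and recorded actions. This is an illustration, not anything Google or the C2PA has published: the field names such as active_manifest, claim_generator and c2pa.actions follow the C2PA specification's manifest layout, but the exact structure can vary by tool and version, and the file name used here is hypothetical.

```python
import json


def summarize_manifest_store(path: str) -> None:
    """Print a short summary of a C2PA manifest store exported to JSON.

    Field names assume the layout used by the C2PA specification and
    c2patool; they may differ across tools and versions.
    """
    with open(path, "r", encoding="utf-8") as f:
        store = json.load(f)

    # The store points at one "active" manifest describing the asset as it
    # exists now; earlier manifests record its prior history.
    active_id = store.get("active_manifest")
    manifest = store.get("manifests", {}).get(active_id, {})

    # The claim generator identifies the app or model that produced the
    # asset, such as an image editor or a generative AI service.
    print("Active manifest:", active_id)
    print("Claim generator:", manifest.get("claim_generator", "unknown"))

    # Assertions carry the provenance details; the "c2pa.actions" assertion
    # lists what was done to the content (created, edited, converted, ...).
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") == "c2pa.actions":
            for action in assertion.get("data", {}).get("actions", []):
                print("Recorded action:", action.get("action"))


if __name__ == "__main__":
    # Hypothetical file name; a real manifest store could be exported
    # from an asset using the C2PA's open-source tooling.
    summarize_manifest_store("manifest_store.json")
```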
Google’s announcement comes at a time when the internet is becoming increasingly awash with artificial intelligence-generated content. Generative AI models can be used by almost anyone to create text, images and videos that look and feel extremely genuine, and there are concerns that they are contributing to the spread of misinformation. With the U.S. and other nations heading towards elections later this year, there’s a critical need to be able to identify AI-generated content. For instance, Adobe’s AI suite Firefly was used to generate one billion images within three months of launching, while OpenAI has said that people create over two million images daily using its DALL-E 2 model.
Google said it’s exploring how it can integrate the C2PA’s Content Credentials into products and services such as Gemini, its recently announced competitor to OpenAI’s ChatGPT. The idea is to help promote Content Credentials as a resource for understanding the provenance of digital content, the company said.
Google has carried out its own research into ways to improve transparency around digital content. Such initiatives include DeepMind’s SynthID, the “About this Image” function in Google Search, and YouTube’s labels on modified or synthetic content.
The decision to join the C2PA “builds on our work in this space… to provide important context to people, helping them make more informed decisions,” said Laurie Richardson, vice president of trust and safety at Google.
C2PA Chair Andrew Jenks said Google’s participation represents a key endorsement of the organization’s approach. “We encourage others to join us in expanding the use of Content Credentials and contributing to the creation of a safer, more transparent digital ecosystem,” he said.
Google’s decision to sign up to the C2PA suggests that the coalition is gaining momentum. Recently, Meta Platforms implemented the C2PA standard in its social media platforms, while OpenAI has done the same with its DALL-E 3 image generator.
One of the C2PA’s most important aims now is to develop methods that make its digital watermarks visible to the human eye. That’s a tricky balancing act, though, as the embedded metadata must be noticeable without disrupting the content itself.
It should be noted that while the C2PA’s initiative is important, metadata is far from a foolproof solution to the problem of AI-generated misinformation. In practice, it’s more of a speed bump than a real safeguard, since it can easily be stripped from any AI-generated image. OpenAI, for example, has pointed out in the past that doing so is as simple as taking a screenshot of the image in question and uploading it somewhere else.
Image: C2PA