UPDATED 18:41 EST / SEPTEMBER 25 2019

AI

Google releases videos to help researchers create better ‘deepfake’ detection tools

Google LLC is trying to combat the rise of so-called “deepfake” videos on the web with the release of a tranche of example clips it has made that researchers can use to develop more sophisticated detection tools.

Deepfake is the name of a technique for human image synthesis that relies on artificial intelligence. It’s used to combine and superimpose existing images and videos onto source images or videos using a machine learning technique known as a generative adversarial network.
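The adversarial setup described above pits two models against each other: a generator that fabricates samples and a discriminator that tries to tell them from real data. The sketch below is a deliberately tiny, numpy-only illustration of that training dynamic on 1-D data, not a real deepfake pipeline; the linear generator, logistic discriminator, and manual gradient steps are all simplifying assumptions for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# "Real" data: samples from N(4, 0.5). A real GAN would use images.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator g(z) = a*z + b maps noise z ~ N(0, 1) to fake samples.
a, b = 1.0, 0.0
# Discriminator d(x) = sigmoid(w*x + c) scores "realness" in (0, 1).
w, c = 0.1, 0.0

lr, n = 0.01, 64
for step in range(3000):
    # Discriminator update: push d(real) toward 1 and d(fake) toward 0.
    xr = real_batch(n)
    z = rng.normal(0.0, 1.0, n)
    xf = a * z + b
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    w -= lr * np.mean(-(1 - dr) * xr + df * xf)
    c -= lr * np.mean(-(1 - dr) + df)

    # Generator update: push d(fake) toward 1, i.e. fool the discriminator.
    z = rng.normal(0.0, 1.0, n)
    xf = a * z + b
    df = sigmoid(w * xf + c)
    a -= lr * np.mean(-(1 - df) * w * z)
    b -= lr * np.mean(-(1 - df) * w)

fake = a * rng.normal(0.0, 1.0, 1000) + b
print("fake sample mean:", round(float(fake.mean()), 2), "(real mean is 4.0)")
```

Over training, the generator's output distribution drifts toward the real data as the discriminator's feedback improves, which is the same pressure that makes deepfake faces increasingly hard to distinguish from genuine footage.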

The resulting videos can be extremely realistic. The technique has been used in the past to create fake celebrity pornographic videos and revenge porn. Deepfakes are also used to create fake news reports.

In a blog post, Google said it’s releasing about 3,000 deepfake videos to the public. The videos, which feature real actors, were created in collaboration with Jigsaw, a company that emerged from Google’s incubator program.

“To make this dataset, over the past year we worked with paid and consenting actors to record hundreds of videos,” Google says. “Using publicly available deepfake generation methods, we then created thousands of deepfakes from these videos.”

The dataset has been incorporated into the new FaceForensics benchmark, a project run by the Technical University of Munich and the University of Naples Federico II. Researchers are free to access the videos for use in developing deepfake detection technology.
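A labeled corpus of real and faked clips like this one is typically used to train a binary real-vs-fake classifier. The sketch below shows that idea in miniature: it stands in for the real task with synthetic 8-dimensional "frame features" (a hypothetical proxy for artifacts a detector might measure, such as blending seams or compression inconsistencies) and fits a simple logistic-regression detector by gradient descent. It is an illustration of the workflow, not the FaceForensics benchmark's actual method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-frame features: fake frames are assumed to have a
# small shift in feature space relative to real ones.
DIM = 8
def features(n, fake):
    shift = 0.5 if fake else 0.0
    return rng.normal(shift, 1.0, (n, DIM))

# Labeled training set: 0 = real, 1 = fake.
X = np.vstack([features(1000, False), features(1000, True)])
y = np.concatenate([np.zeros(1000), np.ones(1000)])

# Logistic-regression detector trained with plain gradient descent.
w = np.zeros(DIM)
bias = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + bias)))
    w -= lr * (X.T @ (p - y)) / len(y)
    bias -= lr * np.mean(p - y)

# Evaluate on a fresh held-out set.
Xt = np.vstack([features(500, False), features(500, True)])
yt = np.concatenate([np.zeros(500), np.ones(500)])
pt = 1.0 / (1.0 + np.exp(-(Xt @ w + bias)))
acc = float(np.mean((pt > 0.5) == yt))
print(f"held-out accuracy: {acc:.2f}")
```

In practice, detectors built on datasets like Google’s replace the synthetic features here with learned representations from deep networks, but the train-on-labeled-pairs, evaluate-on-held-out-clips loop is the same.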

Google’s videos should be useful, since many of them appear to be incredibly realistic, especially to viewers who have no point of reference. In the examples below, the top video shows a real actor, while the bottom clip contains an artificially created deepfake.

[Video comparison: real actor (top) vs. deepfake (bottom)]

Constellation Research Inc. analyst Holger Mueller told SiliconANGLE today’s release shows Google is living up to its previously published AI principles, which are to help society deal with the unintended consequences of AI specifically and technology in general.

“Deepfakes are quickly becoming an issue, both for voice and video, so it’s good to see that technology can not only create them, but even better detect and help fight them,” Mueller said. “In the quest to capture deepfakes, both tests and data are required so that providers can train their models and enterprises and governments can understand how well they are doing. It will be good to see the first results of offerings built on this data set.”

Google said it will continue to add new videos to the dataset to keep up with advances in deepfake technology. That’s important too, because detection methods that work today may not be so effective in the future.

“We firmly believe in supporting a thriving research community around mitigating potential harms from misuses of synthetic media, and today’s release of our deepfake dataset in the FaceForensics benchmark is an important step in that direction,” Google said.

Google isn’t the only big tech company that feels it has a responsibility to help us detect fake content. Facebook Inc. this month said it’s planning to release a dataset of deepfake videos of its own. The social media firm also announced a “Deepfake Detection Challenge” with up to $10 million in prizes and grants for those who can create effective detection tools.

Images: Google
