TrueMedia.org aims to use AI to fight political deepfakes
A new nonprofit, nonpartisan organization called TrueMedia announced today that it's developing a tool that will use artificial intelligence to detect deepfakes, with the aim of protecting the 2024 U.S. election cycle from misinformation.
Deepfakes are fake video, photo or audio recordings produced by AI and designed to trick people into believing that someone did or said something they never did, or to depict believable but false events. The term is a blend of “deep learning” and “fake,” and the emergence of generative AI technology has made creating such content easier than ever before.
“The cost of AI-based forgery has plunged sharply during one of the most important political elections in history,” TrueMedia said in the announcement. “As a result, we anticipate a tsunami of disinformation.”
Founded by Oren Etzioni, a professor at the University of Washington and former chief executive of the Allen Institute for AI, the organization is backed by Camp.org, the charitable nonprofit foundation of Uber co-founder Garrett Camp.
TrueMedia plans to use AI to analyze political deepfake content in order to train a tool that can detect audio and video deepfakes. For now, the group is asking the public to submit examples of political deepfakes to add to its existing collection of known fakes.
The group plans to release a free, web-based tool in the first quarter of this year that will draw on AI techniques such as computer vision and audio analysis. The tool will roll out first to journalists, fact-checkers and online influencers, with a broader public release planned for later in the year.
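TrueMedia hasn’t published technical details of its detector, but for readers curious what frame-level video analysis can look like in principle, the minimal Python sketch below samples frames from a clip, scores each one with a binary real-versus-fake image classifier and averages the results. Everything here is assumed for illustration only: the ResNet-18 backbone, the “detector.pt” checkpoint and the “clip.mp4” input are placeholders, not anything the group has described.

```python
# Illustrative sketch only -- not TrueMedia's detector, whose design is unpublished.
# Assumes a hypothetical binary (real/fake) classifier fine-tuned offline;
# "detector.pt" and "clip.mp4" are placeholder file names.
import cv2
import torch
import torchvision.models as models
import torchvision.transforms as T

# Standard image backbone with a single-logit head for "probability of fake."
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 1)
model.load_state_dict(torch.load("detector.pt", map_location="cpu"))  # hypothetical checkpoint
model.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def score_video(path: str, stride: int = 30) -> float:
    """Return the mean per-frame 'fake' probability for a video file."""
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:  # sample roughly one frame per second at 30 fps
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                scores.append(torch.sigmoid(model(batch)).item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

print(f"Estimated probability of manipulation: {score_video('clip.mp4'):.2f}")
```

Production systems typically go much further, combining face-region analysis, audio forensics and provenance signals, which is closer to the mix of computer vision and audio analysis the group describes.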
Voters in the United States have already been affected by deepfakes even as the election season ramps up. A faked audio recording of President Joe Biden was played to New Hampshire voters in a robocall earlier this month, telling them not to vote in the presidential primary, NBC News reported.
In 2023, a deepfake video of Sen. Elizabeth Warren circulated on X, formerly Twitter, in which she appeared to say that Republicans should be barred from voting in the 2024 presidential election. The video was quickly identified as fake and labeled “altered audio,” but it still racked up about 189,000 views in a week.
Political campaigns have also begun using AI-generated audio and video in their ads. In response, Google LLC now requires that such advertisements prominently disclose that they were produced with AI. Although these ads are not deepfakes in the strict sense, they can have a similar effect on viewers.
The issue has already attracted the attention of lawmakers and regulators. A recent Senate bill co-sponsored by Sen. Amy Klobuchar and Sen. Josh Hawley would go as far as banning “materially deceptive” deepfakes relating to federal candidates, with exceptions for parody and satire. And the Federal Election Commission opened a rulemaking petition to public comment in August with an eye toward developing rules for AI-generated deceptive deepfakes in political ads.
Photo: Pixabay