Facebook’s ban on deepfakes doesn’t impress critics
Despite Facebook Inc.’s vow Monday that there’s no room for deepfake videos on the platform, critics think its ban doesn’t go far enough.
The company acknowledged in a blog post that manipulated media is a rising concern, especially as the 2020 U.S. presidential election approaches. Last year, Facebook came under pressure after a manipulated video of House Speaker Nancy Pelosi surfaced on the platform and wasn’t immediately taken down.
“As a result of these partnerships and discussions, we are strengthening our policy toward misleading manipulated videos that have been identified as deepfakes,” Facebook said in the post. “Going forward, we will remove misleading manipulated media.”
The criteria for removal were simple enough. Facebook said it would take down manipulated media that would likely mislead average people into thinking a person said something they actually didn’t. Media will also be removed if artificial intelligence has been employed to merge, replace or superimpose images into a video.
“This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words,” said Facebook. Content that isn’t immediately removed will still be subject to review and may later be taken down. If by that time the video has been shared numerous times, anyone who has shared it or viewed it will be notified that it’s a fake.
On Wednesday, the House Committee on Energy and Commerce held a hearing to discuss such deceptions in digital media, with a representative from Facebook in attendance. During the hearing, big tech was accused of failing to respond adequately to the “grave threat” of deceptive media.
Others have said that though deepfakes take some technological nous to create, “shallowfakes” or “cheapfakes” can be made by just about anyone who can grasp how certain apps work. The Nancy Pelosi video, people have pointed out, didn’t require a mastermind to create. Facebook’s new policy doesn’t address these lower-quality fakes.
“I am concerned that Facebook’s latest effort to tackle misinformation leaves a lot out,” subcommittee Chairwoman Jan Schakowsky said at the hearing. Facebook also came under fire for not detecting such content fast enough. The company’s representative acknowledged that was still a problem but said Facebook was improving.