POLICY
Meta Platforms Inc. has been warned by its Oversight Board that it needs to do much more about the "proliferation" of deepfake videos, made with artificial intelligence tools, being shared on its platforms.
The 21-person board told the company today that its current policies around misinformation aren't enough. It said Meta should invest in better detection tools that can reliably flag deepfake content and introduce digital watermarks for machine-generated media.
“As the quantity and quality of AI-generated content increase, its impact on people and societies will be profound,” the board wrote. “The risks are heightened when deepfake output designed to deceive, manipulate or increase engagement is shared during conflicts and crises, such as in Iran and Venezuela in 2026, and spreads rapidly on different companies’ platforms.”
The board called the Iran-Israel war of June 2025 an "inflection point" for deceptive AI-generated content. During the conflict, an AI-generated video showing damaged buildings in the Israeli city of Haifa was viewed about 700,000 times. The clip purported to come from a news outlet, but that outlet turned out to be a group based in the Philippines.
The video was reported to Meta, but it was neither removed nor labeled as high-risk even though it was clearly AI-generated. The board overturned that decision and the video was eventually labeled correctly, but the board warns that in times of crisis, these processes are too slow.
Meta does have “AI Info” labels for such content, but the board believes the process is “neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content, particularly during a crisis or conflict where there is heightened engagement on the platform.”
The company was also accused of inconsistently implementing watermarks on AI-generated content and told it needs more thorough detection tools. In its statement, Meta said it will follow the board's suggestions and implement the changes the board seeks when "it is technically and operationally possible to do so."
Today, Google LLC-owned YouTube also addressed AI-generated deepfakes, announcing it has just introduced a deepfake detection tool for public figures whose likenesses have been used in AI-generated videos. Once a video is flagged, YouTube will take it down if the content isn't protected under free expression standards; otherwise, it might only receive a label.
The tool will first be rolled out to a pilot group of testers, including politicians, government officials and journalists, and may become more widely available in the future.