Mark Zuckerberg outlines his strategy to keep Facebook abuse-free
Facebook Inc. Chief Executive Mark Zuckerberg released what could be called a manifesto Thursday, outlining how the company will fight election interference on the platform and keep misinformation at bay.
Zuckerberg talked of “foreign actors” such as Russia-based operators that had paid for political ads during the U.S. presidential election.
This time, he contended, Facebook is on the ball, having already identified and removed fake accounts in the run-up to elections in France, Germany, Alabama, Mexico and Brazil. Facebook has also been vigilant regarding nefarious accounts coming from Russia and Iran.
“We’ve set a new standard for transparency in the advertising industry — so advertisers are accountable for the ads they run,” wrote Zuckerberg. “Security experts call this ‘defense in depth’ because no one tactic is going to prevent all of the abuse.”
The machine learning systems Facebook employs, he said, currently block millions of accounts daily, often within minutes of their creation. Mirroring some of his critics in government, he wrote, “Like most security issues, this is an arms race.”
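Zuckerberg’s post does not describe how those models work, but the general pattern of scoring a brand-new account and blocking it when the modeled risk is high can be illustrated with a toy classifier. The sketch below is purely illustrative and is not Facebook’s system: the signals, training data and threshold are all invented for the example.

```python
# Illustrative sketch only: a toy risk classifier for newly created accounts,
# NOT Facebook's actual system. All feature names, data and thresholds are
# invented for demonstration purposes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical signals observed at signup time (all invented):
# [accounts_from_same_ip_last_hour, profile_fields_filled, friend_requests_first_10_min]
X_train = np.array([
    [1, 8, 2],    # typical legitimate signup
    [2, 6, 1],
    [40, 1, 60],  # bulk-created, sparse profile, spammy behavior
    [55, 0, 80],
])
y_train = np.array([0, 0, 1, 1])  # 0 = legitimate, 1 = fake

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

def should_block(signup_features, threshold=0.9):
    """Block the account at creation time if the modeled risk is high."""
    risk = model.predict_proba([signup_features])[0][1]
    return risk >= threshold

print(should_block([50, 0, 75]))  # likely True: looks like bulk fake-account creation
print(should_block([1, 7, 1]))    # likely False: looks like a normal signup
```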
But the bad actors’ subterfuge keeps evolving, keeping Facebook on its toes. These bad actors hijack posts about legitimate protests, and often they’re not breaking any community standards rules. Such “inauthentic” campaigns are not easy to crack, he wrote, but that’s where Facebook’s newly hired legion of human moderators steps in.
He listed several recent takedowns:
- We identified that the Internet Research Agency (IRA) has been largely focused on manipulating people in Russia and other Russian-speaking countries. We took down a network of more than 270 of their pages and accounts, including the official pages of Russian state-sanctioned news organizations that we determined were effectively controlled and operated by the IRA.
- We found a network based in Iran with links to Iranian state media that has been trying to spread propaganda in the US, UK, and Middle East, and we took down hundreds of accounts, pages, and groups, including some linked to Iran’s state-sponsored media.
- We recently took down a network of accounts in Brazil that was hiding its identity and spreading misinformation ahead of the country’s Presidential elections in October.
- Although not directly related to elections, we identified and removed a coordinated campaign in Myanmar by the military to spread propaganda.
As for the issue of “fake news,” the crude term for misinformation, Zuckerberg said the main focus was to prevent the dissemination of content that might lead to real-world violence. But much of the time, those spreading hyperbolic or outright sensationalist lies are in it for the money, so Zuckerberg said the “key is to disrupt their economic incentives.”
Facebook employs third-party fact-checkers when news is flagged as possibly erroneous. That’s nothing new, but on ads, Zuckerberg said, Facebook is now ahead of the game. “As a result of changes we’ve made this year, Facebook now has a higher standard of ads transparency than has ever existed with TV or newspaper ads,” he wrote.
Users can see where an ad came from, who paid for it and whether a particular page seems to be saying lots of different things to different groups of people. It’s worth remembering that some of those bad actors in the past were not taking a particular side but were more interested in sowing division, at least in the U.S.
Zuckerberg said Facebook at one point even contemplated banning political ads altogether. “This seemed simple and attractive. But we decided against it,” he said. “Not due to money, as this new verification process is costly and so we no longer make any meaningful profit on political ads — but because we believe in giving people a voice.”
Detecting abuse, he said, will require a little help from friends. That’s why independent researchers and foundations are continually monitoring Facebook in partnership with the company.
Those researchers, he wrote, can “draw their own conclusions about our role in elections, including our effectiveness in preventing abuse,” and can “publish their work without requiring approval from us.”
Interestingly, he then turned to the matter of abuse across all online services. “Bad actors don’t restrict themselves to one service, so we can’t approach the problem in silos either,” he said. “If a foreign actor is running a coordinated information campaign online, they will almost certainly use multiple different internet services.”
This is everyone’s problem, and all services have to band together and share intelligence, he noted, sounding more like someone who works in the Pentagon than in Silicon Valley.
He went on to say that law enforcement has a role, too, as foreign adversaries might well set up a company in the U.S. and create legitimate Facebook accounts. Sometimes Facebook might not be the first to spot trouble, and that’s where government, other tech companies or even journalists can act as a check on abuse.
In conclusion, he waxed philosophical: “One of the important lessons I’ve learned is that when you build services that connect billions of people across countries and cultures, you’re going to see all of the good humanity is capable of, and you’re also going to see people try to abuse those services in every way possible.”
Image: Anthony Quintano via Flickr