Facebook looks to AI to solve its fake news problem
A few weeks ago, Facebook Inc. Chief Executive Mark Zuckerberg downplayed complaints about fake news on Facebook, claiming that “more than 99 percent of what people see is authentic.” Now it seems that Facebook is taking those complaints a little more seriously.
Company executives outlined some of the social network’s plans for dealing with fake news using its expertise in artificial intelligence.
During a roundtable discussion with reporters, Yann LeCun, director of artificial intelligence at Facebook, said that the company already has tools that could be used to automatically weed out fake news stories or offensive live video feeds. Facebook has an interest in controlling the content on its platform, but the company also worries that such tools could swing too far in the other direction and become a form of censorship, a charge Facebook has already faced several times in the past.
“What’s the trade-off between filtering and censorship? Freedom of expression and decency?” LeCun told reporters, according to the Wall Street Journal. “The technology either exists or can be developed. But then the question is how does it make sense to deploy it? And this isn’t my department.”
Joaquin Candela, Facebook’s director of applied machine learning, explained on Thursday that the social network has also been increasingly using AI tools to detect a wide range of offensive material on Facebook Live, including violence, nudity and other material that violates Facebook’s community policies. Candela noted that this technology is still in its infancy and faces a couple of challenges.
“One, your computer vision algorithm has to be fast, and I think we can push there,” Candela said, “and the other one is you need to prioritize things in the right way so that a human looks at it, an expert who understands our policies, and takes it down.”
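The approach Candela describes, a fast automated classifier whose output is ranked so that human policy experts see the riskiest items first, can be sketched roughly as follows. This is a minimal illustration under assumptions, not Facebook’s actual system; the classifier stub, threshold and queue structure are all hypothetical.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class ReviewItem:
    priority: float                      # negative score, so riskiest items pop first
    content_id: str = field(compare=False)

def classify(frame) -> float:
    """Placeholder for a fast computer-vision model that returns a
    probability that the frame violates community policies."""
    return 0.0  # stub: a real system would run a trained model here

class ModerationQueue:
    """Two-stage pipeline: fast automated scoring, then prioritized human review."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self._heap: list[ReviewItem] = []

    def ingest(self, content_id: str, frame) -> None:
        score = classify(frame)
        if score >= self.threshold:
            # Min-heap on the negated score so higher-risk items are reviewed first.
            heapq.heappush(self._heap, ReviewItem(-score, content_id))

    def next_for_human_review(self) -> str | None:
        """A human reviewer who understands the policies makes the final call."""
        if self._heap:
            return heapq.heappop(self._heap).content_id
        return None
```

The key design point, as Candela frames it, is that the automated model only prioritizes; removal decisions still rest with a human expert who understands the policies.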
Moderation vs. Censorship
The fine line between moderation and censorship is something every community-driven platform must contend with, and after past accusations of manipulating content, it is easy to see why Facebook would want to be careful before it starts removing content en masse.
After all, Facebook would not want to go down the same route as Reddit CEO Steve Huffman, who recently admitted to editing posts by users in order to “give bullies a hard time.” Huffman apparently had not learned from the scandal surrounding former Reddit CEO Ellen Pao, whose pro-censorship stance clashed with the site’s notoriously outspoken user base.
If Facebook wants to avoid similar conflicts with users, it will have to find a way to remove harmful content without censoring questionable yet legitimate material.