UPDATED 23:09 EST / MAY 15 2018

Facebook details abuse on platform – and AI falls short in flagging some of it

Facebook Inc. on Tuesday released its first Community Standards Enforcement Report, detailing what its algorithms and human moderators detected and removed from October of last year through March of this year.

The upshot: a lot.

Facebook said it took down 583 million fake accounts, most of which were removed within minutes of being set up. Some 837 million pieces of spam were removed from the site as well, although the company admitted that during this period 3 to 4 percent of accounts on the platform were still fake.

Facebook’s legion of newly hired moderators is thought to be costing the company hundreds of millions of dollars. According to Guy Rosen, Facebook’s vice president of product management, those moderators have played an essential part because artificial intelligence is not yet consistently able to spot bad actors.

Where it fails the most is with hate speech, Rosen said in a post. Facebook took down 2.5 million pieces of hate speech in the first quarter, only 38 percent of which was flagged by AI. He added that AI can’t always differentiate between someone discussing an issue to raise awareness and someone promulgating hatred.

“Technology needs large amounts of training data to recognize meaningful patterns of behavior, which we often lack in less widely used languages or for cases that are not often reported,” said Rosen. It’s the same, he added, with porn, spam, fake accounts and bad actors who knowingly break the rules by continually changing their tactics to get past the technology.
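
To make that limitation concrete, here’s a minimal, purely illustrative sketch in Python, using scikit-learn, of the kind of supervised text classification Rosen is describing. The pipeline, example posts and labels are hypothetical, not Facebook’s actual system; the point is simply that a model starved of labeled examples can’t reliably separate condemnation from promotion:

# Toy illustration of the training-data problem Rosen describes.
# This is NOT Facebook's system; it's a generic text classifier
# trained on a handful of hypothetical labeled posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labels: 1 = violating, 0 = benign. Production systems
# need vast labeled corpora per language; four rows is hopelessly sparse.
posts = [
    "we must drive them out of our country",          # violating
    "report: slurs against this group rose sharply",  # benign, awareness
    "they are vermin and deserve nothing",            # violating
    "how to support victims of hate speech",          # benign
]
labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Two sentences with opposite intent but overlapping vocabulary.
# With this little training data, expect unstable, near-chance
# probabilities: the model has seen too few examples to separate
# raising awareness from promulgating hatred.
print(model.predict_proba([
    "this group should be banned from existence",
    "activists condemn calls for this group to be banned",
]))

Until enough labeled data exists for a given language or abuse type, that gap is exactly what Facebook’s human reviewers are paid to fill.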

About 21 million pieces of content involving adult nudity and sexual activity were also removed, 96 percent of which was flagged by technology. The report estimated that of every 10,000 pieces of content viewed, seven to nine views were of material in this category, or roughly 0.07 to 0.09 percent.

Removing graphic violence was also a success: Facebook either took down or applied a warning label to 3.5 million pieces of content, 86 percent of which was flagged by the technology before anyone reported it.

It’s thought that by the end of the year, Facebook will have about 20,000 employees on its human moderation team, an unprecedented number for the company. In April, Rosen outlined the importance of having human eyes on posts.

Facebook doesn’t want to “stifle speech and potentially productive exchange of ideas,” said Rosen. Moreover, the site doesn’t want to seem too oppressive regarding some forms of nudity or discussions of identity, race, religion and sexuality, so humans are still very much needed to separate the purely destructive or obscene from the well-intentioned.

Image: Poster Boy via Flickr
