UPDATED 13:03 EDT / FEBRUARY 11 2021

AI

Facebook report reveals how it uses AI to fight harassment, hate speech

Facebook Inc. today outlined its escalating efforts, partly using improved artificial intelligence techniques, to prevent hate speech, bullying, harassment, and violent and graphic content on its platform.

The social network giant this morning released its quarterly Community Standards Enforcement Report covering the fourth quarter of 2020, including metrics from October through December for Facebook and Instagram. The report tracks 12 policy areas, including hate speech, organized hate, bullying and harassment, adult nudity, child nudity, spam, drugs, firearms, terrorism, fake accounts, suicide and self-injury, and violent and graphic content.

The company reported improvements in its proactive detection rates, most notably for bullying and harassment, where the rate rose to 49% in the fourth quarter from 26% in the third quarter, an increase it attributes to improved artificial intelligence technology.
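For context, the proactive rate is the share of actioned content that Facebook's systems found before any user reported it. A minimal sketch of the arithmetic, using counts back-derived from the totals and percentages in this report (the proactive counts themselves are illustrative, not official figures):

```python
def proactive_rate(found_proactively: int, total_actioned: int) -> float:
    """Share of actioned content that automated systems found
    before any user reported it."""
    return found_proactively / total_actioned

# Proactive counts are back-derived from the report's totals and
# percentages for illustration only.
q3 = proactive_rate(910_000, 3_500_000)    # 3.5M pieces actioned in Q3
q4 = proactive_rate(3_087_000, 6_300_000)  # 6.3M pieces actioned in Q4
print(f"Q3: {q3:.0%}  Q4: {q4:.0%}")       # Q3: 26%  Q4: 49%
```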

Facebook also seeks to lead the industry in transparency: it is one of the few companies to publish figures on how much violating content it misses when pulling down apparently dangerous posts, and on how many appeals it afforded users when it made mistakes.

In the fourth quarter, Facebook took action on 6.3 million pieces of bullying and harassment content, up from 3.5 million in the third quarter, in part thanks to its artificial intelligence technology. It removed 6.4 million pieces of organized hate content, up from 4 million in the third quarter. And it removed 26.9 million pieces of hate speech content, up from 22.1 million, due in part to technology updates that improved detection in Arabic, Spanish and Portuguese.

As for Instagram, 5 million pieces of bullying and harassment content were removed, up from 2.6 million in the third quarter, thanks to technology improvements; 308,000 pieces of organized hate content were removed, up from 224,000; and 6.6 million pieces of hate speech content were removed, up from 6.5 million. The company also caught 3.4 million pieces of suicide and self-injury-related content, up from 1.3 million the previous quarter, thanks to increased reviewer capacity.

The pandemic has also affected Facebook’s review workforce, but the company has been slowly regaining its human review capacity. For now, it is prioritizing the most context-sensitive content, such as suicide and self-injury posts, for human review until vaccines become more widely available.

During 2020, Facebook’s engineers improved the company’s context-aware AI so it can analyze the text, images and other details of a post together rather than in isolation. The systems combine proactive and reactive analysis with historical knowledge of user behavior, alongside human review, to produce better outcomes.
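Facebook hasn’t published the internals of these models, so the following is only a toy sketch of the general multimodal-fusion pattern; every name, dimension and weight here is an illustrative assumption, not Facebook code. The idea is that a single classifier sees text and image features jointly, so it can catch violations that neither modality reveals alone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for real trained encoders (e.g., a text transformer and an
# image model); random projections here, purely for illustration.
W_text = rng.normal(size=(300, 64))    # 300-dim text features -> 64 dims
W_image = rng.normal(size=(2048, 64))  # 2048-dim image features -> 64 dims
w_out = rng.normal(size=128)           # linear classifier over fused vector

def violation_score(text_feats: np.ndarray, image_feats: np.ndarray) -> float:
    """Early fusion: concatenate both modality embeddings and score them
    with one classifier, e.g. to flag benign text paired with a
    threatening image."""
    fused = np.concatenate([text_feats @ W_text, image_feats @ W_image])
    logit = fused @ w_out
    return 1.0 / (1.0 + np.exp(-logit))  # sigmoid -> probability-like score

score = violation_score(rng.normal(size=300), rng.normal(size=2048))
print(f"violation score: {score:.3f}")
```

A production system would use trained encoders and would typically route high-scoring posts to human review rather than removing them automatically.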

“The results of these efforts are apparent in the numbers released today — in the first three months of 2020, our systems spotted just 16% of the bullying and harassment content that we took action on before anyone reported it,” said Mike Schroepfer, chief technology officer at Facebook.

Attempting to identify posts that violate policies is challenging even in the best of times because human speech and behavior are messy. The policies cover, for example, misinformation in countries holding elections and false claims about COVID-19 vaccines, as well as harder perceptual problems such as detecting human nudity in posts.

“Like so much of the most important technological progress, this work wasn’t revolutionary but evolutionary,” said Schroepfer. “Our teams brought together better training data, better features, and better AI models to produce a system that is better at analyzing comments and continuously learning from new data.”

At the same time, Facebook doesn’t want to pull down newsworthy posts that include human nudity, or remove violent and graphic content that documents war zones or riots taking place around the world. Those posts come from people describing what is happening on their own streets and affecting their lives, not from users publishing violent content to threaten harm in violation of Facebook’s policies.

As for regulation that would hold Facebook accountable for balancing freedom of speech against combating harmful content such as harassment, bullying and misinformation, the company said it’s open to being part of frameworks that keep that balance in mind. Determining whether content is harmful remains a massive challenge for AI and machine learning, even with human review, because there is a person on the other end of every decision.

Facebook’s AI systems, and even its human reviewers, are not always correct, which is why the quarterly report includes a “Correcting Mistakes” section. There, Facebook cites how much content users appealed and how much of that content it restored after appeal. The report does not, however, detail what that content was, what went into those decisions or what steps were taken.
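The report gives only aggregate counts, but the implied error measure is simple. A sketch with purely hypothetical numbers (the real report lists appealed and restored content per policy area):

```python
# Hypothetical counts for illustration only.
appealed = 100_000               # removals that users appealed
restored_after_appeal = 15_000   # removals reversed after those appeals

restore_rate = restored_after_appeal / appealed
print(f"share of appealed content restored: {restore_rate:.0%}")  # 15%
```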

When users are affected by policy decisions, they are told whether AI or a human was involved in the decision to remove their post and are given the chance to appeal.

Facebook groups and communities can also be affected by this sort of action. Groups are generally self-moderated by their own moderators and administrators, who have the power to remove and flag content on their own, but if Facebook’s AI and human reviewers continue to see policy violations in a group, it will eventually be banned.

Photo: Christophe Scholz/Flickr
