UPDATED 22:43 EDT / MARCH 21 2019

APPS

Following New Zealand terror attack, Facebook admits it has a problem

Facebook Inc. said it’s reviewing its livestreaming policy after the terrorist attack at a mosque in Christchurch, New Zealand, that left 50 people dead was streamed on Facebook itself.

The social media giant acknowledged that the attacker’s livestream was not taken down quickly enough and that its artificial intelligence systems initially failed to flag the video. The company said its systems are designed to flag videos containing suicidal or harmful acts, but that more needs to be done.

“AI has made massive progress over the years and in many areas, which has enabled us to proactively detect the vast majority of the content we remove,” Guy Rosen, Facebook’s vice president of product management, said Wednesday in a blog post. He added, unnecessarily: “But it’s not perfect.”

The video was watched live for about two minutes by fewer than 200 people, and it was taken down after New Zealand police contacted Facebook, according to the company. No one reported it while it was being streamed.

In total, Facebook said, the video was viewed about 4,000 times, and the first user report came in 29 minutes after the video was posted, or 12 minutes after the livestream ended. In the following 24 hours, the company said, it blocked 1.2 million copies of the video at upload, meaning they were never viewed, and removed a further 300,000 copies after they had been uploaded.

Responding to people asking why such content can’t be taken down immediately, the company said it’s working to improve its detection systems.

“AI systems are based on ‘training data,’ which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video,” said Facebook. “This approach has worked very well for areas such as nudity, terrorist propaganda and also graphic violence where there is a large number of examples we can use to train our systems.”
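Facebook hasn’t published details of its models, but the mechanism it describes is standard supervised learning: a classifier’s accuracy improves with the volume of labeled examples. Here’s a minimal, purely illustrative Python sketch using scikit-learn on synthetic data (the features, labels and sizes are all hypothetical, not Facebook’s actual pipeline):

```python
# Toy demonstration of why detection systems need "many thousands of
# examples": the same model, trained on more labeled data, scores better.
# All data here is synthetic; none of this reflects Facebook's real models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "video embeddings": 32-dimensional vectors, with label 1
# standing in for violating content and label 0 for benign content.
n = 20_000
X = rng.normal(size=(n, 32))
weights = rng.normal(size=32)
y = (X @ weights + rng.normal(scale=4.0, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# Accuracy climbs as the labeled training set grows.
for n_labeled in (100, 1_000, 10_000):
    clf = LogisticRegression(max_iter=1_000)
    clf.fit(X_train[:n_labeled], y_train[:n_labeled])
    print(f"{n_labeled:>6} examples -> test accuracy {clf.score(X_test, y_test):.3f}")
```

The scarcity problem Facebook describes is the converse: for events as rare as a live-streamed attack, there simply aren’t enough real examples to train a reliable detector.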

The company said another problem is training the algorithm to tell the difference between innocuous content, such as clips from violent video games, and the real thing. It also said that in the case of the Christchurch shooting, a “community of bad actors” worked on editing and re-uploading the video so it would evade algorithmic detection.
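Facebook hasn’t said exactly how its matching works, but the cat-and-mouse dynamic it describes is easy to see with fingerprinting techniques such as perceptual hashing. In the toy sketch below (the “average hash” is a textbook perceptual hash, not Facebook’s technology), an exact cryptographic hash changes completely after any edit, a perceptual hash survives a light edit such as brightening, but a heavier edit still pushes a copy out of matching range:

```python
# Toy illustration of hash-based matching and how edits can defeat it.
# The "average hash" here is a textbook perceptual hash, not Facebook's tech.
import hashlib
import numpy as np

def average_hash(frame: np.ndarray, size: int = 8) -> np.ndarray:
    """Mean-pool a grayscale frame into size x size blocks, then threshold
    each block at the global mean: a compact 64-bit fingerprint."""
    h, w = frame.shape
    blocks = frame.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return blocks > blocks.mean()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of fingerprint bits that differ between two frames."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(1)
original = rng.integers(0, 256, size=(64, 64)).astype(float)
brightened = np.clip(original + 10.0, 0, 255)  # a light, evasive edit
mirrored = original[:, ::-1].copy()            # a heavier edit

# Exact hashing: any byte-level change yields an unrelated digest.
print(hashlib.sha256(original.tobytes()).hexdigest()[:12])
print(hashlib.sha256(brightened.tobytes()).hexdigest()[:12])

# Perceptual hashing: the brightened copy still matches closely...
print(hamming(average_hash(original), average_hash(brightened)))  # small
# ...but the mirrored copy drifts far from the original fingerprint.
print(hamming(average_hash(original), average_hash(mirrored)))    # large
```

Every transformation a matcher is made invariant to, whether brightness, mirroring, cropping or re-encoding, closes one evasion route and opens the search for another, which is the arms race the company describes.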

Image: Descrier/Flickr
