UPDATED 22:43 EDT / MARCH 21 2019


Following New Zealand terror attack, Facebook admits it has a problem

Facebook Inc. said it’s reviewing its livestreaming policy after the terrorist attack on a mosque in Christchurch, New Zealand, which left 50 people dead, was streamed live on the platform.

The social media giant said the attacker’s livestream was not taken down fast enough, adding that its artificial intelligence systems failed to flag the video at first. The company said its systems are designed to flag videos containing suicidal or harmful acts, but acknowledged that more needs to be done.

“AI has made massive progress over the years and in many areas, which has enabled us to proactively detect the vast majority of the content we remove,” Guy Rosen, Facebook’s vice president of product management, said Wednesday in a blog post. He added, unnecessarily: “But it’s not perfect.”

The video was seen live by fewer than 200 people and was taken down after New Zealand police contacted Facebook, according to the company. No one reported it while it was live.

Facebook said the video was seen about 4,000 times in total, with the first user report coming in 29 minutes after the video started, 12 minutes after the livestream ended. Over the next 24 hours, the company said, it blocked 1.2 million copies of the video at upload, meaning they were never seen, and removed a further 300,000 copies after they had been uploaded.

Responding to people asking why such content can’t be taken down immediately, the company said it’s working to improve its detection systems.

“AI systems are based on ‘training data,’ which means you need many thousands of examples of content in order to train a system that can detect certain types of text, imagery or video,” said Facebook. “This approach has worked very well for areas such as nudity, terrorist propaganda and also graphic violence where there is a large number of examples we can use to train our systems.”
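
Facebook didn’t publish technical details, but what the quote describes is standard supervised classification: a model can only detect categories for which it has labeled training examples. Here is a minimal sketch of that idea in Python, using scikit-learn and made-up toy labels, both of which are illustrative assumptions rather than Facebook’s actual pipeline:

```python
# Toy supervised classifier: the model only learns categories for which it
# has labeled examples. Data, labels and the scikit-learn stack are
# illustrative assumptions, not Facebook's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Facebook's quote implies "many thousands of examples" per category;
# a handful is used here only to keep the sketch runnable.
texts = [
    "graphic depiction of real violence",   # 1 = violating
    "extremely violent attack footage",     # 1
    "cute cat playing with yarn",           # 0 = benign
    "recipe for chocolate cake",            # 0
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The model scores new content by similarity to what it was trained on,
# so genuinely novel material falls outside its experience.
print(model.predict_proba(["violent video footage"]))
```

The sketch illustrates the dependency Facebook is describing: a first-of-its-kind event supplies almost no prior examples for a system like this to have learned from.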

The company said another problem is training the algorithm to distinguish innocuous content, such as footage from violent video games, from the real thing. It also said that in the case of the Christchurch shooting, a “community of bad actors” worked to re-edit the video so the algorithm would not detect it.
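
Facebook hasn’t detailed its matching systems, but the evasion it describes maps onto a well-known gap in content matching: an exact cryptographic hash breaks under any edit at all, while a perceptual hash tolerates mild edits yet can still be defeated by heavier ones. Below is a minimal sketch, using a toy average-hash as a stand-in for whatever matching Facebook actually uses:

```python
# Why re-edited copies evade matching: an exact hash changes completely under
# any edit, while a perceptual hash tolerates mild edits but can be defeated
# by heavier ones (cropping, re-encoding, overlays). The average-hash below
# is a toy stand-in, not Facebook's actual matching technology.
import hashlib
import numpy as np

def average_hash(frame: np.ndarray) -> int:
    """One bit per pixel: set if the pixel is brighter than the frame mean."""
    bits = (frame > frame.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(8, 8)).astype(np.uint8)  # stand-in video frame
edited = np.clip(original.astype(int) + 3, 0, 255).astype(np.uint8)  # slight brightness shift

# Exact (cryptographic) matching: any edit breaks the match.
same = hashlib.sha256(original.tobytes()).digest() == hashlib.sha256(edited.tobytes()).digest()
print(same)  # False

# Perceptual matching: a mild edit keeps the hashes close in Hamming distance,
# but determined re-editing can push the distance past any fixed threshold.
distance = bin(average_hash(original) ^ average_hash(edited)).count("1")
print(distance)  # small, so this copy would still match
```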

Image: Descrier/Flickr
