UPDATED 20:36 EST / SEPTEMBER 03 2018

EMERGING TECH

Google launches new AI-based tool to help combat child sexual abuse material

Google LLC today released a new artificial intelligence tool that aims to assist organizations in identifying and removing online child sexual abuse material.

The Content Safety API is a toolkit that uses deep neural networks for image processing to identify the material quickly while minimizing the need for human inspection — a cumbersome process that often requires researchers to sift through thousands of images manually.
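Google has not published the API's internals, but the general idea — a classifier that scores images so the likeliest matches reach a human reviewer first — can be sketched as follows. Everything here (the `triage` function, the threshold, the stub scores) is a hypothetical illustration, not Google's actual interface.

```python
# Hypothetical sketch of classifier-assisted triage. The real Content Safety
# API is not public in this detail; names and the 0.8 threshold are assumptions.

def triage(images, classifier, threshold=0.8):
    """Rank unreviewed images by model score so the riskiest are seen first.

    `classifier` maps an image to a probability that it is abusive material.
    Items below `threshold` are deprioritized rather than dropped, since a
    human reviewer still makes the final determination.
    """
    scored = sorted(((classifier(img), img) for img in images), reverse=True)
    priority = [img for score, img in scored if score >= threshold]
    backlog = [img for score, img in scored if score < threshold]
    return priority, backlog

# Stub classifier standing in for the deep neural network.
fake_scores = {"a.jpg": 0.95, "b.jpg": 0.10, "c.jpg": 0.85}
priority, backlog = triage(fake_scores, fake_scores.get)
```

The point of the ordering is the one the blog post makes: reviewers spend their limited time on the images most likely to require action, so fewer people are exposed to the material overall.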

“Quick identification of new images means that children who are being sexually abused today are much more likely to be identified and protected from further abuse,” Google engineering lead Nikola Todorovic and product manager Abhi Chaudhuri said in a blog post. “We’re making this available for free to NGOs and industry partners via our Content Safety API, a toolkit to increase the capacity to review content in a way that requires fewer people to be exposed to it.”

In testing, the tool is said to greatly accelerate the review of potential CSAM: the time it takes reviewers to find and take action on material improved by up to 700 percent.

VentureBeat reported that the announcement comes shortly after Google was criticized by U.K. Foreign Secretary Jeremy Hunt for not doing enough to remove such material, with Hunt drawing a pointed parallel to Google's controversial decision to return to China with a censored search engine.

“Seems extraordinary that Google is considering censoring its content to get into China but won’t cooperate with U.K., U.S. and other 5 eyes countries in removing child abuse content,” Hunt wrote on Twitter. “They used to be so proud of being values-driven.”

In a statement, at least one group working in the area welcomed the announcement. “We, and in particular our expert analysts, are excited about the development of an artificial intelligence tool which could help our human experts review material to an even greater scale and keep up with offenders, by targeting imagery that hasn’t previously been marked as illegal material,” said Susie Hargreaves of the Internet Watch Foundation, a U.K.-based organization that fights against abuse material. “By sharing this new technology, the identification of images could be speeded up, which in turn could make the internet a safer place for both survivors and users.”

NGOs and similar organizations can get access to the tool via this form.

Image: Google
