UPDATED 14:51 EST / DECEMBER 18 2017

EMERGING TECH

AI will review child abuse images so police won’t have to

Police must often review hundreds of disturbing images found on devices belonging to child abuse suspects, and these investigations can take a toll on officers’ mental health. London’s Metropolitan Police Service wants to spare officers from this trauma by developing an artificial intelligence system that will review child abuse images for them.

According to Mark Stokes, the head of digital and electronics forensics for the MPS, London’s police had to investigate more than 53,000 devices last year for evidence of child abuse, and any images found on these devices had to be graded for sentencing purposes.

Stokes explained that the department’s existing image recognition system is not sophisticated enough to handle this job on its own, which is why human investigators must still view the images. “You can imagine that doing that for year-on-year is very disturbing,” Stokes told The Telegraph today.

The MPS wants to replace its outdated tech with an AI that can take over most of the review process, but since its data infrastructure is already struggling to keep up, the MPS has decided to migrate its data to cloud platforms such as Amazon Web Services. Stokes said that this move will allow the MPS to apply modern machine learning methods to its data, and he believes that an AI can be trained to analyze child abuse images within two to three years.
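
To make the idea concrete, here is a minimal sketch of what "training an AI to grade images" could look like using transfer learning, one common approach to image classification. Everything in it is an assumption for illustration only: the three-grade labeling scheme, the dataset path, the PyTorch framework and the pretrained ResNet backbone are not drawn from the article and do not describe the MPS's actual system.

```python
# Purely illustrative sketch: fine-tune a pretrained CNN as a three-class
# image grader. The labels, dataset layout and model choice are assumptions;
# nothing here reflects the MPS's real pipeline.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 3  # e.g., three severity grades (assumed labeling scheme)

# Standard ImageNet-style preprocessing for a pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes a folder-per-grade dataset at ./graded_images (hypothetical path).
dataset = datasets.ImageFolder("graded_images", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Start from a pretrained ResNet and replace only the classification head.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# One pass over the data, training just the new head.
model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

In practice the hard part is not the training loop but assembling a large, accurately graded dataset, which is exactly the data the MPS hopes its cloud migration will make usable.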

While the goal of the MPS is admirable, the department’s AI project may have some legal hurdles to overcome, particularly when it comes to storing illegal images on commercial cloud services. Stokes said that the MPS has been working with cloud providers to meet legal requirements, adding that “we think we have it covered.”

The MPS will likely also need to prove that its AI can grade images at least as accurately as human reviewers, but this might not be as easy as the organization hopes. Google LLC developed a similar AI to review YouTube videos, hoping to appease brands whose ads showed up alongside inappropriate videos. Unfortunately, Google’s trigger-happy AI has incorrectly flagged thousands of videos, stripping many content creators of a large share of their income.

If Google, one of the world’s leaders in machine learning technology, has been unable to perfect its content-reviewing AI, the MPS may find its own AI project to be a tough sell.

Photo: Brandon Grasley Facepalm via photopin (license)
