UPDATED 14:10 EDT / OCTOBER 04 2017

EMERGING TECH

Following health data controversy, Google’s DeepMind forms AI ethics unit

Following criticism over the improper use of health data, DeepMind Technologies Ltd., Google LLC’s artificial intelligence research division, is expanding the scope of its work beyond the technical realm.

The U.K.-based group this morning announced the launch of a new team tasked with exploring the ethical challenges that accompany the spread of AI. The unit is headed by Verity Harding, who previously led public policy for Google’s European business, and technology consultant Sean Legassick. DeepMind is looking to triple the unit’s current headcount of eight over the next year.

The team will be advised by an outside group of “DeepMind Fellows” that is set to include economists, philosophers and other experts whose work bears on the AI debate. There are also plans to collaborate with universities pursuing similar lines of research.

On top of creating ethical guidelines for DeepMind’s work, the unit will try to predict the ways AI could reshape society. The effort is set to emphasize big questions such as how to ensure that AI systems will uphold user rights and what economic impact they’ll have. DeepMind expects the team to publish its first research papers sometime next year.

That said, the move to establish the unit is more than academic. Earlier this year, DeepMind came under fire for a project with the U.K.’s National Health Service that violated regulations on processing patient records. An in-house ethics team could help steer the division away from misusing its AI technologies in the future.

In the long run, research produced by the new unit could benefit other Google groups as well. The search giant’s Waymo autonomous driving subsidiary is a prime candidate. The prospect of self-driving cars hitting the road en masse has raised thorny questions, such as how a vehicle should handle the difficult choices that must be made when an accident is unavoidable.

Many ethical blind spots remain even as Google and other tech companies race ahead with their AI ambitions. Just a few months ago, DeepMind shared the results of a project that sought to train neural networks to think more like humans by having them learn the basics of walking under simulated conditions.

Image: DeepMind
