Following health data controversy, Google’s DeepMind forms AI ethics unit
Criticized over its improper use of health data, DeepMind Technologies Ltd., Google LLC’s artificial intelligence research division, is expanding the scope of its work beyond the technical realm.
The U.K.-based group this morning announced the launch of a new team tasked with exploring the ethical challenges that accompany the spread of AI. The unit is headed by Verity Harding, who previously led public policy for Google’s European business, and technology consultant Sean Legassick. DeepMind is looking to triple the practice’s current headcount of eight over the next year.
The team will be advised by an outside group of “DeepMind Fellows” that is set to include economists, philosophers and other experts whose area of focus touches upon the AI discussion in one way or another. There are also plans to collaborate with universities pursuing a similar line of research.
On top of creating ethical guidelines for DeepMind’s work, the unit will try to predict the ways AI could reshape society in the future. The effort is set to emphasize big questions such as how to ensure that AI systems uphold user rights and what economic impact they’ll have. DeepMind expects the team to publish its first research papers sometime next year.
With that said, the move to establish the unit is much more than just academic in nature. Earlier this year, DeepMind came under fire for a project with the U.K.’s National Health Service that violated regulations on processing patient records. An in-house ethics team could help steer the division away from misusing its AI technologies in the future.
In the long run, research produced by the new unit could benefit other Google groups as well. The search giant’s Waymo autonomous driving subsidiary is a prime candidate. The prospect of self-driving cars hitting the road en masse has raised thorny questions, such as how a vehicle should handle the difficult choices that must be made when an accident is unavoidable.
Many ethical blind spots remain even as Google and other tech companies race ahead with their AI ambitions. Just a few months ago, DeepMind shared the results of a project that sought to train neural networks to think more like a human by having them learn the basics of walking in simulated environments.
Image: DeepMind