Google says it won’t help weaponize AI, but it won’t quit working with the military
Google Inc. has tried to make clear just how far it will go in helping the military with technology, following months of controversy over its involvement with the Pentagon in creating artificial intelligence surveillance tools to sift through drone video footage.
The company lost employees over the Project Maven work, which became a bone of contention given that Google’s former motto was “Don’t be evil.” Some pointed out that working with the military went against this principle, and so Google promised to create ethical guidelines to make clear where it stood.
Google has now made good on that promise, publishing its AI principles on Thursday, which, not surprisingly, cast the company in a good light. The technology, Google says, will be used for the common good: to predict natural disasters, to diagnose disease, to prevent blindness.
“As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides,” said Google Chief Executive Sundar Pichai.
The company didn’t expressly say whether it would quit developing AI to sift through hours of drone footage for the military. The crux of the issue has been that while such technology is nominally “nonoffensive,” drones drop bombs on people, so the work can hardly be called unrelated to the spilling of blood.
In the guidelines, Google did say that it would not create AI that could be used to harm people. “Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints,” said the company, adding that this included “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”
As far as Project Maven is concerned, Google also said it won’t develop technology that is used to “gather or use information for surveillance violating internationally accepted norms.” Does this mean the deal is off? It’s not entirely clear.
“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” said Google. “These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.”
Speaking to The Verge, a Google spokesperson was vague, saying only that the company wouldn’t work on AI surveillance projects if they violated “internationally accepted norms.” Google will reportedly honor its contract with the Pentagon until 2019. The contract, reportedly worth $10 billion, is also said to have been sought by Microsoft Corp., IBM Corp. and Amazon.com Inc.
Image: Rennett Stowe via Flickr