UPDATED 20:59 EDT / APRIL 25 2018

EMERGING TECH

The militarization of AI is coming. Can tech companies (and the rest of us) live with it?

Everybody knows that artificial intelligence is an exceptionally weaponizable technology. So it’s no mystery why militaries everywhere are racing to exploit AI to its maximum potential.

Autonomous vehicles, for example, will become the most formidable weapon systems humanity has ever developed. AI gives them the ability to see, hear, sense and adjust real-time strategies better and faster than most humans can. AI will orchestrate fleets of unmanned tanks, artillery, reconnaissance and supply vehicles. It will almost certainly produce staggering, lopsided casualty counts in future battles, especially when only one side fields AI-powered intelligent weapons systems equipped with phalanxes of 3-D cameras, millimeter-wave radar, biochemical detectors and other ambient sensors.

There’s no point in dancing around a huge and growing controversy in the AI industry: the appropriateness of tech vendors such as Google LLC, Microsoft Corp. and Amazon Web Services Inc. assisting the U.S. Department of Defense in developing AI technologies not just for offensive purposes, but also for defensive and back-office applications that can sustain this country’s war-making bureaucracy. All the “AI safety” guardrails on Earth can’t protect us from applications of this technology that are explicitly designed to project deadly force, even when that force is exercised in what one might regard as a “just war.”

As with recent privacy protection and “fake news” controversies, the issue of AI’s weaponization is revealing Silicon Valley’s very strong cultural bias toward libertarian and left-wing causes. Whatever your ideological slant, recent protests by Google employees over that firm’s AI research and development subcontract with the DoD call attention to yet another political crossroads that this firm, and its closest rivals, face in pursuing new avenues for making money from their AI expertise.

In early April, thousands of Google employees signed a letter objecting to the company’s involvement in a Pentagon pilot program that uses AI to flag drone-captured video images for more efficient human review. The technology could easily be applied to offensive purposes, such as targeting drone strikes in counterinsurgency and counterterrorism operations. Google responded that its work is intended for “nonoffensive” uses, such as improving the identification of innocent civilians to reduce the likelihood that they become casualties of war. Both Google and the DoD stated that the AI being developed would not be used for drones or other autonomous weapons systems that could be activated without human guidance.

But that’s cold comfort, considering that the technology could easily be repurposed by other projects, perhaps not involving Google, for just such applications. No “ground rules” for commercial AI vendors’ engagement with the military can realistically stop the underlying approaches from being used in offensive weapon systems. That possibility is underlined by the fact that the project’s underlying AI technology, namely TensorFlow, open-source object recognition software and unclassified image data, is available to anyone.
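To see just how commoditized this capability is, consider the following minimal sketch. It loads a publicly available, pretrained object detector through TensorFlow Hub and flags high-confidence detections in a single video frame for human review, which is the essence of the flagging workflow at issue. The model URL, file name and confidence threshold here are illustrative assumptions, not details of the Pentagon project; any off-the-shelf detector would serve the same purpose.

```python
# Illustrative sketch of commodity object detection, not any military system.
# The TF Hub model URL, input file and threshold below are assumptions.
import tensorflow as tf
import tensorflow_hub as hub

# Load a publicly available, pretrained SSD MobileNet object detector.
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

# Decode one video frame (hypothetical file) into a uint8 image tensor.
frame = tf.image.decode_jpeg(tf.io.read_file("frame.jpg"), channels=3)
batch = tf.expand_dims(frame, axis=0)  # shape [1, height, width, 3]

# Run inference; the model returns bounding boxes, class ids and scores.
results = detector(batch)
boxes = results["detection_boxes"][0].numpy()
scores = results["detection_scores"][0].numpy()

# Flag any detection above a confidence threshold for human review.
for box, score in zip(boxes, scores):
    if score > 0.5:
        print(f"flag for review: object at {box}, confidence {score:.2f}")
```

That a few dozen lines of freely available code can reproduce the core of the workflow is exactly why no contractual guardrail can keep the underlying approach out of weapon systems.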

Google has no easy avenues to follow, and it alone can’t stand in the way of AI’s spiraling weaponization. If it persists in and expands its AI-related work with the DoD, it risks alienating many in its deep pool of AI developers, who have plenty of other career opportunities in the U.S. and elsewhere. Alternatively, it could justify the work by arguing that withdrawing from this and future DoD opportunities would effectively hand the business to competitors such as Microsoft and AWS, both of which are avidly growing their Pentagon business.

Even in the unlikely scenario that all AI solution providers walk away from projects with the United States’ and other nations’ military establishments, universities and nonprofit research centers would have an opening to pick up the work. Considering how much money the military is likely to funnel into such contracts, which would focus on developing highly sophisticated AI tools, that scenario could easily reverse the brain drain that has seen the best and brightest AI researchers leave academia to seek their fortunes in the private sector. Some of these contracts could conceivably go to research hubs in U.S.-allied nations, such as NATO members that are trying to keep their smartest, albeit underappreciated, AI professionals from moving to Northern California.

And even in the still more unlikely scenario that the DoD develops Oppenheimer-grade pangs of conscience about building a new generation of AI-fueled superweapons, it can’t turn back. Geopolitical forces would compel the U.S. to carry forward with such R&D, considering that China and other nations, and even U.S. allies such as the U.K. and France, have made developing their national AI competencies a high priority. This is a hard fact that Google’s Eric Schmidt, who sits on a DoD advisory board, openly acknowledges, referring to this as a “Sputnik moment” for the U.S. and its allies.

In some ways, the Google employee protests are reminiscent of the late-1960s student demonstrations against Dow Chemical’s on-campus recruiting. The bone of contention was that company’s role in manufacturing napalm for the DoD, the incendiary gel responsible for some of the most horrifying casualties of the Vietnam War.

Those protests didn’t stop development of that or any other weapon. But they put into stark relief the moral dilemmas that skilled professionals may confront when applying specialized engineering expertise to military projects.

Image: Terminator Genisys Facebook page
