UPDATED 17:59 EDT / JUNE 15 2018

THOUGHT LEADERSHIP

Principles versus profit: AI and the fate of the planet

It seems as if everybody is starting to look at artificial intelligence as some sort of make-or-break technology for the human race.

Where the fate of the planet is concerned, there is an increasing collision between the nationalistic view that AI’s overriding purpose is to help countries hold their own in geopolitical struggles and the humanitarian view that AI should deliver the benefits of material prosperity to all people. According to this latter perspective, AI should be contributing to the universal struggle for equality, free expression, personal autonomy and democratic governance.

The nationalistic perspective keeps popping up in headlines. For example, there are the sentiments expressed in this recent article by Horacio Rozanski, chief executive of Booz Allen Hamilton Inc. He discusses what he regards as a “close race” between the United States and China in developing and exploiting AI.

I’ve been exploring AI benchmarking initiatives recently, and I take issue with the assumption that we can validly benchmark one nation against another in this regard. This might make sense if we choose to focus on specific uses where one country has achieved some tangible advantage, such as developing machine learning models that are more accurate, efficient, flexible or cost-effective than other countries’. The fact that this CEO — who helms a major federal contractor — alludes to AI’s weaponization use cases makes it clear where he’s coming from.
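To see why benchmarking is tractable at the model level but not the national level, consider what a concrete comparison actually looks like. Below is a minimal sketch in Python of benchmarking two models on the same task for accuracy, training time and inference latency; the synthetic dataset, model choices and metrics are illustrative assumptions on my part, not any benchmark the article or Rozanski proposes.

```python
# Minimal sketch: benchmarking two models on one shared task.
# The dataset, models and metrics are illustrative assumptions only.
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A synthetic classification task, fixed so every model sees the same data.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
    start = time.perf_counter()
    model.fit(X_train, y_train)
    train_s = time.perf_counter() - start

    start = time.perf_counter()
    accuracy = model.score(X_test, y_test)  # accuracy on held-out data
    infer_s = time.perf_counter() - start

    print(f"{type(model).__name__}: accuracy={accuracy:.3f}, "
          f"train={train_s:.2f}s, inference={infer_s:.3f}s")
```

Numbers like these are meaningful only because everything else is held constant: the task, the data, the hardware. Nations share no such common task, which is precisely what makes country-versus-country scorekeeping so slippery.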

In fact, Rozanski’s entire discussion seems to be predicated on what I’d regard as a “Sputnik redux” outlook. If you’re even vaguely familiar with the hysteria that ensued in the U.S. after the Soviet Union launched the first satellite, you’ll hear echoes in his concerns about our side not funding AI sufficiently at the federal level, not coordinating public- and private-sector AI initiatives, not educating enough AI practitioners, not being vigilant against infiltration of our AI infrastructure by foreign nations, and not having a strategy to match the Chinese leader’s call for his nation to achieve a “10-fold increase in AI output … by 2030.”

That raises the question of how exactly you would scope “AI output,” as opposed to AI capabilities that are infused in every type of product and service in the new world economy. We’re not talking about anything as concrete as aircraft carriers here. Even those will start to incorporate AI as an integral capability, not as a discrete component or subsystem that you can easily swap out.

Representing a humanitarian perspective on AI’s “responsible uses,” there are the viewpoints expressed in this recent SiliconANGLE article by Paul Gillin. One thing that struck me is how providers of AI, analytics and other enterprise information technology solutions are starting to take a more active interest in monitoring and even prescribing the societal purposes for which customers should use their offerings.

In his survey of solution providers, Gillin found that “social responsibility has become an increasingly common topic of discussion internally, and that pressure from politically minded employees is spurring them to monitor the use of their products more carefully.”

A big focus of such concern is privacy protection, especially in response to the now-in-force European Union General Data Protection Regulation. It’s interesting to see how activist Silicon Valley has become in its support of GDPR and similar mandates, judging by the statement from Salesforce.com Inc. CEO Marc Benioff in the Gillin article. That’s a sharp contrast to the vigorous opposition most tech vendors mounted against the law’s “right to be forgotten” mandate when the EU first proposed it. Looker Data Sciences Inc. CEO Frank Bien’s statement that “GDPR is a great opportunity” captures the brave face that many tech vendors are now putting on this challenge.

But the newfound social consciousness around all things AI goes far beyond concerns for privacy protection. More companies are taking an active interest in preventing their AI solutions from being used for racial profiling, political gerrymandering, identifying targets for terrorist attacks, discriminating against protected classes, and manipulating election results.

Google Inc. brought this activist agenda to the forefront recently. In response to a revolt by some employees who refused to work on weapons-related AI programs with the U.S. Department of Defense, Google further clarified the scenarios where it will and won’t collaborate with the Pentagon.

More broadly, Google also issued a statement of principles outlining the AI uses that it will not promote, develop, commercialize or take on with clients, partners or others in its ecosystem. The manifesto starts off by describing how Google-developed AI is being used to “predict the risk of wildfires,” “monitor the health of [farmers’] herds,” “diagnose cancer” and “prevent blindness.” Then it lists high-level principles that hit on the core concerns — such as debiasing and privacy protection — that generally fall under the umbrella of “AI safety.”

And finally, it specifically pledges not to develop or apply AI technologies whose purpose is to hurt people, conduct illegal surveillance or violate human rights. Coming from a company that long boasted the motto “Don’t be evil,” none of this is particularly surprising. But it’s good that Google has actually developed that dictum into a set of tenets that might guide actual decision-making by its executives, managers and developers.

My sense is that the AI industry is starting to fight back against the general perception that its technology is a potential menace to society. One approach that will almost certainly be emphasized going forward is that AI can be both a sentinel for detecting when privacy, bias and other social evils are present and also a tool for defusing or mitigating these issues.

An MIT researcher encapsulated this perspective with this recent statement: “Essentially, humans are biased already, and some of these biases are good, because we need to discriminate, for example, between something that is good quality and lower quality. That’s a useful bias. But there are also bad types of bias — a bias that is discriminatory or breaks some laws or norms in some way — for instance, discriminating in hiring decisions against a minority group. I think AI is now making these biases a bit more salient and a bit more identifiable, because now we have a better understanding of how the data causes the bias. And I think that’s already creating pressure on companies to be a bit more thoughtful — and if they’re not, then that’s a real public relations, reputational risk for companies.”
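That sentinel role doesn’t require exotic technology. Here’s a minimal sketch, in Python, of the kind of check that makes bias “more salient and more identifiable”: comparing selection rates across groups in hiring decisions. The data, the choice of a demographic-parity metric and the alert threshold are all hypothetical assumptions for illustration.

```python
# Minimal sketch: surfacing potential bias in hiring decisions.
# The data, metric choice and threshold are hypothetical assumptions.
import pandas as pd

# Hypothetical hiring outcomes: 1 = hired, 0 = rejected.
df = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Selection rate per group, and the demographic-parity gap between them.
rates = df.groupby("group")["hired"].mean()
gap = rates.max() - rates.min()

print(rates)
print(f"Demographic-parity gap: {gap:.2f}")

# An arbitrary, illustrative alert threshold: flag for human review.
if gap > 0.2:
    print("Selection rates diverge notably across groups; review for bias.")
```

In practice a gap like this is a flag for human review rather than proof of discrimination, but that’s the point: the data makes the disparity visible and measurable.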

Entrepreneurs are starting to find vehicles for bringing positive social impacts directly into the AI ecosystem. For example, there’s a new nonprofit open AI marketplace aimed at “advancing the responsible design, development and use of AI technologies.” It will provide an online collaboration hub that brings together AI libraries, models and built-out applications, along with access to AI professional services, under a membership model meant to encourage responsible societal impacts.

We’ll see whether this kind of initiative succeeds in gaining a foothold in the AI ecosystem, or runs aground in the global race to monetize and militarize this technology to the nth degree.

However, the cynic (or realist) in me thinks the latter tendency will gain the upper hand.

Image: geralt/Pixabay
