UPDATED 11:00 EDT / OCTOBER 26 2018

AI

Could AI go rogue? Debating the obstacles for enterprise machine intelligence

Fei-Fei Li is a world-renowned expert in the field of artificial intelligence, having risen to become head of Stanford University’s AI Lab and the chief scientist for AI at Google Cloud. But when Google LLC began an internal debate last year over how to publicly discuss its AI contract with the U.S. Department of Defense, Li’s decision to write a confidential memo on the issue last September may have been the second-worst moment of her corporate career. The worst came when the memo became public.

“Avoid at ALL COSTS any mention or implication of AI,” Li wrote to her colleagues. “This is red meat to the media to find all ways to damage Google.”

Li’s concerns about Google’s AI involvement in Project Maven, a Pentagon-directed drone imaging project, were ultimately revealed in a series of leaked emails published in May by The New York Times. After 4,000 Google employees signed a petition demanding that the company refrain from building warfare technology, the firm announced in June that it would not renew the contract when it expires next year.

Google’s experience offers a peek behind the curtain at what has rapidly become a central question for companies operating in the AI space: How does the enterprise manage the risks of AI development and deployment?

“Employees are already chafing against the idea of utilizing their intellectual property, their smarts, their research, and having it applied to military applications,” said Peter Burris, head of research at SiliconANGLE Media’s Wikibon market research unit and host of theCUBE, SiliconANGLE’s mobile livestreaming studio, during an “AI Risks” discussion with Wikibon analysts. “This is just one of the considerations that all businesses are going to have to face.”

The societal implications of AI becoming ingrained in a host of real-world applications force many businesses to confront a key question: Just because we can do it, should we?

Privacy and facial recognition

One hotly debated area is the use of facial recognition technology and its impact on personal privacy. A video camera’s ability to pick out faces in a crowd and run machine learning algorithms against a database of known images to match names with faces has grown vastly more sophisticated.
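The pipeline described here, detecting faces in a frame, converting each into a numeric encoding and comparing those encodings against a database of known people, can be sketched in a few lines. The sketch below uses the open-source face_recognition Python library; the image files and the enrolled “database” are illustrative assumptions, not any vendor’s product.

```python
# Minimal sketch of the detect-encode-match pipeline described above,
# using the open-source face_recognition library (dlib under the hood).
# File names and the enrolled "database" are illustrative assumptions.
import face_recognition

# Enroll: compute a 128-number embedding for each known person.
database = {}
for name, path in [("alice", "alice.jpg"), ("bob", "bob.jpg")]:
    image = face_recognition.load_image_file(path)
    database[name] = face_recognition.face_encodings(image)[0]

# Match: find every face in a crowd shot and compare it to the database.
frame = face_recognition.load_image_file("crowd.jpg")
locations = face_recognition.face_locations(frame)          # detect faces
encodings = face_recognition.face_encodings(frame, locations)

for loc, enc in zip(locations, encodings):
    names = list(database)
    hits = face_recognition.compare_faces([database[n] for n in names], enc)
    matched = [n for n, hit in zip(names, hits) if hit]
    print(loc, "->", matched or "unknown")
```

That a working matcher fits in roughly 20 lines of open-source code is part of why the privacy debate has sharpened: the capability is no longer confined to well-resourced institutions.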

These advances are due in large part to the rise of AI systems-on-a-chip. Apple Inc.’s A11 Bionic SoC drives the face recognition feature for its more recent iPhone models. Last year, Intel Corp. released its own system on a chip, the Movidius Myriad X, for AI-directed vision processing in smart cameras and other devices operating at the edge.

For the public at large, these rapid advances in facial recognition occasionally play out in the daily news. Authorities in Great Britain identified two Russian men allegedly involved in the poisoning of a former Russian intelligence officer through sophisticated analysis of thousands of hours of video footage captured around the community where the poisoning occurred.

“The most controversial area in which AI is coming into our lives is facial recognition,” said Wikibon analyst James Kobielus. “Facial recognition is Big Brother’s best friend, potentially tracking every one of us on an ongoing basis wherever we happen to be in the public and increasingly in the private realm.”

Despite concerns around privacy and Google’s recent turmoil, AI still represents an attractive opportunity for global economic development. A PricewaterhouseCoopers LLP report estimates that AI technologies could add $15.7 trillion to global gross domestic product over the next decade. Numbers like these put significant pressure on enterprises to channel AI in ways that contribute more good than harm.

“We’ve had decades of talking about how automation is going to help,” said Wikibon analyst Stu Miniman. “You can usually segment the customer personal information data from operational data, so there’s real opportunity. And the technology providers are going to want to leverage this.”
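Miniman’s point about segmentation can be made concrete with a minimal sketch: strip personally identifiable fields from each record, key them under an opaque pseudonym, and keep only that pseudonym in the operational data. The field names and in-memory stores below are illustrative assumptions, not a reference design.

```python
# Minimal sketch of segmenting customer PII from operational data:
# identifiable fields go into a locked-down store keyed by an opaque
# pseudonym; operational records carry only that pseudonym.
# Field names and the in-memory "stores" are illustrative assumptions.
import hashlib
import secrets

PII_FIELDS = {"name", "email"}          # assumed PII schema
pii_store, ops_store = {}, []
salt = secrets.token_hex(16)            # keeps pseudonyms non-reversible

def pseudonym(email: str) -> str:
    """Stable opaque key derived from a salted hash of a unique field."""
    return hashlib.sha256((salt + email).encode()).hexdigest()[:12]

def ingest(record: dict) -> None:
    key = pseudonym(record["email"])
    pii_store[key] = {f: record[f] for f in PII_FIELDS}   # access-controlled
    ops_store.append({"id": key,
                      **{f: v for f, v in record.items()
                         if f not in PII_FIELDS}})

ingest({"name": "Ada", "email": "ada@example.com",
        "plan": "pro", "usage_gb": 42})
print(ops_store)   # safe to hand to analytics: no direct identifiers
```

The design choice is the one Miniman alludes to: analytics and AI models can run freely over the operational store, while re-identifying a customer requires separately controlled access to the PII store.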

Safeguarding enterprise data

There are signs that some AI technology providers are listening. Earlier this month, IBM Corp. launched IBM AI OpenScale, a platform to help enterprises manage AI applications. The new platform, designed to head off the potential growth of “AI sprawl” in the enterprise, will be available later this year on IBM Cloud.

One of the announcements coming out of Microsoft Ignite in September was that Adobe Systems Inc., SAP SE and Microsoft Corp. would partner on an Open Data Initiative: essentially a pooled, secure data “lake” on Microsoft’s Azure cloud, maintained by the three companies as a way to safeguard and separate customer information.

“There’s no question that AI is going to be deeply utilized within infrastructure companies, cloud companies, as a predicate of how they employ their services, how they make money off their services, how they guarantee delivery of the service,” Burris said. “The tech industry has an enormous incentive to ensure that the domain of AI behaviors and the appropriate risks of using it are addressed.”

Controlling AI’s impact

One key area of focus is algorithmic control. Humans built AI to run on machines in the first place, yet it is not beyond the realm of possibility that an AI system could develop its own defense mechanisms and learn to circumvent human commands.

Researchers at the École Polytechnique Fédérale de Lausanne have been studying this possibility and evaluating ways to avoid it. Their solution rests partly on the concept of “safe interruptibility”: allowing humans to interrupt AI processes while making sure the interruptions don’t affect the technology’s ability to learn.
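A minimal sketch of the idea, assuming a toy off-policy Q-learning setup rather than the EPFL researchers’ actual code: a human override replaces the action the agent executes, but the learning target is computed the same way either way, so the interruptions don’t bias what the agent learns.

```python
# Minimal sketch of "safe interruptibility" with off-policy Q-learning.
# The environment and parameters are illustrative assumptions. The key
# idea: a human can override the agent's action, and because the update
# target (r + gamma * max Q(s', .)) ignores how the action was chosen,
# interruptions do not distort the learned values.
import random

N_STATES = 5          # tiny 1-D corridor: states 0..4, goal at 4
ACTIONS = [-1, +1]    # step left or right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Illustrative dynamics: move, clamp to the corridor, reward at goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def choose_action(state):
    """Epsilon-greedy behavior policy."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def human_override(state, action):
    """The interrupter: sometimes forces the agent back toward state 0."""
    if state == 3 and random.random() < 0.5:
        return -1   # interruption: push the agent left
    return action

for episode in range(2000):
    s = 0
    while s != N_STATES - 1:
        a = human_override(s, choose_action(s))   # interruption may occur
        s_next, r = step(s, a)
        # Off-policy target: max over next actions, independent of whether
        # 'a' came from the agent or from the human override.
        target = r + GAMMA * max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s_next

print("Learned greedy action per state:",
      {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
```

Despite being shoved left half the time at state 3, the agent still learns that moving right is optimal everywhere, which is the property safe interruptibility is after: the human can pull the brake without corrupting the lesson.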

It’s a tricky measure of control, one that will likely be the subject of further debate and study in the years ahead. “Increasingly, through the power of automated machine learning, AI can build its own machine learning models without human guidance, train them automatically and take action,” Kobielus said. “So the potential for what I call the rogue agency of AI in the internet of things is very real.”
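The automated machine learning Kobielus describes can be illustrated with a short sketch in which the program, not a human, searches candidate models and hyperparameters and promotes the best performer. The scikit-learn calls below are real APIs; the dataset and search space are illustrative choices.

```python
# Minimal sketch of automated machine learning: the program searches over
# hyperparameters, trains every candidate and selects the best one with
# no human tuning in the loop. Dataset and search space are illustrative.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 200], "max_depth": [8, None]},
    cv=3,                      # 3-fold cross-validation per candidate
)
search.fit(X_train, y_train)   # trains and scores every candidate model

print("chosen model:", search.best_params_)
print("held-out accuracy:", search.best_estimator_.score(X_test, y_test))
```

Scale that loop up from a toy grid to open-ended search over architectures and behaviors, wire its output to actuators in the internet of things, and the “rogue agency” concern becomes easier to picture.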

The broader issue of AI’s impact and control extends far beyond the walls of the enterprise. Issues surrounding AI are political, cultural and global.

In a speech earlier this week during a privacy conference in Belgium, Apple Chief Executive Officer Tim Cook warned about the need for AI to respect human values and called for industry adoption of privacy standards to prevent misuse. Cook’s speech represented a call for control that will likely continue as AI adoption grows.

“This is going to be a highly polarized environment, and it’s going to be a set of very diverse standards that will emerge,” said Dave Vellante, Wikibon founder and host of theCUBE. “Our technology got us into the problem, but technology can help us to get out of the problem.”

