UPDATED 17:16 EST / MAY 20 2016


New research paper explains how to create a malevolent AI

Artificial intelligence is quickly becoming one of the most powerful tools in the tech industry, and while AI can be put to harmless tasks like defeating the world Go champion, it also has the potential for misuse. A malevolent AI would be like a computer virus on steroids, and although no cases are currently known, researchers Federico Pistono and Roman Yampolskiy from the University of Louisville in Kentucky believe that we should be preparing for the possibility now.

Pistono and Yampolskiy have published a research paper titled “Unethical Research: How to Create a Malevolent Artificial Intelligence.” In it, they explain that it is entirely possible for a malevolent AI to be created in the right environment, and they lay out the warning signs the cybersecurity industry should be watching for.

First and foremost, Pistono and Yampolskiy say that any organization interested in creating a malevolent AI would resist any form of oversight of its research.

“If a group decided to create a malevolent artificial intelligence, it follows that preventing a global oversight board committee from coming to existence would increase its probability of succeeding,” Pistono and Yampolskiy told MIT Technology Review.

They explained that one strategy such an organization might use would be to spread conflicting information intended to mislead the public into dismissing the risks posed by AI research. If the public does not see malevolent AI as a legitimate threat, an oversight committee with real authority would be unlikely to receive sufficient funding.

Pistono and Yampolskiy also noted that AI created with closed-source code could pose a higher risk than open-source AI.

“It is well known among cryptography and computer security experts that closed-source software and algorithms are less secure than their free and open-source counterparts,” Pistono and Yampolskiy said. “The very existence of non-free software and hardware puts humanity at a greater risk.”

Photo by AlexDixon 
