

Artificial intelligence is quickly becoming one of the most powerful tools in the tech industry, and while AI can be used for harmless tasks like defeating world Go champions, it also has the potential for misuse. A malevolent AI would be like a computer virus on steroids, and while there are currently no known cases, researchers Federico Pistono and Roman Yampolskiy of the University of Louisville in Kentucky believe we should already be preparing for the possibility.
Pistono and Yampolskiy have published a research paper called "Unethical Research: How to Create a Malevolent Artificial Intelligence." In it, they explain that it is entirely possible for a malevolent AI to be created in the right environment, and they lay out the warning signs the cybersecurity industry should be watching for.
First and foremost, Pistono and Yampolskiy say that any organization interested in creating a malevolent AI would resist any form of oversight of its research.
“If a group decided to create a malevolent artificial intelligence, it follows that preventing a global oversight board committee from coming to existence would increase its probability of succeeding,” Pistono and Yampolskiy told MIT Technology Review.
They explained that one possible strategy such an organization might use would be to spread conflicting information intended to mislead the public into ignoring the potential risks posed by AI research. If the public does not see malevolent AI as a legitimate threat, an oversight committee with real authority would be unlikely to receive sufficient funding.
Pistono and Yampolskiy also noted that AI created with closed-source code could pose a higher risk than open-source AI.
“It is well known among cryptography and computer security experts that closed-source software and algorithms are less secure than their free and open-source counterpart,” Pistono and Yampolskiy said. “The very existence of non-free software and hardware puts humanity at a greater risk.”