Microsoft, MITRE and partners release adversarial AI framework
Microsoft Corp. and the federally funded MITRE research organization today released the Adversarial ML Threat Matrix, a framework designed to help cybersecurity experts prepare for attacks targeting artificial intelligence models.
The framework is available on GitHub. Besides Microsoft and MITRE, it includes contributions from a dozen other organizations, including Nvidia Corp., IBM Corp. and Carnegie Mellon University.
The framework is the group’s answer to an emerging class of online threats known as adversarial machine learning. AI models perform tasks such as identifying objects in images by analyzing the information they ingest for certain common patterns. Researchers have established that hackers could inject malicious patterns into an input file to trick an AI into producing an undesired result.
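To make the idea concrete, here is a minimal sketch of an adversarial perturbation. It is not drawn from the framework itself: a toy linear classifier with made-up weights stands in for a real image model, and the perturbation follows the spirit of the fast gradient sign method.

```python
import numpy as np

# Toy "classifier": a fixed linear model with random weights, a stand-in
# for a real image model, purely for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=100)

def predict(x):
    return 1 if x @ w > 0 else 0  # class 1 vs. class 0

# A clean input the model places in class 1.
x = rng.normal(size=100)
if predict(x) == 0:
    x = -x  # flip the sample so the clean input sits in class 1

# Adversarial perturbation: nudge every feature slightly against the
# model's weights. For a linear model, the gradient of the score with
# respect to the input is just w, so sign(w) is the attack direction.
eps = 1.1 * (x @ w) / np.abs(w).sum()  # small per-feature step
x_adv = x - eps * np.sign(w)

# x_adv differs from x by only eps per feature, yet predict(x_adv)
# now returns class 0: the tiny pattern flips the classification.
```

The same principle scales to deep networks, where the attacker computes the gradient of the loss with respect to the input pixels instead of reading off fixed weights.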
Two years ago, an Auburn University team managed to fool a Google LLC image recognition model into misclassifying objects in photos by slightly adjusting the position of the objects in each input image. More recently, researchers demonstrated a method of activating smart speakers with hidden voice commands that can’t be heard by humans.
The Adversarial ML Threat Matrix contains a collection of adversarial machine learning vulnerabilities and hacking tactics contributed by the organizations backing the project. One sample exploit, based on an internal Microsoft experiment, demonstrates a method of targeting AI models with misleading input data. Another example covers a scenario where attackers manage to replicate an AI to find weak points in the neural network.
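The replication scenario can be sketched in miniature: if an attacker can freely query a model and observe its outputs, a surrogate can be fitted to those responses and then probed offline for weaknesses. The "victim" below is a hypothetical hidden linear scorer, not any real system described in the matrix.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Victim" model: a hidden linear scorer the attacker cannot inspect.
secret_w = rng.normal(size=5)

def victim_query(x):
    return x @ secret_w  # the attacker sees only this output score

# The attacker sends random probe inputs and records the responses.
probes = rng.normal(size=(50, 5))
responses = np.array([victim_query(p) for p in probes])

# Fitting a surrogate by least squares reproduces the victim's behavior;
# for a linear victim, the hidden weights are recovered exactly.
stolen_w, *_ = np.linalg.lstsq(probes, responses, rcond=None)
```

Real models are far more complex, but the pattern is the same: enough query-response pairs let an attacker train a stand-in model and search it for adversarial inputs that transfer back to the original.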
The idea is that companies can use the Adversarial ML Threat Matrix to test their AI models’ resilience by simulating realistic attack scenarios. Moreover, Microsoft sees the framework serving as an educational resource for the cybersecurity community. Security professionals can use it to familiarize themselves with the kind of threats their organizations’ systems could face in the not-so-distant future.
“Our survey pointed to marked cognitive dissonance especially among security analysts who generally believe that risk to ML systems is a futuristic concern,” wrote Microsoft executive Ann Johnson in a blog post co-authored with engineer Ram Shankar Siva Kumar. “This is a problem because cyber attacks on ML systems are now on the uptick.”