UPDATED 16:24 EDT / OCTOBER 22 2020

AI

Microsoft, MITRE and partners release adversarial AI framework

Microsoft Corp. and the federally funded MITRE research organization today released the Adversarial ML Threat Matrix, a framework designed to help cybersecurity experts prepare against attacks targeting artificial intelligence models.

The framework is available on GitHub. Besides Microsoft and MITRE, it includes contributions from a dozen other organizations, including Nvidia Corp., IBM Corp. and Carnegie Mellon University.

The framework is the group’s answer to an emerging class of online threats known as adversarial machine learning. AI models perform tasks such as identifying objects in images by analyzing the information they ingest for certain common patterns. Researchers have established that hackers could inject malicious patterns into an input file to trick an AI into producing an undesired result.
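To make the idea concrete, below is a minimal, illustrative sketch of one common evasion technique, the fast gradient sign method, in which an attacker adds a small perturbation to an input image so a classifier mislabels it. This is a generic example for readers unfamiliar with the concept, not code from the Adversarial ML Threat Matrix; the model and image loading shown in the usage comments are hypothetical placeholders.

```python
# Illustrative sketch of an adversarial "evasion" attack (fast gradient sign method).
# Not part of the Microsoft/MITRE framework; model and data loading are placeholders.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a copy of `image` nudged in the direction that increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Add a small, often human-imperceptible perturbation along the gradient sign.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()

# Hypothetical usage:
# model = load_pretrained_classifier()            # placeholder, not a real API
# adv = fgsm_perturb(model, image_tensor, true_label)
# model(adv).argmax(dim=1)                        # may no longer match true_label
```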

Two years ago, an Auburn University team managed to fool a Google LLC image recognition model into misclassifying objects in photos by slightly adjusting the position of the objects in each input image. More recently, researchers demonstrated a method of activating smart speakers with hidden voice commands that can’t be heard by humans.

The Adversarial ML Threat Matrix contains a collection of adversarial machine learning vulnerabilities and hacking tactics contributed by the organizations backing the project. One sample exploit, based on an internal Microsoft experiment, demonstrates a method of targeting AI models with misleading input data. Another example covers a scenario in which attackers replicate an AI model to find weak points in its neural network.
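The replication scenario is often called model stealing or model extraction: an attacker queries the victim model as a black box and trains a local surrogate on the returned predictions, then probes the surrogate offline for weaknesses. The sketch below is a simplified, assumed version of that idea; the `victim_predict` callable is a hypothetical stand-in for the target model's prediction API, not anything defined by the framework.

```python
# Simplified sketch of model replication ("model stealing"): train a surrogate
# on the victim's black-box predictions. `victim_predict` is a hypothetical
# placeholder for the target model's prediction endpoint.
import numpy as np
from sklearn.linear_model import LogisticRegression

def steal_model(victim_predict, n_queries=5000, n_features=20):
    # Probe the victim with synthetic inputs and record its predicted labels.
    queries = np.random.randn(n_queries, n_features)
    stolen_labels = victim_predict(queries)
    # Fit a surrogate that approximates the victim's decision boundary.
    surrogate = LogisticRegression(max_iter=1000).fit(queries, stolen_labels)
    return surrogate

# The attacker can then search the surrogate offline for inputs the victim is
# likely to misclassify, without generating suspicious traffic against it.
```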

The idea is that companies can use the Adversarial ML Threat Matrix to test their AI models’ resilience by simulating realistic attack scenarios. Microsoft also sees the framework serving as an educational resource for the cybersecurity community: security professionals can use it to familiarize themselves with the kinds of threats their organizations’ systems could face in the not-so-distant future.

“Our survey pointed to marked cognitive dissonance especially among security analysts who generally believe that risk to ML systems is a futuristic concern,” wrote Microsoft executive Ann Johnson in a blog post co-authored with engineer Ram Shankar Siva Kumar. “This is a problem because cyber attacks on ML systems are now on the uptick.”

Image: TheDigitalArtist/Pixabay
