Microsoft, MITRE and partners release adversarial AI framework
Microsoft Corp. and the federally funded MITRE research organization today released the Adversarial ML Threat Matrix, a framework designed to help cybersecurity experts prepare for attacks targeting artificial intelligence models.
The framework is available on GitHub. Besides Microsoft and MITRE, it includes contributions from a dozen other organizations, including Nvidia Corp., IBM Corp. and Carnegie Mellon University.
The framework is the group’s answer to an emerging class of online threats known as adversarial machine learning. AI models perform tasks such as identifying objects in images by analyzing the information they ingest for certain common patterns. Researchers have established that hackers could inject malicious patterns into an input file to trick an AI into producing an undesired result.
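To make the idea concrete, below is a minimal sketch of one well-known way to craft such malicious patterns, the fast gradient sign method, written in Python with PyTorch. The model, input and epsilon value are toy stand-ins for illustration, not part of the Adversarial ML Threat Matrix itself.

```python
import torch
import torch.nn as nn

# Toy classifier: flattens a 28x28 "image" and maps it to 10 classes.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input
label = torch.tensor([3])                             # its true class

# Compute the loss gradient with respect to the input pixels.
loss = loss_fn(model(image), label)
loss.backward()

# Nudge every pixel a small step (epsilon) in the direction that
# increases the loss, then clamp back to a valid pixel range.
epsilon = 0.1
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

The perturbation is small enough that a human would see an essentially unchanged image, yet it is aimed precisely at the patterns the model relies on, which is what makes this class of attack hard to spot.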
Two years ago, an Auburn University team managed to fool a Google LLC image recognition model into misclassifying objects in photos by slightly adjusting the position of the objects in each input image. More recently, researchers demonstrated a method of activating smart speakers with hidden voice commands that can’t be heard by humans.
The Adversarial ML Threat Matrix contains a collection of adversarial machine learning vulnerabilities and hacking tactics contributed by the organizations backing the project. One sample exploit, based on an internal Microsoft experiment, demonstrates a method of targeting AI models with misleading input data. Another example covers a scenario where attackers replicate an AI model and probe the copy for weak points in the underlying neural network.
The idea is that companies can use the Adversarial ML Threat Matrix to test their AI models’ resilience by simulating realistic attack scenarios. Moreover, Microsoft sees the framework serving as an educational resource for the cybersecurity community. Security professionals can use it to familiarize themselves with the kinds of threats their organizations’ systems could face in the not-so-distant future.
“Our survey pointed to marked cognitive dissonance especially among security analysts who generally believe that risk to ML systems is a futuristic concern,” wrote Microsoft executive Ann Johnson in a blog post co-authored with engineer Ram Shankar Siva Kumar. “This is a problem because cyber attacks on ML systems are now on the uptick.”