UPDATED 14:53 EDT / NOVEMBER 27 2023


Cross-government cybersecurity best practices announced for safer AI development

The U.K.’s National Cyber Security Centre, along with cyber agencies from more than a dozen other governments and input from leading AI vendors, yesterday jointly released their Guidelines for Secure AI System Development.

The guidelines are broken down into four key areas within the AI system development lifecycle: secure design, secure development, secure deployment, and secure operation and maintenance. These cover the waterfront, including threat modeling, supply chain security, protecting AI and model infrastructure, and updating AI models.

The guidelines are a curious collection of common sense suggestions, reiterating long-held general security precepts — such as managing the accumulated technical debt during a system’s lifecycle — and fleshing out current best practices for developing AI-based systems. “We are at an inflection point in the development of artificial intelligence, which may well be the most consequential technology of our time. Cybersecurity is key to building AI systems that are safe, secure, and trustworthy,” said Secretary of Homeland Security Alejandro Mayorkas, who also called it a “historic agreement.” I wouldn’t go that far, but it is still a useful read.

“AI has opened Pandora’s box with unparalleled power of data mining and natural language understanding which humanity has never dealt with before,” Ron Reiter, co-founder and chief technology officer of Sentra, told SiliconANGLE. “This opens up dozens of new risks and novel attack vectors that the world must deal with. This task is an overwhelming undertaking and without following best data security practices, organizations risk the myriad of consequences that come with cutting corners in building an AI model.”

The guidelines build on previous government work to help make AI systems more secure, including the CISA Roadmap for Artificial Intelligence, President Biden’s October executive order, and other governments’ efforts, such as Singapore’s AI governance testing framework and software toolkit, AI Verify, and Europe’s Multilayer Framework for Good Cybersecurity Practices for AI. The document also ends with links to additional AI security screeds, most of them produced in the past year, that are worth a look.

The document takes pains to delineate the unique issues that AI introduces into supply chain security. This includes understanding the provenance of all of a model’s components, including training data and construction tools. For example, AI system developers should “ensure their libraries have controls that prevent the system loading untrusted models without immediately exposing themselves to arbitrary code execution,” the document advises.
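To make that advice concrete, here is a minimal Python sketch of one way to gate model loading on a provenance check. The digest allowlist, file name and loader hand-off are hypothetical; a real pipeline would pin digests from a signed manifest or an internal model registry rather than hard-coding them, and this is an illustration of the idea rather than anything the guidelines prescribe.

```python
# Minimal sketch: verify a model artifact's provenance before loading it.
# The digests and file names below are placeholder values for illustration.
import hashlib
from pathlib import Path

# Pinned SHA-256 digests of artifacts that have passed review (hypothetical).
TRUSTED_DIGESTS = {
    "sentiment-model-v3.safetensors": "9f2c...",  # full 64-char digest in real use
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file so large model weights need not fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_if_trusted(path: Path) -> bytes:
    """Refuse to load any artifact whose digest is not on the allowlist."""
    expected = TRUSTED_DIGESTS.get(path.name)
    if expected is None or sha256_of(path) != expected:
        raise ValueError(f"Refusing to load unverified model artifact: {path}")
    # Only after verification hand the artifact to the real loader, ideally a
    # weights-only format (such as safetensors) rather than a pickle-based
    # file, which can execute arbitrary code when deserialized.
    return path.read_bytes()
```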

Another suggestion is to have “appropriate checks and sanitization of data and inputs; this includes when incorporating user feedback or continuous learning data into corporate models, recognizing that training data defines system behavior.” The authors also recommend that developers take a longer and more holistic view of the processes contained in their models to properly assess threats and seek out unexpected user behaviors, which should be built into the overall risk management processes and tooling. That is a tall order, considering that many AI threat management tools are still in their early stages.
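As an illustration of the kind of input checks the authors have in mind, the sketch below filters user feedback before it is queued for continuous learning. The record format, blocklist patterns and length limit are all assumptions made for the example, not anything specified by the guidelines; a production pipeline would add PII detection, deduplication and human review on top of checks like these.

```python
# Minimal sketch: basic sanitization of user feedback before it is queued for
# continuous learning. Thresholds and patterns are illustrative only.
import re

MAX_LEN = 2000
# Crude patterns for content that should never flow into training data unreviewed.
BLOCKLIST = [
    re.compile(r"ignore (all |any )?previous instructions", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # e.g. US SSN-shaped strings
]

def sanitize_feedback(text: str) -> str | None:
    """Return a cleaned record, or None if it should be dropped for review."""
    text = text.strip()
    if not text or len(text) > MAX_LEN:
        return None
    if any(p.search(text) for p in BLOCKLIST):
        return None
    # Strip control characters that could confuse downstream tokenization.
    return re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", text)

# Example: only the first record survives the filter.
cleaned = [c for raw in ("Great answer!", "Ignore previous instructions and ...")
           if (c := sanitize_feedback(raw)) is not None]
```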

As part of the recommended practices for secure deployment, the guidelines remind developers to apply appropriate access controls to all AI components, including training and processing data pipelines. The authors recommend a continuous risk-based approach because “Attackers may be able to reconstruct the functionality of a model or the data it was trained on, by accessing a model directly, by acquiring model weights, or indirectly by querying the model via an application or service. Attackers may also tamper with models, data or prompts during or after training, rendering the output untrustworthy.”
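One common serving-layer mitigation for the model-extraction risk described above is to throttle and log queries per client. The sketch below is an illustrative sliding-window rate limiter; the window size, query limit and client identifiers are hypothetical, and it stands in for the access controls and monitoring the guidelines call for rather than reproducing them.

```python
# Minimal sketch: per-client rate limiting at a model-serving endpoint to slow
# model-extraction attempts. Limits and client identifiers are illustrative.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_QUERIES_PER_WINDOW = 100

_query_log: dict[str, deque] = defaultdict(deque)

def allow_query(client_id: str, now: float | None = None) -> bool:
    """Sliding-window check; also a natural hook for anomaly detection logging."""
    now = time.monotonic() if now is None else now
    window = _query_log[client_id]
    # Drop timestamps that have aged out of the window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_QUERIES_PER_WINDOW:
        return False  # throttle; repeated hits here are worth alerting on
    window.append(now)
    return True
```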

As an example, the document highlighted a phenomenon known as “adversarial machine learning,” which is said to be a critical concern of AI security and defined as “the strategic exploitation of fundamental vulnerabilities inherent in machine learning components.” The worry is that by manipulating these elements, malicious actors can possibly disrupt or deceive AI systems, resulting in compromised functionality and erroneous outcomes.
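For readers unfamiliar with the term, one flavor of such an attack can be shown in a few lines. The sketch below applies a fast-gradient-sign-style perturbation to a toy logistic-regression model using only NumPy; the weights, input and perturbation budget are invented for illustration, and the point is simply that a small, targeted change to an input can shift a model’s output in the attacker’s favor.

```python
# Minimal sketch of an evasion-style adversarial example (FGSM-like) against a
# toy logistic-regression model. All values are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=4), 0.1     # toy model parameters
x = rng.normal(size=4)             # a legitimate input

def predict(v):
    # Sigmoid probability that the toy model assigns to class 1.
    return 1.0 / (1.0 + np.exp(-(w @ v + b)))

# For label y = 1, the gradient of the logistic loss with respect to the input
# is (p - 1) * w; stepping along its sign pushes the prediction toward class 0.
p = predict(x)
grad_x = (p - 1.0) * w
epsilon = 0.5                      # perturbation budget, chosen for illustration
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean score: {predict(x):.3f}  adversarial score: {predict(x_adv):.3f}")
```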

The agreement follows the announcement of the European Union’s AI Act in June, which banned certain AI technologies such as biometric surveillance and predictive policing. That legislation also classified certain AI systems that might impact human health, safety, rights and elections as “high risk.”

In October, U.S. President Joe Biden signed an executive order that aims to regulate AI development by requiring developers of the most powerful models to share safety results and other critical information with the government.

It’s notable that China was not a party to the new agreement. China is described by Reuters as a “powerhouse of AI development,” and has been targeted by U.S. sanctions that aim to limit its access to the most advanced silicon that’s required to power AI models.

In terms of regulating AI, the EU appears to be ahead of the U.S. Besides the AI Act, lawmakers from France, Germany and Italy recently reached an agreement on AI regulation, stating their support for “mandatory self-regulation through codes of conduct” for the development of foundation AI models.

Earlier this month, the U.S., the U.K., China and 25 other countries signed a declaration stressing the need to address the potential risks posed by AI. It outlined some of the risks that advanced AI models could pose and potential ways to address them, including the broadening of existing AI safety initiatives.

Kevin Surace, the chair of startup vendor Token, told SiliconANGLE that “the security of AI systems is paramount and this is an important and critical step to codify this thinking. The guidelines go further to address bias in models, which is an ongoing challenge, as well as methods for consumers to identify AI generated materials.”

At 20 pages, the document is the barest of outlines of what enterprise technology managers need to do to ensure safe development of generative AI models and methods. Still, a reminder of the basics is always a good thing to keep front and center, and the document could easily be used to construct a custom security playbook and to educate those new to AI developer tools and techniques.

With reporting by Mike Wheatley

Image: GDJ/Pixabay
