UPDATED 10:00 EDT / JUNE 08 2022

AI

How to manage artificial intelligence risk and security: Focus on five priorities

In most organizations, artificial intelligence models are “black boxes,” where only data scientists understand what exactly AI does. That can create significant risk for organizations.

Large, sensitive datasets are often used to train AI models, creating privacy and data breach risks. The use of AI increases an organization’s threat vectors and broadens its attack surface. AI further creates new opportunities for benign mistakes that adversely affect model and business outcomes.

Risks that are not understood cannot be mitigated. A recent Gartner survey of chief information security officers reveals that most organizations have not considered the new security and business risks posed by AI or the new controls they must institute to mitigate those risks. AI demands new types of risk and security management measures and a framework for mitigation.

Here are the top five priorities that security and risk leaders should focus on to effectively manage AI risk and security within their organizations:

1. Capture the extent of AI exposure

Machine learning models are opaque to most users, and unlike conventional software systems, their inner workings are often hard to inspect even for the most skilled experts. Data scientists and model developers generally understand what their machine learning models are trying to do, but they cannot always decipher the internal structure or the algorithmic means by which the models process data.

This lack of understandability severely limits an organization’s ability to manage AI risk. The first step in AI risk management is to inventory all AI models used in the organization, whether they are a component of third-party software, developed in-house or accessed via software-as-a-service applications. This should include identifying interdependencies among various models. Then rank the models based on operational impact, with the idea that risk management controls can be applied over time based on the priorities identified.
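The inventory-and-rank step above can be sketched in code. This is a minimal illustration, not a prescribed tool: the record fields, impact scale and model names are all hypothetical, chosen only to show how an inventory with dependencies and impact-based ranking might be structured.

```python
# Hypothetical sketch of an AI model inventory ranked by operational impact.
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    name: str
    source: str              # e.g. "in-house", "third-party" or "SaaS"
    operational_impact: int  # 1 (low) to 5 (critical), assigned by risk owners
    depends_on: list = field(default_factory=list)  # interdependencies

def rank_models(inventory):
    """Return models ordered so the highest-impact ones are reviewed first."""
    return sorted(inventory, key=lambda m: m.operational_impact, reverse=True)

# Illustrative entries only; a real inventory would be populated by discovery.
inventory = [
    ModelRecord("churn-predictor", "in-house", 3),
    ModelRecord("fraud-scorer", "SaaS", 5, depends_on=["churn-predictor"]),
    ModelRecord("resume-screener", "third-party", 4),
]

for model in rank_models(inventory):
    print(model.name, model.operational_impact)
```

Ranking by operational impact lets risk management controls be rolled out incrementally, starting with the models whose failure would hurt the business most.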

Once models are inventoried, the next step is to make them as explainable or interpretable as possible. “Explainability” means the ability to produce details, reasons or interpretations that clarify a model’s functioning for a specific audience. This will give risk and security managers context to manage and mitigate business, social, liability and security risks posed by model outcomes.

2. Drive awareness through an AI risk education campaign

Staff awareness is a critical component of AI risk management. First, get all participants, including the CISO, the chief privacy officer, the chief data officer and the legal and compliance officers, on board, and recalibrate their mindset on AI. They should understand that AI is not “like any other app” – it poses unique risks and requires specific controls to mitigate such risks. Then, go to the business stakeholders to expand awareness of the AI risks that you need to manage.

Together with these stakeholders, identify the best way to build AI knowledge across teams and over time. For example, see if you can add a course on fundamental AI concepts to the enterprise’s learning management system. Collaborate with application and data security counterparts to help foster AI knowledge among all organizational constituents.

3. Eliminate AI data exposure through a privacy program

According to a recent Gartner survey, privacy and security are viewed as primary barriers to AI implementations. Adopting data protection and privacy programs can effectively eliminate exposure of internal and shared AI data.

There are a range of approaches that can be used to access and share essential data while still meeting privacy and data protection requirements. Determine which data privacy technique, or combination of techniques, makes the most sense for the organization’s specific use cases. For example, investigate techniques such as data masking, synthetic data generation or differential privacy.
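Of the techniques named above, differential privacy is the easiest to show concretely. The sketch below implements the classic Laplace mechanism for a single counting query; the salary figures are made up, and a production system would also track the privacy budget across queries.

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Release true_value plus Laplace(sensitivity/epsilon) noise, which
    satisfies epsilon-differential privacy for this single query."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5
    # Inverse-CDF sampling of the Laplace distribution.
    sign = 1 if u >= 0 else -1
    noise = -scale * sign * math.log(1 - 2 * abs(u))
    return true_value + noise

# Illustrative data: release a count without exposing any one record.
salaries = [52_000, 61_000, 58_000, 75_000]
true_count = sum(1 for s in salaries if s > 55_000)  # a counting query has sensitivity 1
noisy_count = laplace_mechanism(true_count, sensitivity=1, epsilon=1.0)
print(noisy_count)
```

Smaller values of epsilon add more noise and give stronger privacy; the right trade-off depends on the organization's specific use case.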

Address data privacy requirements when exporting or importing data to and from external organizations. Techniques such as fully homomorphic encryption and secure multiparty computation tend to be more useful in these scenarios than for protecting data from internal users and data scientists.

4. Incorporate risk management into model operations

AI models need special-purpose processes as part of model operations, or ModelOps, to make AI reliable and productive. AI models must be continuously monitored for business value leakage and unpredicted — sometimes adverse — outcomes, as environmental factors continuously change.

Effective monitoring requires AI model understanding. Specialized risk management processes must be an integral component of ModelOps to make AI more trustworthy, accurate, fair and resilient to adversarial attacks or benign mistakes.

Controls should be applied continuously across the model lifecycle: development, testing, deployment and ongoing operations. Effective controls will detect malicious acts, benign mistakes and unanticipated changes to AI data or models that result in unfairness, damage, inaccuracy, poor model performance and predictions, and other unintended consequences.
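One basic ModelOps monitoring control is a drift check: compare a live window of model scores against a baseline captured at deployment. The sketch below uses a simple z-test on the window mean; the score values are invented, and real monitoring would typically track several statistics per feature and output.

```python
import statistics

def detect_drift(baseline, live, z_threshold=3.0):
    """Flag drift when the live window's mean falls outside
    z_threshold standard errors of the baseline distribution."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    standard_error = sigma / (len(live) ** 0.5)
    z = abs(statistics.mean(live) - mu) / standard_error
    return z > z_threshold

# Illustrative model scores captured at deployment vs. in production.
baseline_scores = [0.70, 0.72, 0.69, 0.71, 0.73, 0.70, 0.68, 0.72]
stable_window   = [0.71, 0.69, 0.72, 0.70]
drifted_window  = [0.50, 0.48, 0.52, 0.49]

print(detect_drift(baseline_scores, stable_window))   # False: within normal range
print(detect_drift(baseline_scores, drifted_window))  # True: value leakage likely
```

A drift alert is a trigger for investigation, retraining or rollback, which is exactly the kind of continuous control ModelOps is meant to provide.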

5. Adopt AI security measures against adversarial attacks

Detecting and stopping attacks on AI requires new techniques. Malicious attacks against AI can lead to significant organizational harm and loss, including financial, reputational or related to intellectual property, sensitive customer data or proprietary data. Application leaders working with their security counterparts must add controls to their AI applications that detect anomalous data inputs, malicious attacks and benign input errors.

Implement a full set of conventional enterprise security controls around AI models and data, as well as new AI-specific integrity measures, such as training models to tolerate adversarial inputs. Finally, guard against AI data poisoning and detect input errors by applying fraud, anomaly and bot detection techniques.
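A simple first line of defense for the anomalous-input detection described above is a statistical guard in front of the model: reject or flag inputs that deviate sharply from training-time statistics. The sketch below uses a z-score check on a single numeric feature with made-up values; production systems would combine this with richer fraud and bot detection.

```python
import statistics

class InputGuard:
    """Flags model inputs that deviate sharply from training-time statistics,
    catching both benign input errors and crude adversarial inputs."""

    def __init__(self, training_values, z_threshold=4.0):
        self.mean = statistics.mean(training_values)
        self.stdev = statistics.stdev(training_values)
        self.z_threshold = z_threshold

    def is_anomalous(self, value):
        z = abs(value - self.mean) / self.stdev
        return z > self.z_threshold

# Illustrative training-time values for one feature.
guard = InputGuard([100, 102, 98, 101, 99, 103, 97, 100])
print(guard.is_anomalous(101))   # False: typical input
print(guard.is_anomalous(500))   # True: wildly out of range
```

Such checks cannot stop carefully crafted adversarial examples on their own, but they catch the benign mistakes and blunt attacks that make up much of real-world exposure.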

Avivah Litan is a distinguished VP analyst at Gartner Inc., covering blockchain innovation, AI trust, risk and security management. She wrote this article for SiliconANGLE. Litan and other Gartner analysts are presenting the latest research and advice for security and risk management leaders at the Gartner Security & Risk Management Summit 2022, taking place this week in National Harbor, Maryland.

Image: geralt/Pixabay
