UPDATED 08:00 EST / JULY 19 2023

SECURITY

How Google’s AI Red Team is shaping the future of cybersecurity

A new report from Google LLC’s AI Red Team today explores a critical capability that the search giant deploys to support its Secure AI Framework: red teaming.

Google released its Secure AI Framework in June to help companies protect artificial intelligence models from hacking. The framework is aimed at ensuring that when AI models are implemented, they are secured by default. SAIF can help companies stave off attempts to steal a neural network’s code and training dataset and can be useful for blocking other types of attacks.

The new report delves into the Red Team’s operations and its crucial role in preparing organizations for potential AI-based cyberthreats. A red team, in terms of security, is a group that pretends to be an enemy and attempts a digital intrusion against an organization for security testing purposes.

However, Google’s AI Red Team takes the traditional red team role a step further. Alongside emulating threats ranging from nation-states to individual criminals, the team also brings specialized AI subject matter expertise to their task, which is claimed to be an increasingly important asset today.

With the capacity to simulate real-world threat scenarios, the Google AI Red Team employs attacker tactics, techniques and procedures to test various system defenses. Using their AI expertise, the team can highlight potential vulnerabilities in AI systems by adapting relevant research to real products and features that use AI technology. The ultimate goal of such testing is to understand the impacts of these simulated attacks and identify opportunities to improve safety and security measures.

The findings from the tests and simulations often present challenges, especially given the rapidly evolving nature of AI technology. Some attacks may not have straightforward fixes, reinforcing the need for incorporating insights gleaned from the red-team process into an organization’s workflow. The integration can help guide research and product development efforts and enhance the overall security of AI systems.

The report also emphasizes the value of traditional security controls. Despite AI systems’ unique nature, proper system and model lockdowns can mitigate many potential vulnerabilities. It’s noted that some AI system attacks can be detected in a similar fashion to conventional attacks, underlining the relevance of standard security protocols.

“We hope this report helps other organizations understand how we’re using this critical team to secure AI systems and that it serves as a call to action to work together to advance SAIF and raise security standards for everyone,” the report concludes. “We recommend that every organization conduct regular red team exercises to help secure critical AI deployments in large public systems.”

Image: Bing Image Creator

A message from John Furrier, co-founder of SiliconANGLE:

Your vote of support is important to us and it helps us keep the content FREE.

One click below supports our mission to provide free, deep, and relevant content.  

Join our community on YouTube

Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger, and many more luminaries and experts.

“TheCUBE is an important partner to the industry. You guys really are a part of our events and we really appreciate you coming and I know people appreciate the content you create as well” – Andy Jassy

THANK YOU