UPDATED 15:57 EST / SEPTEMBER 19 2024

Upen Sachdev, principal partner at Deloitte and Touche, and Steph Hay, head of UX, Google Cloud Security, at Google, talk to theCUBE during mWISE 2024 about the importance of gen AI security.

Gen AI security: Enhancing protection through collaboration, trustworthy AI frameworks and data management

With artificial intelligence now integrated into critical business systems, gen AI security should be top of mind, since large language models are vulnerable to a range of attack vectors.

To mitigate these risks, security and data teams should work together throughout the AI development lifecycle, with an emphasis on continuous monitoring, input/output controls and early security involvement, according to Steph Hay (pictured, right), head of UX, Google Cloud Security, at Google.

“Being able to collapse the attack surface and enable teams to work together,” Hay stated. “LLMs are uniquely positioned to bring in disparate data that might be, for example, in threat intelligence. We have to add scale, create the kinds of controls on a few different levels to be able to protect the model, the application, the infrastructure and the data. Things against prompt injection, notebook security scanning, being able to monitor all this.”

Hay and Upen Sachdev (left), principal partner at Deloitte & Touche LLP, spoke with theCUBE Research’s John Furrier and Savannah Peterson at mWISE 2024, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the importance of gen AI security since LLMs are prone to various risks, such as sensitive information disclosure, training data poisoning and prompt injection. (* Disclosure below.)

Trustworthy AI as a stepping stone toward gen AI security

The principles behind a trustworthy AI framework include fairness, accountability and safety. Applying these principles strengthens gen AI security and helps mitigate attacks, according to Sachdev.

“When we talk to clients, we look at this from two perspectives,” he stated. “One is gen AI attacking us, how do we protect against it? Then secondly, how do we use gen AI securely in a trustworthy manner? That’s where we built what we call our trustworthy AI framework. Basically with three core principles. One is you want fairness from your model. Second is you want accountability, you want it to not hallucinate and finally keeping that model secure so we are not giving away our data.”

User experience matters in gen AI security, since key factors such as precision, speed and confidence must be built into the tools. The result is the AI-infused, AI-guided experiences teams need to defend better, according to Hay.

“A lot of the tools that we would design for the defender, we would want to be easy to use, but also be able to convey the signals of trust that would be required to be able to rely on those,” she noted. “There’s a huge user experience challenge with AI. In fact, I often say AI is UX, especially the future of the SOC.”

Given that data is the backbone of gen AI models, data engineering and data science teams should take center stage when addressing real-time threats. That calls for close collaboration between security and data teams to enhance productivity, Sachdev pointed out.

“We are getting more work around master data management, which is organizing an organization’s data,” he explained. “Then securing an organization’s data, doing role-based access, making sure there is good data sanctity in terms of what gets absorbed into the model. I feel data is the underlying layer behind gen AI and we are seeing organizations more in that foundational stage of doing better with their data.”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE Research’s coverage of mWISE 2024.

(* Disclosure: Deloitte & Touche LLP sponsored this segment of theCUBE. Neither Deloitte nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
