

Now grounded as a core enterprise concern, artificial intelligence is shifting to hybrid cloud AI strategies, forcing firms to reconsider the best pathways to secure models, data and application programming interfaces.
In this context, hybrid cloud AI adoption is rewriting the economics of security. As generative and agentic systems scale, the mix of platforms and models expands — and so does the attack surface. But leaders are no longer facing just legacy risks; They’re confronting AI-powered threats that evolve as fast as the tooling, making hybrid cloud AI governance a board-level discussion, according to Prerak Mehta (pictured, right), director of customer engineering, strategic AI and ISV, North America, at Google Cloud.
F5’s John Maddison and Google Cloud’s Prerak Mehta talk with theCUBE about achieving secure outcomes with hybrid cloud AI.
“The biggest threat we have today securing AI models is from other AI,” he said. “When you are interacting with multiple entities … I think you need a security layer that is looking from a layer above – end-to-end between endpoints.”
Mehta and John Maddison (left), chief product and corporate marketing officer of F5 Inc., spoke with theCUBE’s Rebecca Knight for the Google Cloud Partner AI Series event, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the enterprise shift from AI experiments to AI outcomes, underscored by data-driven measurement. (* Disclosure below.)
In hybrid cloud AI architectures, enterprises stitch together multiple models and platforms across data centers. That makes east–west traffic as critical as north–south at the edge and shifts enforcement to layer seven, where tokens, prompts and API calls live — not just the packet layer. But this shift expands exposure to AI-specific attack methods, according to Maddison.
“AI increases the attack surface,” he said. “[Not only] have you got models, but eventually agentic and agents out there. The attack surface itself has increased. Then the types of attacks – like prompt injections, for example – will be different. Yes, there’s a lot of conversation around AI, but there’s a lot of securing of AI that needs to happen.”
But performance matters as much as protection when large language model calls chain across chips and centers. Encryption and policy can add latency; Smart load balancing and API security reduce that drag while guarding sensitive data in motion. To meet those demands, Google Cloud is collaborating with specialists to deliver cross-cloud, end-to-end controls that keep hybrid cloud AI fast and safe. Joint efforts such as these help unify these paths better than securing from siloes, Mehta noted.
“We partnered with F5 and NetApp to create a solution to secure all the LLM traffic between different endpoints, including APIs to provide customers a secure and [performant] solution,” he said. “The unique value added … is that when you have a vast amount of data and disparate data sources, it is very hard to secure from one particular place.”
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the Google Cloud Partner AI Series event:
(* Disclosure: Google LLC sponsored this segment of theCUBE. Neither Google nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Support our mission to keep content open and free by engaging with theCUBE community. Join theCUBE’s Alumni Trust Network, where technology leaders connect, share intelligence and create opportunities.
Founded by tech visionaries John Furrier and Dave Vellante, SiliconANGLE Media has built a dynamic ecosystem of industry-leading digital media brands that reach 15+ million elite tech professionals. Our new proprietary theCUBE AI Video Cloud is breaking ground in audience interaction, leveraging theCUBEai.com neural network to help technology companies make data-driven decisions and stay at the forefront of industry conversations.