The race for AI dominance is about more than which model will reign supreme. The real action is happening underneath the surface, where hyperscalers such as Google LLC are focused on the infrastructure and the data pipelines that models run on — especially as agentic AI infrastructure becomes the true battleground for enterprise scale.
In an analysis of Google Cloud Next 2026, John Furrier, executive analyst at theCUBE Research, explained that Google is positioning itself as the dominant player in the agentic control layer — the operating system for the agentic enterprise and a core component of agentic AI infrastructure.
“The control plane is that horizontal layer that moves data around and it connects to all the systems,” Furrier told co-host Alison Kosik in a day one keynote analysis, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. “It’s like the main nerve center. It’s like the backbone, the spine of all the systems — and whoever owns the control plane kind of wins.”
Enterprises are speeding ahead with major IT investments and AI deployments — with mixed results. Furrier noted the increasing presence of AI-native applications and agent-driven software coding, with some businesses reporting that they now have more machines than humans doing code work. Other organizations are still trying to determine which use cases will yield the biggest impact from AI — a question that in turn affects decision-making responsibilities across the enterprise and underscores the importance of agentic AI infrastructure.
“You have a new kind of currency going on with tokens, and that’s changing the organizational structures,” Furrier said. “That’s changing how people are organizing their teams. That’s changing how people work. It’s a complete reset in the corporate world.”
During interviews at Google Cloud Next with industry experts at Google, OpenText Corp., Advanced Micro Devices Inc. and Sabre Corp., Furrier and Kosik discussed how AI is forcing enterprises to rethink data and compute infrastructure and where the biggest efficiencies are currently being found. They also covered examples of how organizations can achieve real value from AI with industry experts from McKinsey & Company Inc., Deloitte Consulting LLP and Covered California, among others. (* Disclosure below.)
Here’s the complete video interview with John Furrier and Alison Kosik, part of SiliconANGLE’s and theCUBE’s coverage of Google Cloud Next:
Here are three key insights you may have missed from Google Cloud Next 2026:
AI models are only as good as the data they are based on. Google Cloud is focused on enabling an intelligent agentic data cloud that is capable of delivering not just the correct data, but the best data at the right time, a critical requirement for scalable agentic AI infrastructure, according to Sailesh Krishnamurthy, vice president of engineering for databases at Google Cloud.
“The models are amazing. The models surprise us every day. They can do a lot of work, but they don’t have all the context,” Krishnamurthy told theCUBE. “The context is in the data. The heart of the data is actually stored in these systems. You need to provide that context in order to answer the questions.”
Unlike legacy databases, which simply store data and return exact results when queried, agentic data clouds need the intelligence to parse the data and deliver the best-quality results. To enable that, databases need capabilities such as graph traversal, vector embeddings, full-text search and relational operations all in one system, without requiring data to move between environments — all foundational to modern agentic AI infrastructure.
Here’s the complete video interview with Sailesh Krishnamurthy:
Waqas Ahmed, VP of AI engineering at OpenText, echoed those thoughts. OpenText and Google Cloud are working together on a full agentic stack built on context engineering, data sovereignty and open interoperability standards.
“Enterprise information is not just files on a drive,” Ahmed told theCUBE. “It is organized, governed, tagged with context, tagged with metadata and integrated with the business applications and customer processes. To wire that into the AI providers and LLMs, you have to be able to build that context so you are not flooding the LLMs with extra information, but you’re giving them the right information at the right time.”
OpenText and Google Cloud are creating industry-specific solutions that provide a secure environment for users to easily access decades of enterprise documentation, according to Yemi Falokun, global AI and machine learning partner engineering lead at Google.
“What they’re now doing is taking the foundation they’ve built from 2023 into the AI agentic era,” he said. “We’re now deeply integrating the Gemini Enterprise Agent Platform so that we can allow our joint customers to deploy secure autonomous solutions at scale, leveraging those decades of information that they store on behalf of their users.”
Here’s the complete video interview with Waqas Ahmed and Yemi Falokun:
Enterprises need substantial compute resources for AI inference, without sacrificing security and control. This has implications for both the enterprise cloud and on-premises environments, and Google has partnered with vendors such as Nvidia Corp. and Dell Technologies Inc. to help enable AI-ready infrastructure, explained Muninder Sambi, VP and general manager for networking and security at Google.
“The challenge is [enterprises] had a choice: Either you can be sovereign and be compliant or give it up and go to the cloud,” Sambi told theCUBE. “With Google Distributed Cloud, we are actually bringing the power and the intelligence of Gemini and all that Google has to offer for an on-premises environment.”
Kubernetes has a key role to play in this new environment as one of the primary orchestrators of AI agents across hybrid environments, according to Drew Bradstock, senior product director for Kubernetes and Google Compute Engine at Google.
“Kubernetes has become that operating system for AI — from training to inference to [reinforcement learning],” he said. “This has really been the heart of everything. We’re finding ourselves [under] the gun a lot more to adapt Kubernetes quite quickly, even faster than the [open-source community] can keep up.”
Here’s the complete video interview with Muninder Sambi and Drew Bradstock:
Businesses are also grappling with rapidly escalating costs. In the search for financial efficiency, a trusted and time-tested compute standard — x86 — has become the answer because it is already deeply embedded in both cloud and on-premises infrastructure.
“Most of the enterprise and larger customers that I see out there … are running on-premise in their own real estate, but also running in cloud,” explained Mike Thompson, director of cloud product and go-to-market at AMD. “When you’re running containers, that’s one of the things that makes it easier. You have a container; it contains everything you need. You can drop it in. It’s really hard to run a hybrid environment with containers on Arm, because generally those servers aren’t available on-prem. I see containers being leveraged specifically on x86 because of the ease of migration between the two.”
The efficiency of x86 enabled immediate benefits when his business migrated more than 50,000 virtual CPUs to AMD-based instances on Google Cloud, according to Tim McArdle, senior FinOps engineer at Sabre, a travel technology company.
“When we moved to the AMD platform, we experienced a price benefit. It’s faster, we have a smaller footprint and we made zero code changes,” he said. “For us, that’s a huge win-win-win. We are able to take that savings and invest it in the new world of agentic AI. That has helped us immensely.”
Here’s the complete video interview with Mike Thompson and Tim McArdle:
Google is investing big in bringing together the various players in the agentic AI delivery chain. The company’s $750 million commitment to build a partner ecosystem will help accelerate outcomes for over 120,000 members, according to Philip Larson, managing director of the Google Cloud Partner Network at Google.
“Agents from our partners are going to talk to agents from the Google Cloud Partner Network across onboarding, training, etc.,” Larson said. “It’s going to infuse the content from my system into their internal systems. It’s going to make targeted recommendations around what learning their reps should do in real time from within their systems. All of a sudden, the people are going to be sitting on top with an intelligence layer that’s helping them figure out how to add value fast.”
Here’s the complete video interview with Philip Larson:
C-suite executives who say they’re still struggling to find business value from large investments in AI may need to raise their sights, according to Asutosh Padhi, senior partner and global leader of firm strategy at McKinsey & Company Inc. Successful AI projects require the ambition and focus of the entire C-suite to pursue big, foundation-shaking projects.
“Start with one of your tougher business problems,” Padhi told theCUBE. “Demonstrate the fact that it works and then you can scale it up from there. When you start with something that will be a needle mover for the enterprise value, then everyone pays attention. The necessary focus from a change management and capability building standpoint goes into it. When you’re working with something on a simple use case, it’s something that’s happening on the side that no one is really paying attention to. Even if it succeeds, no one really cares.”
Here’s theCUBE’s complete video interview with Asutosh Padhi:
To that end, Covered California, the U.S.’s largest state-based health insurance marketplace serving more than 16 million enrolled residents, has provided an example of what’s possible with ambitious AI initiatives. In partnership with Deloitte Consulting and Google, Covered California leveraged Google Document AI to streamline its eligibility and enrollment verification processes.
The project resulted in an estimated 24,000 hours of annual savings in service center operations. Documents that once took 72 hours to verify could be completed in seconds, according to Shilpa Akunuri, chief technology officer at Covered California.
“Our staff was spending a tedious number of hours — no one wakes up on a Monday thinking that I’m going to manually process tons of documents today,” she said. “It’s not just manual labor hour savings, but they can now focus on high-value customer support and providing that operational excellence to the next level.”
In this case, AI not only led to faster operations, but a more empathetic customer journey, explained Vishal Prabhu, managing director at Deloitte.
“While we hear that AI might replace the human touch, what I would submit to you is: Isn’t AI the key to bring it back?” he said. “Technology is doing the heavy lifting and people can focus on people.”
Here’s theCUBE’s complete video interview with Shilpa Akunuri and Vishal Prabhu:
Catch up on our complete video coverage of Google Cloud Next 2026:
(* Disclosure: TheCUBE is a paid media partner for Google Cloud Next. Sponsors of theCUBE’s event coverage do not have editorial control over content on theCUBE or SiliconANGLE.)