Google LLC has emerged as the only cloud “hyperscaler” with a leading frontier artificial intelligence large language model – Gemini – and today it issued a raft of announcements designed to capitalize on that current advantage.
The search giant’s cloud unit launched the Gemini Enterprise Agent Platform as its new hub for building AI agents. Google also unveiled a new Gemini Enterprise application designed to transition AI from an isolated tool into a secure, collaborative autonomous engine for the enterprise. The latest releases were described by Google Cloud Chief Executive Thomas Kurian (pictured) as the next chapter in the ongoing AI saga.
“You have moved beyond the pilot, the experimental phase is behind us,” Kurian said during his keynote address at Google Cloud Next in Las Vegas. “How do you move AI into your entire enterprise? The answer is a unified stack.”
As SiliconANGLE analysts have noted, Google is one of the few key tech players that has the resources to optimize the stack end-to-end. Its focus, based on this week’s announcements at Google Cloud Next, has been on maximizing the compute layer, the global network, security, data engines and the application platform to generate enterprise AI value.
Gemini plays a central role in this strategy, as evidenced by its integration in a multitude of the announcements made today. The new Gemini Enterprise application is designed to solve frustrations around siloed AI agents that have proven tough to oversee. It adds a new “Inbox” for agentic management, providing a centralized command center for guiding and managing agents in use.
Gemini also powers the newly announced Data Agent Kit, a data engineering experience for leveraging favored practitioner tools, and a new shared workspace feature, called Projects, for pivoting Gemini from a solo AI assistant to a collaborative tool. Gemini was featured prominently in Google Cloud’s security announcements, wrapped around new governance tooling and agentic identity solutions.
“We are moving in a bold and responsible way,” said Sundar Pichai, CEO of Google and its parent company Alphabet Inc., who spoke to the conference in a prerecorded video. “Think of it as mission control for the agentic enterprise. One thing is perfectly clear: We are firmly in the agentic Gemini era.”
Being “mission control” for the agentic world will still require powerful hardware that can run the models delivering the brainpower behind reasoning machines. Google addressed this as well with today’s announcement of two new tensor processing units, or TPUs.
The company introduced the TPU 8t and TPU 8i, custom silicon designed to serve as the workhorses for model training and inference. TPU 8t employs a specialized accelerator to address the memory access and memory bandwidth bottlenecks for LLMs that have hindered progress in AI deployment.
“[TPU] 8t is a powerhouse optimized for training,” Amin Vahdat, chief technologist for AI Infrastructure at Google, said in a presentation today. “We can now turn months of training into weeks.”
The custom-designed TPU 8i is architected to host a larger key-value cache at inference time for LLMs, which can significantly accelerate text generation. The technology behind the 8i design improves latency, another roadblock for AI, by shrinking the network diameter and the number of hops a data packet must take to cross the system.
“We’ve finally broken the memory wall that slows long context decoding,” Vahdat said.
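To illustrate the idea behind a key-value cache at inference time: without one, generating each new token recomputes attention keys and values for the entire prefix; with one, only the new token's key and value are computed and appended, so decoding cost per step stays flat. The sketch below is a simplified, illustrative single-head attention loop, not a description of Google's TPU 8i internals; all dimensions and inputs are made up for the example.

```python
import math

def attention_step(q, K_cache, V_cache):
    """Single-head attention for one query vector over cached keys/values."""
    d = len(q)
    scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K_cache]
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Weighted sum of the cached value vectors
    return [sum(w * v[i] for w, v in zip(weights, V_cache)) for i in range(d)]

d = 4                      # head dimension (illustrative)
K_cache, V_cache = [], []  # grow by one entry per generated token
for step in range(5):
    x = [float(step + i) for i in range(d)]  # stand-in for the new token's hidden state
    K_cache.append(x)      # append the new K/V instead of recomputing the whole prefix
    V_cache.append(x)      # (real models use separate learned projections for q, k, v)
    out = attention_step(x, K_cache, V_cache)

print(len(K_cache))  # 5 cached entries, one per decoded token
```

The cache trades memory for compute: each decode step does work proportional to the context length rather than recomputing the full prefix, which is why hosting a larger cache lets longer contexts decode faster.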
Though Google’s announcements this week underscored its confidence in Gemini to anchor an agentic AI strategy, statements by company executives pointed toward a development worth watching in the evolution of AI for the enterprise. Competition for market share in enterprise AI will hinge on which of the tech industry’s major players can serve as the control layer where AI does its work.
Pichai alluded to this in his description of “mission control,” and Google’s announcements this week of new features such as Agent-to-Agent Orchestration, Agent Gateway and Agent Observability spotlight the need for bringing a measure of order into the AI equation.
“We built the agent platform to manage the entire lifecycle of an agent,” Kurian noted.
Or as Brian Delahunty, vice president of cloud AI at Google Cloud, put it in a press Q&A: “Our vision is this AI-powered enterprise.”
There are indications that Google’s strategy is beginning to translate into financial results and market momentum. Alphabet reported 48% revenue growth year-over-year for its cloud operations in the fourth quarter of 2025, a number that represented the fastest growth rate among the “Big Three” hyperscalers. Cloud backlog also surged 55% quarter-over-quarter.
Data points such as these offer evidence that the machine learning and AI wave is carrying Google Cloud to more success than it has previously seen. Google’s bid to be the operating system for enterprise AI received significant reinforcement this week, and its future success will likely depend on whether that message resonates with the growing number of users embracing AI to get work done.
“Companies are not just redesigning workflows, they are turning their employees into AI builders,” Kurian said. “We offer you an integrated stack with the freedom to choose the world’s best chips and models. This platform is ready, so what will each of you build?”