UPDATED 08:00 EDT / APRIL 22 2026

INFRA

Google announces innovations in mega-scale networking for the agentic era  

Rising to meet demands for lower latency and greater scale, Google LLC today introduced a mega-scale data center network fabric and cross-cloud infrastructure aimed at delivering agentic artificial intelligence.

Virgo Network, Google’s new networking system for AI infrastructure, is designed to speed communications both inside accelerator clusters and across the broader data center, where workloads need access to memory, compute and storage.

The company said it accomplished that by “flattening” the network so traffic moves through fewer layers. The result, according to the company, is a system that can connect up to 134,000 chips — including its new eighth-generation TPU 8t training processors — and delivers as much as 47 petabits per second of bi-directional bandwidth.
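Those two headline figures imply a rough per-chip number that Google did not state directly. A quick back-of-the-envelope check, assuming the aggregate bandwidth is divided evenly across the maximum chip count:

```python
# Hypothetical division of Google's published aggregate figure across the
# maximum chip count; Google did not publish a per-chip number in this form.
CHIPS = 134_000           # maximum connected accelerator chips
AGGREGATE_BPS = 47e15     # 47 petabits per second, bidirectional

per_chip_gbps = AGGREGATE_BPS / CHIPS / 1e9
print(f"~{per_chip_gbps:.0f} Gb/s bidirectional per chip")  # roughly 351 Gb/s
```

The even-split assumption is a simplification, since real fabrics provision bandwidth unevenly across tiers, but it gives a sense of the per-accelerator scale behind the aggregate claim.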

Google claimed Virgo also provides more than four times the bandwidth per accelerator over the previous generation.

It is also designed to be robust, using its own software suite to handle critical issues and failures within the network. At the scale Virgo Network operates, failures, stragglers and glitches are inevitable. To tackle this, the system provides deep observability along with automated routing and architecture that heal and mitigate faults and deal with hangs and stragglers.

At this scale, system-wide resilience requires a solid network foundation. Virgo Network integrates independent switching planes that provide robust fault isolation, protecting cluster-wide throughput from localized failures.

Google said the scaling pattern behind Virgo makes it a completely different kind of product: a campus-as-supercomputer, custom-built to handle AI workloads. It provides sub-millisecond telemetry to monitor systems, report on transient issues and congestion, and optimize buffer management across the hardware and software stack.

The objective is to provide predictable latency at increasingly large scale and meet the needs of what Google calls the “agentic AI era,” in which more and more AI models make tool calls on millisecond timescales and require enough throughput to ensure those calls don’t hang or hiccup.

Building cross-cloud infrastructure for agentic AI workloads 

In addition to data center networking, Google also announced connectivity and security layers that provide a foundation for agentic AI workloads in the cloud. 

The company focused its product and service updates around four areas: fluid compute to enable cost-effective, high-speed central processing unit access for AI agents; secure cross-cloud connectivity to deliver governed access to agents; a unified data layer that includes smart storage to transform passive data into curated intelligence; and digital sovereignty for managing secure keys to bring models into private enclaves where the data lives, instead of the other way around.

With fluid compute, the otherwise spiky access pattern of AI agents can be tamed, giving enterprises access to computation at the speed of logic. It provides access to CPUs optimized for high-speed inference and operations such as agentic orchestration and retrieval-augmented generation. CPUs can be used to augment the sheer power of graphics processing units.
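The CPU-side work described here, such as the lookup step of retrieval-augmented generation, can be sketched in miniature. Everything below is illustrative, not a Google API: a toy corpus and a bag-of-words cosine similarity standing in for a real embedding-based retriever.

```python
# Minimal sketch of a RAG retrieval step running on CPUs between model
# calls. Documents, names and the scoring method are all illustrative.
from collections import Counter
import math

DOCS = {
    "net": "virgo network flattens layers for lower latency",
    "tpu": "eighth generation tpu chips train large models",
    "cpu": "cpus handle orchestration and retrieval between gpu calls",
}

def vectorize(text: str) -> Counter:
    """Bag-of-words vector; real systems use learned embeddings."""
    return Counter(text.split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str) -> str:
    """Return the id of the best-matching document for a query."""
    q = vectorize(query)
    return max(DOCS, key=lambda k: cosine(q, vectorize(DOCS[k])))

print(retrieve("which chips train models"))  # prints "tpu"
```

The point of the sketch is that this retrieval and orchestration work is ordinary, latency-sensitive CPU code, which is why the announcement pairs agent workloads with CPU instances rather than accelerators alone.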

To implement this, the company said it provides virtual machines — enabled by Google Compute Engine and Google Kubernetes Engine — with C4N and M4N CPUs, capable of delivering up to 95 million packets per second, up to 40% faster than other leading hyperscalers.

With secure cross-cloud connectivity, enterprises get access to Agent Gateway, a controller that governs agent access and natively supports and polices protocols such as Model Context Protocol and Agent2Agent. It also provides visibility and security across multicloud infrastructure, monitoring and protecting traffic that flows between different networks.
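The governance role described for a gateway like this can be illustrated with a default-deny allow-list over protocols and tool actions. The policy shape and every name below are assumptions for illustration, not Agent Gateway’s actual API.

```python
# Hypothetical sketch of gateway-style governance: a request is forwarded
# only if its protocol/action pair appears on an explicit allow-list.
ALLOWED = {
    "mcp": {"read_file", "search"},   # Model Context Protocol tool calls
    "a2a": {"delegate_task"},         # Agent2Agent interactions
}

def authorize(protocol: str, action: str) -> bool:
    """Default-deny: allow only explicitly listed protocol/action pairs."""
    return action in ALLOWED.get(protocol, set())

print(authorize("mcp", "search"))         # True
print(authorize("mcp", "delete_bucket"))  # False: denied by default
```

Default-deny is the conventional design choice for this kind of control point, since an agent that can call arbitrary tools across clouds is otherwise limited only by whatever credentials it holds.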

With the unified data layer, nothing is siloed. Smart storage transforms otherwise dark data into a knowledge asset by embedding metadata into data objects so that AI agents can understand how to use them. This annotation lets agents make use of all data, enabling insight extraction and semantic search. Agents can then quickly find anything, whether it’s hidden in spreadsheets, documents, PDFs, images or elsewhere.
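The idea of embedding metadata into objects so agents can discover them can be shown with a toy object store. The schema and field names here are invented for illustration; the announcement does not specify the actual metadata format.

```python
# Toy illustration of "smart storage": opaque objects carry descriptive
# metadata, so an agent can filter by meaning rather than by filename.
objects = [
    {"key": "q3.xlsx", "metadata": {"kind": "spreadsheet", "topic": "revenue"}},
    {"key": "scan.pdf", "metadata": {"kind": "pdf", "topic": "contract"}},
    {"key": "deck.png", "metadata": {"kind": "image", "topic": "revenue"}},
]

def find(topic: str) -> list[str]:
    """Return keys of all objects annotated with the given topic."""
    return [o["key"] for o in objects if o["metadata"]["topic"] == topic]

print(find("revenue"))  # ['q3.xlsx', 'deck.png'] — spans file formats
```

Note that the query crosses formats (a spreadsheet and an image) without the agent parsing either file, which is the practical payoff of annotating data at the storage layer.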

The Knowledge Catalog maps business knowledge into a graph that allows AI agents to understand how everything works and provide accurate, grounded responses. It’s a foundation that allows AI training without the need to migrate data – agents can interact with it directly. 
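A catalog that “maps business knowledge into a graph” can be sketched as entities connected by labeled edges, with a traversal an agent could use to ground an answer. The data and schema below are illustrative assumptions, not the Knowledge Catalog’s actual model.

```python
# Minimal knowledge-graph sketch: nodes are business entities, edges are
# labeled relationships, and a bounded traversal finds related entities.
GRAPH = {
    "orders":   [("stored_in", "sales_db"), ("owned_by", "sales_team")],
    "sales_db": [("replicated_to", "analytics_lake")],
}

def related(entity: str, depth: int = 2) -> set[str]:
    """Collect entities reachable from `entity` within `depth` hops."""
    seen: set[str] = set()
    frontier = {entity}
    for _ in range(depth):
        frontier = {dst for src in frontier for _, dst in GRAPH.get(src, [])}
        seen |= frontier
    return seen

print(sorted(related("orders")))  # ['analytics_lake', 'sales_db', 'sales_team']
```

A traversal like this is what lets an agent answer “where does order data actually live?” from relationships alone, without migrating or even reading the underlying tables.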

Image: Shutterstock/Nepool
