UPDATED 13:35 EDT / OCTOBER 30 2025

INFRA

Arista Networks debuts next-gen platforms for AI data centers

Scaling artificial intelligence is not just a compute problem but increasingly a network issue, a reality that Arista Networks addressed Wednesday by unveiling its latest switch family targeted toward AI data centers.

Its new generation, the R4 Series, is based on Broadcom Jericho3/Qumran3D silicon and is designed for AI, cloud data center and routed backbone deployments. Last week I talked with Arista about the launch, and Brendan Gibbs, its vice president of AI, routing and switching platforms, told me the goal was to “reduce total cost of ownership, while ensuring high performance, low AI job completion time, low power consumption and integrated security,” which is certainly a lofty goal.

From a performance perspective, the 800-gigabit-per-second R4 system supports high-capacity data center/AI clusters and sets a new high-water mark for speed with the introduction of 3.2-terabits-per-second HyperPorts. Anyone tracking the explosion in AI spending knows that networking is now as strategic as compute.

Though 3.2-Tbps ports might seem like a lot, in AI environments, if all the graphics processing units are driving 400G interfaces, a network running at that speed is already maxed out. This creates the need for a higher-capacity aggregation spine. The capacity Arista offers addresses today’s needs while leaving some headroom for growth.
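The spine-sizing math is simple enough to sketch. A rough back-of-the-envelope in Python, where the eight-GPU server and the per-GPU 400G interface are common AI-cluster assumptions on my part, not figures from Arista’s announcement:

```python
# Back-of-the-envelope spine sizing. The cluster shape below is an
# illustrative assumption, not a figure from Arista's announcement.

GPU_NIC_GBPS = 400       # per-GPU network interface speed (assumption)
GPUS_PER_SERVER = 8      # typical AI training server (assumption)

# Aggregate uplink demand from one fully loaded server:
server_uplink_gbps = GPU_NIC_GBPS * GPUS_PER_SERVER
print(server_uplink_gbps)       # 3200 Gbps, i.e. 3.2 Tbps

# So a single 3.2-Tbps HyperPort can carry the traffic of this many
# 400G-attached GPUs with no oversubscription:
gpus_per_hyperport = 3_200 // GPU_NIC_GBPS
print(gpus_per_hyperport)       # 8
```

Under these assumptions, one HyperPort absorbs an entire eight-GPU server at line rate, which is why the aggregation spine, not the leaf, becomes the pressure point as clusters scale.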

Under the hood

The new R4 routers, which feature efficient two-tier leaf-and-spine designs, come in both fixed and modular form factors to cover multiple scalable use cases. All R4 products deliver the full range of L3 features of EOS, Arista’s operating system, for modern architectures using EVPN, VXLAN, MPLS and SR/SRv6. Each R4 system ensures predictable latency via hierarchical deep packet buffering, coupled with scalable protection against packet loss during transient congestion.

Gibbs said AI growth continues to stoke significant demand for next-generation AI spines. “That drives the need for a dense 800 Gig in the backbone,” he said. “We’re also seeing strong demand for traditional data centers. AI gets all the press these days, but a lot of our business is traditional data center networking with enterprises. And we’re seeing speed changes there as well. We’re seeing repatriation of data from public cloud back to on-prem, or just workload expansion with workload complexity down at the circle level.”

800G for AI centers

The sweet spot for the new router series is high-scale data center backbones, data center interconnects, AI training and inference, and scale-across routing. To meet the demand for high-speed transport at the very high port densities required by AI centers, Arista provides a range of density options, including petabit scale, across the cloud/AI titan, neocloud, service provider and enterprise customer segments, according to the company’s announcement.

Security is a foundational element of the new 800GbE-based offerings in the 7800R4 and 7280R4 families. Each platform supports wire-speed encryption on every port simultaneously with TunnelSec, including MACsec, IPsec and VXLANsec options. The multilayer encryption technologies protect customer data in transit from malicious interception.

To buffer or not to buffer

Though there are several network solutions for scale-across and AI data centers, there is some industry debate as to whether to use shallow or deep buffers. The choice involves a tradeoff: deep buffers cost more memory and can add queuing latency, while shallow buffers risk packet loss that hurts throughput and training stability. The argument against deep buffering is that, when the network is oversubscribed, the buffers fill and then drain, creating latency.

On the call, I discussed this with Martin Hull, vice president and general manager of cloud and AI. “The buffers are there as a protection mechanism,” he explained. “When a packet comes in, it is sent along and only buffered if the destination is not ready to receive it. In this case, there are two choices – drop it or buffer it – and the former creates far more latency and longer AI job completion times.”

Gibbs added, “While these are called deep buffers, technically they are hierarchical hybrid buffers with on-chip shallow buffers and on-package deep buffers. If you’re managing and tuning the workloads, packets go in and out with no problem and the network will be ultra-low latency. However, if something crops up with congestion, either in the box or at the far end, shallow buffers will drop the packets, causing retransmission and skyrocketing latency and job completion time.”
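The tradeoff Hull and Gibbs describe can be illustrated with a toy burst model. Everything here is a made-up illustration of the principle, not a measurement of any Arista platform: a transient incast burst arrives faster than the port can drain, and the buffer either absorbs the excess or drops it.

```python
# Toy model of the shallow- vs. deep-buffer tradeoff during a
# transient congestion event. All numbers are arbitrary assumptions
# chosen for illustration, not vendor specifications.

def drops_during_burst(buffer_pkts: int, burst_pkts: int, drain_pkts: int) -> int:
    """Packets dropped when burst_pkts arrive in an interval where only
    drain_pkts can be forwarded and buffer_pkts of headroom exists."""
    excess = burst_pkts - drain_pkts        # packets that must queue
    return max(0, excess - buffer_pkts)     # whatever the buffer can't absorb

burst, drain = 10_000, 4_000                # assumed incast burst vs. drain rate

shallow = drops_during_burst(buffer_pkts=1_000, burst_pkts=burst, drain_pkts=drain)
deep = drops_during_burst(buffer_pkts=50_000, burst_pkts=burst, drain_pkts=drain)

print(shallow)  # 5000 -> drops force retransmissions, stalling the AI job
print(deep)     # 0    -> burst absorbed, at the cost of queuing delay
```

In this sketch the deep buffer trades a bounded queuing delay for zero loss, which is the argument for deep buffering when retransmissions would stretch job completion time.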

Improved efficiency

Gibbs said the new platforms leverage the same EOS used across the company’s portfolio, expanded to address new use cases. He said the new R4 portfolio delivers significant advantages, including lower job completion time and lower power per gigabit per second, improving efficiency on a watt-per-gig basis. The rise of AI data centers has put a microscope on power utilization, and the new systems from Arista are more efficient even while introducing the 3.2-Tbps links.
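Watts per gigabit is the metric to watch here. A quick sketch of how it is computed, using entirely hypothetical power and throughput figures (Arista has not published the numbers in this form):

```python
# Watts-per-gigabit comparison. The wattage and throughput values are
# invented to demonstrate the metric, not Arista specifications.

def watts_per_gig(system_watts: float, throughput_gbps: float) -> float:
    """Power efficiency: lower is better."""
    return system_watts / throughput_gbps

# Hypothetical previous-generation vs. next-generation systems:
prev_gen = watts_per_gig(system_watts=2_200, throughput_gbps=12_800)
next_gen = watts_per_gig(system_watts=2_600, throughput_gbps=25_600)

print(round(prev_gen, 3))   # 0.172 W per Gbps
print(round(next_gen, 3))   # 0.102 W per Gbps
```

The point of the metric: even if a new system draws more total power, doubling throughput can still cut the energy cost of moving each bit, which is what matters at AI data-center scale.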

The 7280R4, which features a compact fixed form factor, complements the 7800R4. Both lines feature the same data plane forwarding capabilities, enabling customers to right-size their solutions to match needed port speeds, densities, space and network architectures.

Key 7280R4 family enhancements include:

  • 32-port 800 GbE system, ideal as an AI/DC spine or backbone router
  • 64-port 100 GbE with 10-port 800 GbE system, ideal for an AI/DC leaf

New data center leaf switches

Arista is also introducing new 7020R4 Ethernet leaf switches for high-speed direct server connectivity as an AI or DC leaf. The switches are designed for organizations with complex workloads, heterogeneous environments, and high-end workstations.

The 7020R4 family’s capabilities include 10GbE copper or 25GbE SFP port options, as well as 100GbE uplinks with wire-speed TunnelSec encryption per port for cybersecurity protection.

Availability

The 7800R4 modular systems, a pair of new linecards, and the two new 7280R4 platforms are already shipping. The new 7020R4 Chassis platforms and a new 7800R4 with HyperPort are scheduled to ship in Q1 of next year.

Final thoughts

Unlike the speculative “dot-com” overbuilds, today’s data and AI centers are responding to seemingly endless business demand. GPU and accelerator utilization is at record highs, and the incremental business value delivered is tangible, not theoretical. Arista’s leadership in the 800GbE switching market and its aggressive portfolio expansion are well-timed to benefit from a projected 90% average annual growth rate in this segment over the next five years.

Zeus Kerravala is a principal analyst at ZK Research, a division of Kerravala Consulting. He wrote this article for SiliconANGLE.

Photo: Arista
