

Networking devices are the lifeblood that keeps artificial intelligence models running smoothly and without interruption.
As AI takes up a growing share of data center capacity, Broadcom Inc. has been quick to innovate, releasing a series of products designed to deliver efficient, low-latency networking.
Broadcom’s Ram Velaga talks with theCUBE about networking for AI.
“Machine learning is a massive distributed computing system,” said Ram Velaga (pictured), senior vice president and general manager at Broadcom. “What connects all of this together is the network. The important thing about the network is it cannot be the bottleneck; the network has to move the data very quickly between these XPUs and [graphics processing units]. What we are doing is focusing on how do you scale this network to go as fast as it can.”
Velaga spoke with theCUBE’s John Furrier at VMware Explore, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed Broadcom’s newest products and handling latency between data centers. (* Disclosure below.)
The data required for AI models might not fit into a single data center, requiring packets to cross between data centers that could be more than 60 miles apart. That distance can result in dropped packets and significant latency, two issues Broadcom aims to address, according to Velaga.
“You need to be able to buffer traffic across these 100-kilometer distances,” he said. “If you have any congestion or if you have any packet drops across these 100-kilometer-separated data centers, you’re able to retransmit from the switches and not have to go back all the way into your host into the GPU, pull the data down again and retransmit.”
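As a rough illustration of why deep-buffered switches matter at that distance, the sketch below estimates the bandwidth-delay product of a 100-kilometer interconnect. The link speed (800 Gb/s) and the per-kilometer fiber delay are assumed values for the example, not figures from the interview.

```python
# Back-of-the-envelope buffer sizing for a long-haul data center interconnect.
# Assumptions (illustrative, not from the interview): an 800 Gb/s link and
# roughly 5 microseconds of propagation delay per kilometer of fiber.

LINK_SPEED_BPS = 800e9          # assumed per-link speed: 800 Gb/s
DISTANCE_KM = 100               # data center separation cited in the interview
FIBER_DELAY_US_PER_KM = 5       # typical light-in-fiber propagation delay

one_way_delay_s = DISTANCE_KM * FIBER_DELAY_US_PER_KM * 1e-6
rtt_s = 2 * one_way_delay_s

# A switch must hold roughly one bandwidth-delay product of in-flight data
# to absorb congestion or retransmit locally without stalling the link.
buffer_bits = LINK_SPEED_BPS * rtt_s
buffer_bytes = buffer_bits / 8

print(f"One-way delay: {one_way_delay_s * 1e3:.1f} ms")
print(f"Round-trip time: {rtt_s * 1e3:.1f} ms")
print(f"Buffer needed per link: {buffer_bytes / 1e6:.0f} MB")
```

Under these assumptions, a single 800-gigabit link needs on the order of 100 megabytes of buffer to ride out a one-millisecond round trip, which is why switch-level retransmission across 100 kilometers depends on deep buffers rather than the shallow on-chip memory of a typical top-of-rack switch.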
In the past three months, Broadcom has released three networking devices for communication inside the rack and between multiple data centers: Tomahawk 6, which doubles the bandwidth of its previous iteration; Tomahawk Ultra, a switch designed for ultra-low latency; and Jericho 4, which features deep buffers to better handle network congestion, Velaga noted.
“The important part about the Jericho class of devices is you can actually have many of these pulled together in a system that creates a very large fabric that is a very massive system, that is thousands of 3.2-terabit ports,” he said. “The magnitude of this is very large.”
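For a sense of that scale, here is a minimal sketch that multiplies the 3.2-terabit per-port speed by a hypothetical port count; the interview only says “thousands” of ports, so the count used here is an assumption.

```python
# Illustrative aggregate bandwidth of a Jericho-class fabric.
# The per-port speed comes from the interview; the port count is hypothetical.

PORT_SPEED_TBPS = 3.2     # per-port speed cited in the interview
NUM_PORTS = 4_000         # assumed port count, for illustration only

aggregate_tbps = PORT_SPEED_TBPS * NUM_PORTS
print(f"Aggregate fabric bandwidth: {aggregate_tbps:,.0f} Tb/s "
      f"(~{aggregate_tbps / 1_000:.0f} Pb/s)")
```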
Amid the current wave of innovation in AI networking, Velaga foresees further differentiation, suggesting that companies will eventually use a range of XPUs and GPUs. He compares the shift to the difference between a personal computer and the internet.
“A PC is a homogeneous system; it can be self-contained,” Velaga said. “But if you want to build the internet, which is a distributed system, you have to connect this together, and the only way you can do it is it has to be heterogeneous … it has to be where anybody can build it, so that tomorrow, if you don’t build it, somebody else can come and take your place.”
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of VMware Explore:
(* Disclosure: Broadcom Inc. sponsored this segment of theCUBE. Neither Broadcom nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)