UPDATED 09:00 EDT / JUNE 03 2025


Broadcom introduces Tomahawk 6 networking chip for large-scale AI clusters

Broadcom Inc. today debuted a new chip lineup, the Tomahawk 6 series, that’s optimized to power Ethernet switches in data centers.

The company says that customers can expect bandwidth of up to 102.4 terabits per second. That’s nearly twice the performance of the second-fastest Ethernet switch on the market. According to Broadcom, the Tomahawk 6’s speed makes it particularly well-suited for powering large artificial intelligence clusters.

The process of training a large language model involves spreading it across multiple graphics cards. Each card performs a different subset of the tasks involved in the training workflow. As a result, those tasks can run in parallel, which is significantly faster than completing them one after the other.
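As a rough illustration of that split, the sketch below places two halves of a network on different GPUs so that each device handles its own slice of the work. It assumes PyTorch and at least two CUDA devices, and the layer sizes are arbitrary.

```python
# Minimal sketch of model parallelism: two halves of a network live on
# different GPUs, so each device handles its own subset of the work.
# Assumes PyTorch and at least two CUDA devices; layer sizes are illustrative.
import torch
import torch.nn as nn

class TwoStageModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Linear(4096, 4096).to("cuda:0")  # first half on GPU 0
        self.stage2 = nn.Linear(4096, 4096).to("cuda:1")  # second half on GPU 1

    def forward(self, x):
        x = self.stage1(x.to("cuda:0"))
        # Activations cross the interconnect between devices here.
        return self.stage2(x.to("cuda:1"))

model = TwoStageModel()
out = model(torch.randn(8, 4096))
```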

The graphics cards involved in an LLM training project must keep their work coordinated. They do so by regularly sending data to one another, a task that consumes a significant amount of bandwidth. Inference is also bandwidth-intensive because it often requires the graphics cards that run an LLM to retrieve data over the network from remote storage equipment.
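For a concrete picture of that coordination traffic, here is a minimal sketch assuming PyTorch's torch.distributed package and a data-parallel setup: after every backward pass, each worker averages its gradients with its peers, and every one of those tensors crosses the network.

```python
# Toy sketch of the coordination traffic described above: after each backward
# pass, data-parallel workers average their gradients with an all-reduce.
# Assumes PyTorch with an NCCL or Gloo backend and the usual torchrun
# environment variables (RANK, WORLD_SIZE, MASTER_ADDR, MASTER_PORT).
import torch
import torch.distributed as dist

def sync_gradients(model: torch.nn.Module) -> None:
    """Average gradients across all workers; every tensor crosses the network."""
    world_size = dist.get_world_size()
    for param in model.parameters():
        if param.grad is not None:
            dist.all_reduce(param.grad, op=dist.ReduceOp.SUM)
            param.grad /= world_size

# Typical use inside a training loop:
#   dist.init_process_group(backend="nccl")
#   ...
#   loss.backward()
#   sync_gradients(model)
#   optimizer.step()
```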

According to Broadcom, Tomahawk 6 optimizes network speeds using a set of AI features dubbed Cognitive Routing 2.0. The technology detects when a network link is congested and reroutes data to other connections, which helps avoid performance bottlenecks. It doubles as an observability tool that can collect data about technical issues.
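Broadcom hasn't published the internals of Cognitive Routing 2.0, but the general idea behind congestion-aware adaptive routing can be sketched in a few lines: track how loaded each link is and steer new traffic onto the least-loaded path. The Link and AdaptiveRouter classes below are purely illustrative, not Broadcom's implementation.

```python
# Conceptual illustration only, not Broadcom's Cognitive Routing 2.0: a switch
# tracks per-link utilization and steers new flows onto the least-loaded path,
# which is the basic idea behind congestion-aware adaptive routing.
from dataclasses import dataclass, field

@dataclass
class Link:
    name: str
    capacity_gbps: float
    load_gbps: float = 0.0

    @property
    def utilization(self) -> float:
        return self.load_gbps / self.capacity_gbps

@dataclass
class AdaptiveRouter:
    links: list[Link] = field(default_factory=list)

    def route_flow(self, flow_gbps: float) -> Link:
        # Pick the least-utilized link; a congested link is avoided automatically.
        best = min(self.links, key=lambda l: l.utilization)
        best.load_gbps += flow_gbps
        return best

router = AdaptiveRouter([Link("uplink-0", 800), Link("uplink-1", 800)])
print(router.route_flow(200).name)  # uplink-0
print(router.route_flow(200).name)  # uplink-1, since uplink-0 is now more loaded
```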

Data center operators often link together the servers in their AI clusters using fiber-optic cables. Such cables are significantly faster than the copper wires historically used for the task. For customers with optical networks, Broadcom offers a version of the Tomahawk 6 that ships with co-packaged optics.

Before data from an AI server can be sent over a fiber-optic cable, it has to be turned into light. This task is usually performed by devices called pluggable transceivers that have to be attached to an AI cluster’s switches. Co-packaged optics, or CPO, technology integrates the features of a transceiver directly into a switch’s processor. That removes the need for standalone transceiver devices, which avoids the associated hardware costs and lowers power consumption.

The Tomahawk 6 can also be used in copper-based networks. Standard copper cables have fairly limited range, which means that AI servers must be placed near one another to ensure reliable connectivity. That constraint, in turn, can make it challenging for engineers to design AI clusters. The Tomahawk 6 ships with support for long-reach passive copper cables that can ease the design process.

Broadcom says that the chip is capable of powering clusters with up to 512 processors when it's used in a scale-up configuration, which includes a limited amount of network equipment. In two-tier scale-out networks that include a larger number of switches, the Tomahawk 6 can link together more than 100,000 processors.
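As a rough sanity check on the two-tier figure, the arithmetic below assumes the 102.4 Tbps of switch bandwidth is carved into 512 ports of 200 GbE (one possible configuration, not necessarily the one Broadcom has in mind) and a non-blocking leaf-spine layout in which each leaf splits its ports evenly between processor downlinks and spine uplinks.

```python
# Back-of-the-envelope check of the two-tier claim, assuming the 102.4 Tbps
# switch is carved into 512 x 200 GbE ports (one possible configuration).
switch_bandwidth_tbps = 102.4
port_speed_gbps = 200
ports_per_switch = int(switch_bandwidth_tbps * 1000 / port_speed_gbps)  # 512

# Non-blocking leaf-spine fabric: each leaf splits its ports evenly between
# processor-facing downlinks and spine-facing uplinks.
downlinks_per_leaf = ports_per_switch // 2        # 256
max_leaves = ports_per_switch                     # each spine port feeds one leaf
max_processors = max_leaves * downlinks_per_leaf  # 131,072

print(ports_per_switch, max_processors)  # 512 131072
```

Under those assumptions the fabric tops out at roughly 131,000 processor-facing ports, consistent with the "more than 100,000 processors" figure.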

The chip is shipping today. According to Broadcom, multiple customers plan to integrate the Tomahawk 6 into AI clusters with more than 100,000 processors. 

Photo: Broadcom
