UPDATED 17:49 EST / SEPTEMBER 24 2024

INFRA

Intel introduces top-end Xeon 6900P server processors with up to 128 cores

Intel Corp. today introduced a new line of server processors, the Xeon 6900P series, that’s designed for use in demanding environments such as artificial intelligence clusters.

The product family promises to provide about twice the performance per watt of the chipmaker’s previous-generation silicon. According to Intel, the Xeon 6900P series is also significantly better at running AI workloads. Chips in the lineup can complete some inference tasks 2.3 times faster than their predecessors.

“Demand for AI is leading to a massive transformation in the data center, and the industry is asking for choice in hardware, software and developer tools,” said Justin Hotard, executive vice president and general manager of Intel’s Data Center and Artificial Intelligence Group.

Intel’s Xeon server chip portfolio, of which the new Xeon 6900P series is part, implements two types of cores. One variety is optimized for power efficiency, while the other prioritizes performance. Whereas some Xeon chip families include a mix of performance- and efficiency-optimized cores, the new Xeon 6900P series features only performance-optimized cores to boost processing speeds.

The flagship processor in the series, the Xeon 6980P, ships with 128 cores that operate at a base frequency of two gigahertz. They can boost to nearly double that speed, 3.9 gigahertz, for short periods when running demanding workloads. The cores are supported by a 504-megabyte L3 cache in which the chip stores data that is actively used by the applications it runs.
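A quick back-of-envelope calculation, using only the figures quoted above, shows how those headline numbers relate: the boost clock is roughly 1.95 times the base clock, and the L3 cache works out to just under 4 megabytes per core if divided evenly.

```python
# Back-of-envelope figures for the Xeon 6980P, using only the numbers
# quoted in the article (as stated by Intel).
CORES = 128
BASE_GHZ = 2.0
TURBO_GHZ = 3.9
L3_MB = 504

cache_per_core_mb = L3_MB / CORES    # L3 share if split evenly across cores
turbo_uplift = TURBO_GHZ / BASE_GHZ  # boost clock relative to base clock

print(f"L3 per core: {cache_per_core_mb:.2f} MB")  # 3.94 MB
print(f"Turbo uplift: {turbo_uplift:.2f}x")        # 1.95x, i.e. "nearly double"
```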

The Xeon 6900P series also includes four other processors with less computing capacity. They have between 72 and 120 cores, which run at higher base frequencies than the flagship 6980P’s cores.

The five chips in the series share a common design. The cores in a 6900P processor are implemented on three different pieces of silicon, or chiplets, that also contain the chip’s cache and certain related components. The cores can use the cache as a shared storage environment or split it up and keep their data in separate memory pools.

Intel produces the chiplets with its latest Intel 3 manufacturing process, which is the second from the company to use extreme ultraviolet lithography, or EUV, technology. The process provides 18% better performance per watt than Intel’s first-generation EUV implementation.

The three chiplets that contain a Xeon 6900P processor’s cores and cache are integrated with two other semiconductor modules made using the earlier Intel 7 node. Those two modules help speed tasks such as compressing and encrypting data. They also contain the I/O, or input and output, circuits that allow the chip to connect to the other components of the server in which it’s installed.

Another selling point of the Xeon 6900P series is that it allows servers to be equipped with MRDIMM memory. This is a faster version of DDR5, a memory technology widely used in data centers. MRDIMM promises to provide up to 39% more bandwidth than earlier technologies.

The technology also eases chip manufacturing in certain respects. It comes in a so-called tall form factor, or TFF, configuration that doubles the maximum amount of memory a processor can accommodate. Moreover, it does so without requiring the use of complex chip packaging components that increase manufacturing costs.

Alongside the debut of the Xeon 6900P series, Intel today officially launched the Gaudi 3 machine learning accelerator it introduced in April. The chip is positioned as an alternative to Nvidia Corp.’s market-leading graphics processing units. Intel says the Gaudi 3 can perform inference 30% faster than Nvidia’s previous-generation H200 GPU.

Under the hood, Intel’s new AI chip comprises two sets of computing modules. It features eight so-called MME modules optimized to run relatively simple machine learning tasks. There are also 64 TPC units, which are designed to power advanced AI workloads such as large language models.

Intel has published a reference architecture for an AI appliance that can hold up to 256 Gaudi 3 chips. According to the company, customers with advanced requirements may link multiple such appliances into a single cluster. AI clusters assembled in this manner can be equipped with up to 8,000 Gaudi 3 chips.

Image: Intel
