UPDATED 13:00 EST / JANUARY 10 2023

INFRA

Nvidia debuts new DGX H100 systems powered by Intel’s 4th Gen Intel Xeon Scalable chips

Nvidia Corp. today announced a refreshed lineup of Nvidia Hopper accelerated computing systems powered by its own H100 Tensor Core graphics processing units, as well as by the 4th Gen Intel Xeon Scalable processors that were launched by Intel Corp. today.

In addition, dozens of Nvidia’s partners have announced their own server systems based on the new hardware combination, which the company says deliver up to 25 times more efficiency than previous-generation machines.

Nvidia explained that Intel’s new central processing units will be combined with its GPUs in a new generation of Nvidia DGX H100 systems. Intel’s 4th Gen Intel Xeon Scalable processors, the Intel Xeon CPU Max Series and the Intel Data Center GPU Max Series were all announced today. Intel says they deliver a significant leap in data center performance and efficiency, with enhanced security and new capabilities for artificial intelligence, the cloud, the network and edge, and the world’s most powerful supercomputers.

The new CPUs offer workload-first acceleration and highly optimized software tuned for specific workloads, enabling users to extract the right performance at the right power level and so optimize total cost of ownership. In addition, the 4th Gen Xeon processors are said to deliver a range of features for managing power and performance, making optimal use of CPU resources to help customers achieve their sustainability goals.

One of the key new capabilities of Intel’s 4th Gen Intel Xeon Scalable processors is their support for PCIe Gen 5, which doubles the data transfer rate between CPU and GPU. The increased number of PCIe lanes allows for a greater density of GPUs and high-speed networking within each server. PCIe Gen 5 also improves the performance of data-intensive workloads such as AI, while boosting network speeds to up to 400 gigabits per second per connection, meaning faster data transfer between servers and storage arrays.
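The doubling claim follows directly from the published PCIe signaling rates: Gen 5 runs at 32 GT/s per lane versus Gen 4’s 16 GT/s, with both using 128b/130b line coding. A quick sketch of the arithmetic (spec figures, not vendor benchmarks):

```python
# Rough per-direction PCIe bandwidth estimate from public spec figures.
# PCIe 4.0: 16 GT/s per lane; PCIe 5.0: 32 GT/s per lane.
# Both generations use 128b/130b encoding.

def pcie_bandwidth_gbps(gt_per_s: float, lanes: int = 16) -> float:
    """Approximate usable one-direction bandwidth in GB/s for a PCIe link."""
    encoding_efficiency = 128 / 130  # 128b/130b line coding overhead
    bits_per_s = gt_per_s * 1e9 * encoding_efficiency * lanes
    return bits_per_s / 8 / 1e9  # bits/s -> GB/s

gen4_x16 = pcie_bandwidth_gbps(16.0)  # ~31.5 GB/s
gen5_x16 = pcie_bandwidth_gbps(32.0)  # ~63.0 GB/s
print(f"PCIe 4.0 x16: {gen4_x16:.1f} GB/s, PCIe 5.0 x16: {gen5_x16:.1f} GB/s")
print(f"Gen 5 / Gen 4: {gen5_x16 / gen4_x16:.1f}x")
```

Since the encoding is unchanged between the two generations, the usable bandwidth scales exactly with the raw signaling rate, hence the clean 2x.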

Intel’s CPUs will be combined with eight Nvidia H100 GPUs in the new DGX systems, Nvidia said. The Nvidia H100 GPU is the most powerful chip the company has ever made, containing more than 80 billion transistors, making it a natural companion for Intel’s new processors. It boasts features suited to high-performance computing workloads, including a built-in Transformer Engine and a highly scalable NVLink interconnect, which enable it to power large artificial intelligence models, recommendation systems and more.

“Modern AI workloads require a mix of computing platforms with both CPUs and GPUs,” said Holger Mueller of Constellation Research Inc. “So Nvidia clearly wants to partner with Intel and use its latest Xeon platform, its most powerful yet. AI also depends on the speed at which data is processed, so it makes sense the new DGX appliance is using PCIe Gen5. Now it’s all about the first customers and their use cases.”

Patrick Moorhead of Moor Insights & Strategy said he was impressed with Nvidia’s newest DGX systems, but he pointed out that they’re not the first to support PCIe 5, as Advanced Micro Devices Inc.’s latest processors also come with that feature. “I don’t think PCIe 5 is the deciding factor,” he added. “I think it will likely come down to lower pricing, as I am hearing that Intel is providing deep discounts these days.”

The new Nvidia DGX H100 systems will be joined by more than 60 new servers featuring a combination of Nvidia’s GPUs and Intel’s CPUs, from companies including ASUSTek Computer Inc., Atos Inc., Cisco Systems Inc., Dell Technologies Inc., Fujitsu Ltd., GIGA-BYTE Technology Co. Ltd., Hewlett Packard Enterprise Co., Lenovo Group Ltd., Quanta Computer Inc. and Super Micro Computer Inc.

These forthcoming systems from Nvidia and others will leverage the latest GPU and CPU hardware to run workloads with 25 times the efficiency afforded by traditional, CPU-only servers, Nvidia said. They offer an “incredible performance per watt” that results in far less power consumption, it claims. Further, compared with the previous-generation Nvidia DGX systems, the latest hardware boosts the efficiency of AI training and inference workloads by 3.5 times, resulting in roughly three times lower cost of ownership.

The software powering Nvidia’s new systems is a draw as well. The new DGX H100 systems all come with a free license for Nvidia AI Enterprise, a cloud-native suite of AI development tools and deployment software that provides users with a complete platform for their AI initiatives, Nvidia said.

Customers can alternatively buy multiple DGX H100 systems in the form of the Nvidia DGX SuperPod platform, essentially a small supercomputing platform that provides up to one exaflop of AI performance, Nvidia said.
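The one-exaflop figure checks out against public per-GPU numbers. Assuming a SuperPod scalable unit of 32 DGX H100 nodes with eight GPUs each, and the roughly 4 petaflops of sparse FP8 compute quoted for the H100 SXM, the arithmetic lands close to one exaflop:

```python
# Back-of-envelope check of the SuperPod exaflop claim, using public
# datasheet figures. The 32-node configuration is an assumed scalable unit.
nodes = 32
gpus_per_node = 8
fp8_pflops_per_gpu = 3.958  # H100 SXM, FP8 Tensor Core with sparsity

total_exaflops = nodes * gpus_per_node * fp8_pflops_per_gpu / 1000
print(f"~{total_exaflops:.2f} exaflops of FP8 AI compute")
```

Note that the headline number relies on the sparse FP8 rate; dense or higher-precision throughput would be correspondingly lower.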

Photo: Nvidia
