UPDATED 18:00 EST / MARCH 18 2024

Dell expands infrastructure portfolio with new Nvidia-powered AI platforms

Dell Technologies Inc. is rolling out a set of infrastructure offerings, including new servers, that promise to help enterprises more efficiently train and run artificial intelligence models.

The company announced the products against the backdrop of Nvidia Corp.’s closely watched GTC developer conference. According to Dell, its new servers are compatible with the B200 Tensor Core graphics card that Nvidia Chief Executive Jensen Huang unveiled at the event. Dell is rolling out the machines alongside a data lake platform, upgrades to its storage portfolio and a range of other product updates.

“Organizations are rushing to experiment with AI but there are many challenges to achieving ROI. Data sovereignty issues, legal and compliance concerns and data quality are all top of mind,” said theCUBE Research co-founder and chief analyst Dave Vellante. “Our research shows that companies are turning to industry leaders like Dell and NVIDIA to help provide AI expertise and services to lower risk and get to ROI sooner.”

Inference-optimized compute

Dell’s new PowerEdge XE9680 servers will be available with Nvidia’s latest B200 Tensor Core graphics processing unit. The chip is expected to perform inference, the task of running trained AI models in production, up to 15 times faster than previous-generation silicon. It’s also touted as being more cost-efficient.
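
For readers unfamiliar with the term, the short Python sketch below illustrates what inference means in practice: a trained model is switched to evaluation mode and run on new inputs without gradient tracking. The framework choice, model and input shapes are illustrative assumptions with no connection to Dell’s servers or Nvidia’s chips.

```python
import torch
import torch.nn as nn

# Placeholder model standing in for any trained network; nothing here is
# specific to Dell hardware or Blackwell GPUs.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()  # evaluation mode: training-only behaviors such as dropout are disabled

batch = torch.randn(8, 16)  # a batch of eight hypothetical feature vectors
with torch.no_grad():       # inference needs only the forward pass, so skip gradient tracking
    logits = model(batch)
    predictions = logits.argmax(dim=1)

print(predictions)
```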

The B200 is based on a new Nvidia GPU architecture known as Blackwell. According to Dell, its PowerEdge XE9680 servers also support other Blackwell-based chips as well as the H200 Tensor Core. Introduced in November, the latter graphics card is an enhanced version of the H100 specifically optimized to run large language models.

On the networking side, the new servers work with Nvidia’s Quantum-2 and Spectrum-X switch lineups. They’re built for networks that use the InfiniBand and Ethernet data transfer protocols, respectively. Both switch families include a range of software features designed to cut latency and reduce the impact of congested connections on data transfer speeds.

Data storage and management 

PowerScale is a line of network-attached storage appliances from Dell optimized for, among other use cases, running AI models. The systems store data on flash drives and use a scale-out architecture, which makes it relatively simple to add more capacity when needed. A storage operating system called OneFS manages the capacity scaling process along with related maintenance tasks.

According to Dell, the PowerScale series is now the first line of Ethernet storage systems validated for use with SuperPODs based on the DGX H100. The DGX H100 is a data center appliance from Nvidia that features eight of the chipmaker’s H100 GPUs. A SuperPOD, in turn, is a cluster of DGX appliances.

Dell says the product updates it detailed at GTC today will make it easier not only to store data but also to manage it. In conjunction with the introduction of its new servers, the company announced that its Dell Data Lakehouse platform is now globally available. The offering allows organizations to centrally process information from different sources.

Enterprises have historically used two main types of data management platforms. Data warehouses are highly reliable and well-suited for processing structured records, while data lakes can hold large amounts of unstructured information in a cost-efficient manner. Data lakehouses such as Dell’s newly launched platform combine the two technologies’ feature sets in a single offering.
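
As a rough illustration of the lakehouse idea, the sketch below uses the open-source DuckDB engine as a stand-in to query a structured Parquet table and semi-structured JSON event logs through a single SQL interface. The engine choice, file names and column names are hypothetical assumptions, not the Dell Data Lakehouse’s actual interface.

```python
import duckdb  # pandas is also required for the .df() call below

# Hypothetical local files standing in for lakehouse-managed storage:
# a Parquet table of structured orders and raw JSON clickstream events.
con = duckdb.connect()

result = con.sql("""
    SELECT o.customer_id,
           COUNT(DISTINCT o.order_id) AS orders,
           COUNT(e.event_type)        AS click_events
    FROM read_parquet('orders.parquet')     AS o
    LEFT JOIN read_json_auto('events.json') AS e
           ON o.customer_id = e.customer_id
    GROUP BY o.customer_id
""")

# One engine querying warehouse-style and lake-style data together.
print(result.df())
```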

Integrated platforms

Dell also introduced several other additions to its product portfolio at GTC. Each of the new offerings combines multiple components of the company’s hardware portfolio with software, professional services and Nvidia silicon. 

The first offering, the Dell AI Factory, is described as an “end-to-end AI enterprise solution” for training, tuning and running AI models. It combines Nvidia chips with products from Dell’s compute, storage, client device and software portfolios, as well as professional services. Those services promise to ease tasks such as preparing AI datasets.

Dell Generative AI Solutions with NVIDIA – Model Training is another newly introduced infrastructure platform. According to Dell, it can help enterprises more easily train custom AI models optimized for domain-specific tasks. A third new offering, Dell Generative AI Solutions with NVIDIA – Retrieval-Augmented Generation, is geared toward companies that are building AI models with RAG features.
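
For context on what retrieval-augmented generation involves, the minimal Python sketch below retrieves the documents most similar to a query and folds them into a prompt before a model generates an answer. The toy corpus and character-frequency “embedding” are stand-ins for illustration only and do not reflect the Dell or Nvidia offerings.

```python
import numpy as np

# Toy document store; a real RAG pipeline would use a vector database
# and a production embedding model.
documents = [
    "PowerScale is Dell's scale-out network-attached storage line.",
    "Blackwell is Nvidia's latest GPU architecture.",
    "OneFS manages capacity scaling for PowerScale systems.",
]

def embed(text: str) -> np.ndarray:
    # Stand-in embedding: normalized character-frequency vector.
    vec = np.zeros(128)
    for ch in text.lower():
        vec[ord(ch) % 128] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-9)

doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = doc_vectors @ embed(query)  # cosine similarity, since vectors are unit length
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# The augmented prompt would then be passed to a language model for generation.
print(build_prompt("What does OneFS do?"))
```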

“Together, NVIDIA and Dell are helping enterprises create AI factories to turn their proprietary data into powerful insights,” said Nvidia CEO Jensen Huang.

AI accelerators can generate significantly more heat than a typical central processing unit. Dell detailed today that it’s working with Nvidia to develop a rack-scale, liquid-cooled AI platform based on the chipmaker’s Grace Blackwell Superchip. Water and other liquids used for data center cooling conduct heat better than air, which makes them more effective at regulating server temperatures.
