UPDATED 13:20 EDT / JUNE 20 2024

HPE partners with Nvidia to support hybrid AI architecture for enterprises

Without a doubt, the winner of the generative artificial intelligence boom thus far has been Nvidia Corp. Now the company is joining forces with Hewlett Packard Enterprise Co. to give enterprises the complete AI stack. HPE and Nvidia have paired the former’s expertise in on-premises solutions with the latter’s cutting-edge hardware.

Since enterprises often keep their private data on-prem, industry-specific models delivered through Nvidia inference microservices, or NIMs, are the future, according to Neil MacDonald (pictured, right), executive vice president and general manager of the Compute Business Unit at HPE.

Nvidia’s Bob Pette and HPE’s Neil MacDonald talk with theCUBE’s Dave Vellante and John Furrier about how the partnership between HPE and Nvidia benefits enterprises.

“For enterprises to be successful with generative AI, they have to pursue either fine-tuning a model based on their own data, which they’re going to do on-prem, or leveraging retrieval augmented generation,” he said. “We’ve integrated into private cloud AI our data fabric that enables you to pull that together from those disparate sources in order to underpin that work. But that tuning, or RAG, or the development of small specialized models in that company or industry, is really critical to get the level of impact that’s needed.”
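The retrieval augmented generation approach MacDonald describes, pulling relevant private documents from disparate sources and feeding them to a model alongside the question, can be illustrated with a minimal sketch. Everything below is a hypothetical toy example: the naive word-overlap scoring stands in for a real vector search, and none of it reflects HPE's data fabric or Nvidia's actual implementation.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Hypothetical illustration only -- not HPE's data fabric or an Nvidia NIM.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query
    (a stand-in for a real embedding-based vector search)."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble the retrieved context plus the question into one model prompt."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Q3 revenue grew 12 percent on AI server demand.",
    "The cafeteria menu changes on Mondays.",
    "On-prem GPU clusters host our fine-tuned models.",
]
prompt = build_prompt("How did AI server revenue change?", docs)
print(prompt)
```

In a production system, the assembled prompt would then be sent to a generative model, which is the point MacDonald makes: the retrieval step lets the model answer from private, on-prem data without that data ever leaving the enterprise.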

MacDonald and Bob Pette (left), VP and GM of enterprise platforms at Nvidia, spoke with theCUBE Research’s John Furrier and Dave Vellante at HPE Discover, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the future of hybrid AI architecture and details of the companies’ collaboration. (* Disclosure below.)

HPE and Nvidia offer turnkey AI solution for enterprises

HPE has announced Nvidia AI Computing by HPE, integrating Nvidia’s GPUs into HPE’s systems, including HPE Private Cloud AI. This partnership helps cement Nvidia as an on-prem provider. Nvidia has declared plans to follow its Hopper GPU architecture with Blackwell, powering the next generation of AI technologies, as well as more industry-specific NIMs.

“We have a slew of models covering a variety of enterprise use cases across multiple industries, which is what Jensen refers to as the next industrial revolution,” Pette said. “We’re actually able to take multiple data sources and predict, and that’s where that more and more intelligence is extracted … these NIMs, these containerized models can come from anywhere. People can download them from wherever they are, but we have a nightly NIM AI factory at Nvidia that we continue to build these models, tune, test, optimize.”

The future of AI will involve both on-prem and cloud solutions, according to MacDonald. Locally run models are preferable for companies interested in protecting intellectual property or private data, as well as meeting regulatory restrictions, which are stricter in Europe.

“We’ve long held that the world is going to be hybrid, and it’s been borne out,” he said. “What we’ve done is co-developed a turnkey integrated solution that integrates that compute stack, including the storage that’s needed and the ability to ingest data across a disparate enterprise into gen AI initiatives, leveraging our data fabric.”

The future of AI architecture will require more powerful systems to support higher-performance chips, but these systems will ultimately save companies money, according to Pette. Together, HPE and Nvidia are presenting a unified vision of AI computing that pulls together fragmented data sources while integrating different technologies at the hardware level.

“The architecture is not about the individual nodes anymore. It’s not even about the individual accelerators anymore,” MacDonald said. “It’s about the whole system architecture that brings together all of the capabilities in that compute stack, the data stack and the model stack that you need to be effective with generative AI. So, another shift here, both for our partnership and our collaboration, but also, for our enterprise customers, is thinking at that system level in a much, much more profound and cohesive way.”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE Research’s coverage of HPE Discover:

(* Disclosure: TheCUBE is a paid media partner for HPE Discover. Neither Hewlett Packard Enterprise Co. nor Intel Corp., the primary sponsors of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
