From blueprint to action: The AI Pod initiative by NetApp, Nvidia and Lenovo
Many have likened the rise of generative artificial intelligence in 2022 to the release of the iPhone. As the next frontier in enterprise computing, gen AI is changing how businesses operate and innovate.
Enterprises are moving from AI blueprinting to action, deploying models on-premises. To support those deployments, NetApp Inc. has partnered with Nvidia Corp. and Lenovo Group Ltd. on the AI Pod initiative.
“We’ve established [the] enterprise platform group at Nvidia specifically to address what we see as the biggest opportunity, and more than half … of the AI in the world we see to be done on-prem,” said Bob Pette (pictured, second from right), vice president and general manager of enterprise platforms at Nvidia. “From a government regulation standpoint, a sovereign AI standpoint, IP standpoint, you want the AI running where your data is — and that’s why something like the AI Pod is so significant.”
Pette; Sandeep Singh (second from left), senior vice president and general manager of enterprise storage at NetApp; and Kamran Amini (right), vice president and general manager of the Server, Storage and Software-Defined Infrastructure Business Units at Lenovo, spoke with theCUBE’s Rob Strechay at the NetApp Unveils Unified Data Storage Built for the AI Era event, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed how the AI Pod provides a robust platform for enterprises to deploy gen AI by addressing key data sovereignty, security and scalability challenges. (* Disclosure below.)
The AI Pod initiative in detail
The goal for Nvidia, Lenovo and NetApp is to demystify AI for enterprises. Many enterprises lack the expertise to develop AI solutions from scratch. The AI Pod integrates advanced Nvidia GPUs, Lenovo’s management stack and NetApp’s storage solutions, ensuring that enterprises can quickly and securely deploy AI applications tailored to their specific needs, according to Singh.
“It becomes critically important for [enterprises] to take the data and combine it with the pre-trained LLMs,” he said. “This is just that opportunity of providing an AI Pod that packages the best of breed across Nvidia, Lenovo and NetApp, and provides that in a pre-validated, pre-integrated form factor to customers so that they can effectively just do gen AI.”
The AI Pod is designed to be scalable, allowing enterprises to start small and expand as their needs grow. This flexibility is crucial because it enables organizations to manage costs while gradually integrating more AI capabilities, Amini added.
“The early adopters of AI were all the cloud service providers that drove massive scale of large language model systems,” he said. “[For] most of the enterprise customers that did not have the skills or the knowledge of what [to] do around AI, it’s about the partnership we could bring in to really simplify that journey.”
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE Research’s coverage of the NetApp Unveils Unified Data Storage Built for the AI Era event:
(* Disclosure: TheCUBE is a paid media partner for the NetApp Unveils Unified Data Storage Built for the AI Era event. Neither NetApp Inc., the sponsor of theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Photo: SiliconANGLE