UPDATED 16:30 EST / MAY 16 2022


Nvidia LaunchPad lowers AI’s barrier to entry through a hands-on approach

Even though artificial intelligence gives businesses a competitive advantage in the 21st century, many enterprises are still struggling with how to get started.

While organizations are ready to begin their AI journey, they often worry that they lack the deep pockets or highly trained staff needed to emulate the global brands already built on AI. Nvidia Corp. bridges this gap through LaunchPad, which gives businesses the foundational skills to develop an AI practice and get it into production, according to Justin Boitano (pictured, right), vice president of EGX at Nvidia.

“We built … LaunchPad, so they get instant access to the AI servers with OpenShift, MLOps tooling, and example applications,” Boitano stated. “It gives you access to the entire stack instantly with hands-on curated labs for both IT and data scientists. So they can walk out with the blueprints they need to set this up and start on a successful AI journey.”

Boitano and Tushar Katarki (pictured, left), director of product management, OpenShift, at Red Hat Inc., spoke with industry analysts Dave Vellante and Paul Gillin at Red Hat Summit, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed how Nvidia’s LaunchPad program guides enterprises through their AI journey and shared insights on the Nvidia-Red Hat partnership. (* Disclosure below.)

Overhauling data centers using AI

With GPUs serving as the engines of AI development, Boitano believes accelerated computing will enhance data centers’ performance, security and throughput.

“Acceleration is really the thing that’s going to advance all data centers,” he said. “So I think in the future, every server will have GPUs and DPUs.”

Taking a page from continuous integration/continuous delivery in application development, LaunchPad applies the same continuous-improvement loop to AI models in production. “We’re talking about CI/CD of the AI model itself, which is a big business transformation,” Boitano added.
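As a generic illustration of that idea (not Nvidia’s or Red Hat’s tooling), a model pipeline can gate promotion on held-out metrics, redeploying only when a newly trained candidate beats the model currently serving in production. The metric name, threshold and file paths below are hypothetical.

```python
# Hypothetical sketch of a CI/CD gate for an AI model: promote the newly
# trained candidate only if it outperforms the production model on held-out
# data. The metric, threshold and file paths are placeholders for illustration.
import json


def should_promote(candidate_metrics_path: str, production_metrics_path: str,
                   metric: str = "accuracy", min_gain: float = 0.005) -> bool:
    with open(candidate_metrics_path) as f:
        candidate = json.load(f)
    with open(production_metrics_path) as f:
        production = json.load(f)
    # Require the candidate to beat production by at least min_gain.
    return candidate[metric] >= production[metric] + min_gain


if __name__ == "__main__":
    if should_promote("candidate_metrics.json", "production_metrics.json"):
        print("Candidate beats production: promote to the serving environment.")
    else:
        print("Candidate does not clear the gate: keep the current model.")
```

In practice, a gate like this would sit at the end of an automated retraining job, in the same place a test suite sits in an application CI/CD pipeline.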

The Red Hat-Nvidia partnership connects the dots through innovations such as the GPU Operator, an open-source Kubernetes operator that automates orchestration of Nvidia’s GPU software stack.

“With the operator model, Kubernetes will say, ‘There’s a GPU in that node; let me run the operator,’ and it installs our entire run time,” Boitano explained. “Our run time got a MIG configuration utility, the driver, telemetry and metering of the actual GPU and the workload. So instead of somebody trying to chase down all the little pieces and parts, it just happens automatically in seconds.”
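As a small sketch of what that automation looks like from the cluster’s side: once the operator has installed the driver and device plugin, each GPU node advertises an “nvidia.com/gpu” resource to the Kubernetes scheduler. The snippet below, which assumes the official Kubernetes Python client and a reachable cluster, simply lists those nodes; it is an illustration, not part of Nvidia’s GPU Operator itself.

```python
# Minimal sketch: after the GPU Operator has set up a node, the node advertises
# "nvidia.com/gpu" as an allocatable resource. This lists those nodes using the
# official Kubernetes Python client (pip install kubernetes) and assumes a
# cluster reachable via ~/.kube/config.
from kubernetes import client, config


def gpu_nodes():
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        gpus = (node.status.allocatable or {}).get("nvidia.com/gpu", "0")
        if gpus != "0":
            print(f"{node.metadata.name}: {gpus} GPU(s) exposed to the scheduler")


if __name__ == "__main__":
    gpu_nodes()
```

A pod that then requests nvidia.com/gpu in its resource limits is scheduled onto one of those nodes, with no manual driver installation on the host.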

The partnership allows Nvidia to scale through OpenShift, according to Katarki.

“OpenShift and our approach to open hybrid cloud really form a successful platform to base your entire AI journey on with partners such as Nvidia, whom we are working very closely with,” he stated.

Collaborating with Red Hat also lets Nvidia extend support beyond GPUs to its data processing units, or DPUs.

“We’ve done a bunch of work with Red Hat, and we’ve got a beta of OpenShift 4.10 that now supports DPUs,” Boitano stated. “I’ll call it the control plane — like software-defined networking offload in the data center. So it takes all the software-defined networking off of CPUs.” 

Since the edge lacks the guards, gates and guns that protect a data center, the physical security of the hardware becomes a concern, one that Nvidia’s new Hopper H100 processor addresses, according to Boitano.

“There are really stringent requirements on protecting the intellectual property of the AI model itself … you spend millions of dollars to build it and then push it out to an edge data center,” he pointed out. “That’s the area where we just announced a new processor that we call Hopper H100. It supports confidential computing so that you can basically ensure that the model is always encrypted in system memory across the PCI bus to the GPU, and it’s run in a confidential way on the GPU.”

Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the Red Hat Summit event:

(* Disclosure: TheCUBE is a paid media partner for Red Hat Summit. Neither Red Hat Inc., the sponsor for theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
