UPDATED 20:26 EDT / JULY 25 2022


Latest Nvidia Enterprise AI release extends support for data science pipelines and low-code training

Nvidia Corp. today rolled out a major update to its AI Enterprise software suite, with version 2.1 adding support for key tools and frameworks that companies can use to run artificial intelligence and machine learning workloads.

Launched in August last year, Nvidia AI Enterprise is an end-to-end AI software suite that bundles various AI and machine learning tools that have been optimized to run on Nvidia’s graphics processing units and other hardware.

Among the highlights of today’s release is expanded support for advanced data science use cases via the latest version of Nvidia Rapids, a suite of open-source software libraries and application programming interfaces for executing data science pipelines entirely on GPUs. Nvidia said Rapids can reduce AI model training times from days to just minutes. The latest version of the suite broadens its support for data workflows with new models, techniques and data processing capabilities.
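A key reason Rapids can move whole pipelines to the GPU is that its cuDF DataFrame library mirrors much of the pandas API. The sketch below is written against pandas so it runs anywhere; on a machine with RAPIDS installed, swapping the first import for `import cudf as pd` runs the same steps on the GPU. This is an illustrative example, not code from the article; the data and column names are invented.

```python
# A toy feature-engineering pipeline in the pandas API.
# With RAPIDS installed, replacing the import below with
# `import cudf as pd` executes the same pipeline on the GPU.
import pandas as pd  # swap for: import cudf as pd

df = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3],
    "amount":  [10.0, 250.0, 35.5, 4.0, 99.9],
})

# Per-user aggregates -- a typical preprocessing step
# before handing features to a model for training.
features = (
    df.groupby("user_id")
      .agg(total=("amount", "sum"), mean=("amount", "mean"))
      .reset_index()
)
print(features)
```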

Nvidia AI Enterprise 2.1 also supports the most recent version of the Nvidia TAO Toolkit, a low-code and no-code framework for fine-tuning pre-trained AI and machine learning models with custom data to produce more accurate computer vision, speech and language understanding models. The TAO Toolkit 22.05 release adds new functionality such as REST API integration, pre-trained weights import, TensorBoard integration and new pre-trained models.

To make AI more accessible in hybrid and multicloud environments, Nvidia said the latest version of AI Enterprise adds support for Red Hat OpenShift running in public clouds, adding to its existing support for OpenShift on bare metal and VMware vSphere-based deployments. AI Enterprise 2.1 further gains support for the new Microsoft Azure NVads A10 v5 series virtual machines.

These are the first Nvidia virtual GPU instances offered by any public cloud, and they enable more affordable “fractional GPU sharing,” the company explained. Customers can choose flexible GPU sizes ranging from one-sixth of an A10 GPU all the way up to two full A10 GPUs.
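In practice, fractional sizing is selected simply by picking a VM size in the NVads A10 v5 family. The Azure CLI sketch below shows the idea; the resource group and VM names are invented, and the size string should be verified against Azure’s current documentation before use.

```shell
# Hypothetical example: provision the smallest NVads A10 v5 size,
# which carries a one-sixth slice of an A10 GPU.
# (Resource group and VM name are invented for illustration.)
az vm create \
  --resource-group my-ai-rg \
  --name fractional-gpu-vm \
  --image Ubuntu2204 \
  --size Standard_NV6ads_A10_v5
```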

A final update pertains to Domino Data Lab Inc., whose enterprise MLOps platform has now been certified for AI Enterprise. Nvidia said the certification helps mitigate deployment risks and ensures reliable, high-performance MLOps on AI Enterprise. By using the two platforms together, enterprises can benefit from workload orchestration, self-serve infrastructure and increased collaboration, together with cost-effective scaling on virtualized and mainstream accelerated servers, Nvidia said.

For enterprises interested in taking the latest version of AI Enterprise for a spin, Nvidia said it’s offering some new LaunchPad labs to try. LaunchPad is a service that provides immediate, short-term access to AI Enterprise in a private accelerated computing environment, with hands-on labs that customers can use to experiment with the platform. The new labs include multinode training for image classification on VMware vSphere with Tanzu, the opportunity to deploy a fraud detection XGBoost model using Nvidia Triton, and more.
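Serving an XGBoost model on Triton typically goes through Triton’s FIL (Forest Inference Library) backend, which loads tree-based models from a standard model repository. The layout and `config.pbtxt` below are a minimal sketch of that setup, not material from the lab itself: the model name, feature count and file names are invented, and the full option list should be checked against the FIL backend documentation.

```
# Illustrative Triton model repository layout:
#
#   model_repository/
#   └── fraud_xgb/
#       ├── 1/
#       │   └── xgboost.json    # model saved via booster.save_model()
#       └── config.pbtxt
#
# config.pbtxt (sketch; names and dims are invented):
name: "fraud_xgb"
backend: "fil"
max_batch_size: 8192
input [
  {
    name: "input__0"
    data_type: TYPE_FP32
    dims: [ 30 ]          # number of input features, illustrative
  }
]
output [
  {
    name: "output__0"
    data_type: TYPE_FP32
    dims: [ 1 ]
  }
]
parameters [
  { key: "model_type", value: { string_value: "xgboost_json" } },
  { key: "output_class", value: { string_value: "true" } }
]
```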

Image: Nvidia
