UPDATED 06:00 EDT / OCTOBER 10 2018


Nvidia rides the RAPIDS to accelerate graphics chips for AI training

Nvidia Corp. is hoping to cement its lead in artificial intelligence with the launch early today of a new graphics processing unit acceleration platform that should enable greater amounts of data to be processed for deep learning and machine learning.

The idea behind RAPIDS is to provide enterprises with a performance boost in order to help them address “highly complex business challenges” that rely on processing vast amounts of data. Examples include predicting credit card fraud, forecasting retail inventory and understanding consumer buying behavior, Nvidia officials said.

GPUs have become essential for AI workloads such as deep learning and machine learning, since they provide far greater processing power than regular central processing units. But Jeffrey Tseng, Nvidia’s head of product for AI infrastructure, said ahead of the company’s GPU Technology Conference in Munich that companies still need more power to handle the most demanding workloads.

“Companies today are becoming more and more data-driven,” Tseng said. “Data analytics and machine learning is now the leading high-performance computing segment. But we’re hitting a brick wall in our ability to use data.”

The main component of the open-source RAPIDS platform is a suite of CUDA-accelerated libraries for GPU-based analytics, machine learning and data visualization. Tseng said Nvidia is starting with five of the most popular machine learning libraries and accelerating them for GPUs. Doing so optimizes AI training with more iterations for better model accuracy, the company said.
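
A key selling point of the RAPIDS libraries is that they mirror the interfaces of popular Python data tools, so existing pipelines can move to the GPU with minimal code changes. The sketch below illustrates the idea with cuDF, the RAPIDS DataFrame library, which largely follows the pandas API; the fraud-style data is invented for illustration, and the snippet falls back to pandas on machines without a CUDA GPU.

```python
# Illustrative sketch: cuDF (RAPIDS) exposes a pandas-like API, so the
# same DataFrame code can run on GPU or CPU depending on what's installed.
try:
    import cudf as xdf       # GPU-accelerated DataFrame library (RAPIDS)
    backend = "GPU (cuDF)"
except ImportError:
    import pandas as xdf     # CPU fallback with the same API surface
    backend = "CPU (pandas)"

# Toy transaction data (invented for illustration).
df = xdf.DataFrame({
    "amount":   [120.0, 3100.0, 45.5, 980.0],
    "is_fraud": [0, 1, 0, 0],
})

# Filter high-value transactions -- identical code on either backend.
high_value = df[df["amount"] > 500.0]
print(backend, "high-value rows:", len(high_value))
```

The point of the design is that the swap happens at the import, not throughout the pipeline, which is what lets RAPIDS accelerate existing analytics code rather than requiring a rewrite.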

The libraries provide data scientists with the tools they need to run their entire data pipelines on GPUs, Nvidia said. The platform is boosted by the XGBoost machine learning algorithm for training models on Nvidia’s DGX-2 system, which combines 16 fully interconnected GPUs to deliver up to 2 petaflops of processing power. As a result, data scientists can cut deep learning and machine learning training time by as much as 50 times compared with training on CPU-based systems.
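
For readers curious what GPU-accelerated XGBoost training looks like in practice, the fragment below is a minimal sketch of the kind of parameter set involved. At the time of writing, XGBoost selected its CUDA histogram algorithm via the `tree_method` parameter; the specific values shown here are illustrative, not Nvidia's benchmark configuration.

```python
# Illustrative sketch: an XGBoost-style parameter set for GPU training.
# "gpu_hist" selects XGBoost's CUDA histogram tree builder; on a
# CPU-only machine you would use "hist" instead. Values are examples.
params = {
    "tree_method": "gpu_hist",       # GPU-accelerated tree construction
    "max_depth": 8,
    "learning_rate": 0.1,
    "objective": "binary:logistic",  # e.g. fraud / not-fraud prediction
}

# Actual training (requires xgboost and a CUDA GPU) would look like:
#   booster = xgboost.train(params, dtrain, num_boost_round=100)
print("tree method:", params["tree_method"])
```

Because tree construction in gradient boosting is dominated by histogram building over large feature matrices, it parallelizes well across GPU cores, which is where the claimed training speedups come from.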

“With this new software platform and the hardware we’ve released, we’ve seen a large reduction in training time and lower infrastructure costs,” Tseng said.

Nvidia’s claims are given credence by a slew of big tech companies that have thrown their weight behind the RAPIDS platform. They include database giant Oracle Corp., which is supporting RAPIDS on its Oracle Cloud Infrastructure via Nvidia’s cloud. Oracle is also working to support the platform on its Oracle Data Science Cloud, it said.

Other companies include IBM Corp., which is announcing support for RAPIDS across on-premises, public, hybrid and multicloud environments via the IBM Cloud, PowerAI on IBM POWER9 and IBM Watson Studio and Watson Machine Learning services. Meanwhile, the big data company Databricks Inc. said it will be using RAPIDS to accelerate Apache Spark workloads.

“We have multiple ongoing projects to integrate Spark better with native accelerators, including Apache Arrow support and GPU scheduling with Project Hydrogen,” said Matei Zaharia, cofounder and chief technologist at Databricks. “We believe that RAPIDS is an exciting new opportunity to scale our customers’ data science and AI workloads.”

Hewlett Packard Enterprise Co., Cisco Systems Inc., Dell Technologies Inc. and Lenovo Group Ltd. will also support RAPIDS on their own systems, Nvidia said.

Analyst Patrick Moorhead of Moor Insights & Strategy said RAPIDS was all about Nvidia trying to make its GPUs more accessible to enterprises interested in running AI workloads.

“Nvidia has had lots of success getting hyperscalers like AWS and Azure to integrate deep learning and machine learning into their workflows,” Moorhead said. “RAPIDS should provide enterprises with better access to these same capabilities by giving software vendors like IBM and HPE easier access to Nvidia acceleration.”

If it can do this, Nvidia has a great shot at dominating the market for AI workloads for years to come, said Holger Mueller, principal analyst and vice president at Constellation Research Inc. He said it’s all about creating the right combination of hardware and software to facilitate these workloads, and Nvidia’s RAPIDS platform is one of the best efforts at doing so yet.

“For Nvidia, coming from the hardware side, it is clear that it has to win the hearts and minds of developers and data scientists and create a widely adopted software platform,” Mueller said. “RAPIDS is a great attempt at this strategy, but we have to wait and see if developers, data scientists and most importantly CxOs, who make the decisions over next-generation app platforms, will take up the new offering.”

With reporting from Robert Hof

Photo: PublicDomainPictures/Pixabay
