UPDATED 00:00 EDT / SEPTEMBER 22 2017


Google adds new Nvidia cloud GPUs to accelerate machine learning

Google Inc. is hoping to get more users to run their machine learning and artificial intelligence workloads in its cloud with the launch of new Nvidia Corp. graphics processing units in multiple regions.

Specialized cloud GPUs such as those built by Nvidia are designed to accelerate workloads such as machine learning training and inference, geophysical data processing, simulations, seismic analysis and molecular modeling.

Chris Kleban and Ari Liberman, product managers for Google Compute Engine, said in a blog post Thursday that the company was announcing the availability of Nvidia’s P100 GPUs in beta. They also said Nvidia’s K80 GPU series is now generally available, and added that the company is providing “sustained use discounts” on the GPUs to encourage customers to use them.

With regard to Nvidia’s Tesla P100 GPUs, Google described these as “state-of-the-art” processors that allow customers to increase throughput with fewer instances while simultaneously saving on costs.

Google also noted some of the advantages that cloud GPUs like these have over traditional on-premises GPU deployments. The first is increased flexibility: everything from the central processor and memory to disk size and GPU configuration can be customized to suit customers’ needs.

The company also cited the faster performance of cloud GPUs, along with lower costs thanks to the sustained use discounts detailed in the chart below. Finally, Google touted the benefits of “cloud integration,” saying that cloud GPUs are available at all levels of the cloud stack.
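To illustrate how sustained use discounts work, the sketch below models Compute Engine's tiered billing, where each successive quarter of the month a resource is used is billed at a deeper discount. The tier rates (100%, 80%, 60%, 40%) mirror Google's published vCPU model and are used here for illustration; the exact GPU rates are those in Google's pricing chart.

```python
def sustained_use_multiplier(usage_fraction):
    """Effective price multiplier under a tiered sustained-use model.

    usage_fraction: fraction of the month the GPU was attached (0.0-1.0).
    Tier rates are illustrative, modeled on Compute Engine's vCPU
    sustained-use tiers, not the exact GPU chart figures.
    """
    tiers = [1.0, 0.8, 0.6, 0.4]  # price multiplier for each 25% usage tier
    billed = 0.0
    remaining = usage_fraction
    for rate in tiers:
        portion = min(remaining, 0.25)  # usage falling into this tier
        billed += portion * rate
        remaining -= portion
        if remaining <= 0:
            break
    return billed / usage_fraction if usage_fraction else 0.0

# Under these illustrative tiers, a GPU attached for the full month
# is billed at 70% of list price -- a 30% effective discount:
print(round(sustained_use_multiplier(1.0), 2))  # 0.7
```

Under this model the discount accrues automatically: half a month of usage yields a 10% effective discount, and a full month yields 30%, with no upfront commitment required.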

[Chart: Google’s sustained use discount pricing for cloud GPUs]

“For infrastructure, Compute Engine and Google Container Engine allow you to run your GPU workloads with VMs or containers,” the product managers wrote. “For machine learning, Cloud Machine Learning can be optionally configured to utilize GPUs in order to reduce the time it takes to train your models at scale with TensorFlow.”
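As a rough sketch of the infrastructure path, a GPU-equipped VM can be provisioned from the `gcloud` command line. The instance name, zone, machine type and image below are illustrative placeholders; the `--accelerator` flag attaches the GPU, and GPU instances must set a `TERMINATE` maintenance policy because they cannot live-migrate.

```shell
# Illustrative only: create a VM with one Tesla P100 attached (beta at the time).
gcloud beta compute instances create gpu-worker \
    --zone us-east1-c \
    --machine-type n1-standard-8 \
    --accelerator type=nvidia-tesla-p100,count=1 \
    --maintenance-policy TERMINATE \
    --image-family ubuntu-1604-lts \
    --image-project ubuntu-os-cloud
```

Once the instance boots, the Nvidia driver still needs to be installed before frameworks such as TensorFlow can see the GPU.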

Google added that the new GPUs are available in four regions to begin with, namely its U.S. East and West regions, and its Europe West and Asia East regions.

Google said it’s seeing customers use the new GPUs for a range of compute-intensive tasks including genomics, computational finance and training machine learning models. It said the choice of two different chips gives customers more flexibility, as they can pick the one best suited to their workloads while balancing performance and price.

Image: Google
