UPDATED 00:00 EDT / SEPTEMBER 22 2017

CLOUD

Google adds new Nvidia cloud GPUs to accelerate machine learning

Google Inc. is hoping to get more users to run their machine learning and artificial intelligence workloads in its cloud with the launch of new Nvidia Corp. graphics processing units in multiple regions.

Specialized cloud GPUs such as those built by Nvidia are designed to accelerate workloads such as machine learning training and inference, geophysical data processing, simulations, seismic analysis and molecular modeling.

Chris Kleban and Ari Liberman, product managers for Google Compute Engine, said in a blog post Thursday that Nvidia’s P100 GPUs are now available in beta, while Nvidia’s K80 GPU series has moved to general availability. They added that the company is also offering “sustained use discounts” on the GPUs to encourage customers to make use of them.

Google described Nvidia’s Tesla P100 GPUs as “state-of-the-art” processors that let customers increase throughput with fewer instances while saving on costs.

Google also noted some of the advantages that cloud GPUs such as these have over traditional GPU deployments. The first is increased flexibility: everything from the central processor and memory to the disk size and GPU configuration can be customized to suit customers’ needs, as the configuration sketch below illustrates.

Google also cited the faster performance of cloud GPUs, along with lower costs thanks to the sustained use discounts detailed in the chart below. Finally, the company touted the benefits of “cloud integration,” saying that cloud GPUs are available at all levels of the cloud stack.

[Chart: Google Cloud GPU pricing with sustained use discounts]
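To make that configuration flexibility concrete, here is a minimal sketch, assuming the google-api-python-client library and the Compute Engine v1 API, of launching a custom-shaped virtual machine with a single K80 attached. The project ID, zone, machine shape and boot image are illustrative placeholders rather than values from Google’s post.

```python
# Minimal sketch: create a custom-shaped Compute Engine VM with one K80 GPU.
# Project, zone, machine shape and image below are placeholder assumptions.
from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

project = 'my-project'   # placeholder project ID
zone = 'us-east1-c'      # a GPU-enabled zone

config = {
    'name': 'gpu-training-vm',
    # Custom machine type: 8 vCPUs and 32 GB of memory.
    'machineType': f'zones/{zone}/machineTypes/custom-8-32768',
    # Attach a single Tesla K80; type and count can be tuned per workload.
    'guestAccelerators': [{
        'acceleratorType': (
            f'projects/{project}/zones/{zone}/acceleratorTypes/nvidia-tesla-k80'),
        'acceleratorCount': 1,
    }],
    # GPU instances can't live-migrate, so they must terminate on host maintenance.
    'scheduling': {'onHostMaintenance': 'TERMINATE', 'automaticRestart': True},
    'disks': [{
        'boot': True,
        'autoDelete': True,
        'initializeParams': {
            'sourceImage': 'projects/debian-cloud/global/images/family/debian-9',
            'diskSizeGb': '100',
        },
    }],
    'networkInterfaces': [{'network': 'global/networks/default'}],
}

operation = compute.instances().insert(project=project, zone=zone, body=config).execute()
print('Instance creation started:', operation['name'])
```

The same shape could equally be expressed through the gcloud command-line tool or an infrastructure template; the point is that vCPU count, memory, disk size and GPU count are all independent knobs.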

“For infrastructure, Compute Engine and Google Container Engine allow you to run your GPU workloads with VMs or containers,” the two wrote. “For machine learning, Cloud Machine Learning can be optionally configured to utilize GPUs in order to reduce the time it takes to train your models at scale with TensorFlow.”
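For a sense of what GPU-backed TensorFlow work looks like at the code level, the short TensorFlow 1.x sketch below pins a large matrix multiplication to a GPU. It is a generic, assumed example rather than code from Google’s announcement.

```python
# Minimal TensorFlow 1.x sketch: place a heavy op on the GPU explicitly.
import tensorflow as tf

# Pin the matrix multiplication to the first GPU, if one is attached.
with tf.device('/gpu:0'):
    a = tf.random_normal([4096, 4096])
    b = tf.random_normal([4096, 4096])
    c = tf.matmul(a, b)

# log_device_placement prints where each op actually ran;
# allow_soft_placement falls back to the CPU when no GPU is available.
session_config = tf.ConfigProto(log_device_placement=True,
                                allow_soft_placement=True)
with tf.Session(config=session_config) as sess:
    sess.run(c)
```

With log_device_placement enabled, the session logs which device each operation landed on, making it straightforward to confirm that an attached GPU is actually being used.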

Google added that the new GPUs are available in four regions to begin with, namely its U.S. East and West regions, and its Europe West and Asia East regions.

Google said it’s seeing customers use the new GPUs for a range of compute-intensive tasks, including genomics, computational finance and machine learning model training. It said the choice of two different chips gives customers more flexibility, since they can pick the one best suited to their workloads while balancing performance and price.

Image: Google
