UPDATED 11:00 EDT / APRIL 30 2018


Google bumps up its cloud performance with Nvidia’s latest graphics chips

Google LLC’s public cloud is getting a little more oomph for more intensive workloads such as machine learning and high-performance computing with the launch today of a new batch of hardware accelerators from Nvidia Corp.

Hardware accelerators are used to boost the performance of systems beyond what regular computer processors can do. The most common forms of accelerators are graphics processing units such as those made by Nvidia, along with field-programmable gate arrays sold by companies such as Intel Corp. that can be reprogrammed on the fly for different workloads.

For the most intensive work, Google is touting Nvidia’s latest Tesla V100 GPUs, which are now available in beta for customers in its us-west1, us-central1 and europe-west4 regions.

The new V100s can boost performance in deep learning and high-performance computing workloads by up to 40 percent, according to Google. They can also be used to power software containers, which provide a more flexible development environment that enables applications to be built once and run anywhere.

Google product managers Chris Kleban and Ari Liberman explained in a blog post how the V100s can be used with Kubernetes Engine, which is a cluster manager and orchestration system for running Docker containers. Google’s Cluster Autoscaler feature takes care of the heavy lifting, automatically creating nodes powered by the V100s and scaling them up or down as the workload demands.
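As a sketch of how this looks in practice, a Kubernetes Engine pod spec along these lines asks the scheduler for a single V100; the Cluster Autoscaler then provisions a matching GPU node if none is available. The pod name, container name and image are hypothetical examples, not taken from Google's post:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: training-job            # hypothetical pod name
spec:
  nodeSelector:
    # GKE labels GPU nodes with the accelerator type attached to them.
    cloud.google.com/gke-accelerator: nvidia-tesla-v100
  containers:
  - name: trainer               # hypothetical container name
    image: nvidia/cuda:9.0-runtime   # example CUDA base image
    resources:
      limits:
        nvidia.com/gpu: 1       # request one V100 for this container
```

With a limit like this in place, Kubernetes schedules the pod only onto a node that actually has the requested GPU attached.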

The V100 GPUs are priced at $2.48 per hour for on-demand use and $1.24 per hour in the case of preemptible virtual machines, which are Google’s lowest-cost instances used for less pressing workloads.

As for those seeking more of a balance between cost and performance, the Nvidia P100 GPUs might be a better alternative. Now generally available in Google’s europe-west4, us-west1, us-central1, us-east1, europe-west1 and asia-east1 regions, the P100s deliver lower performance at a reduced cost.

Users can run up to four P100s with 96 vCPUs and 624 gigabytes of memory in a single virtual machine, Google said. Prices are significantly lower, at just $1.46 per hour for on-demand use and 73 cents per hour for preemptible instances.
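Using the hourly rates quoted above, a rough back-of-the-envelope comparison shows how the options stack up for a longer-running job. The 100-hour job length here is an assumption chosen purely for illustration:

```python
# Hourly GPU rates quoted by Google (USD).
V100_ON_DEMAND = 2.48
V100_PREEMPTIBLE = 1.24
P100_ON_DEMAND = 1.46
P100_PREEMPTIBLE = 0.73

def job_cost(rate_per_hour: float, hours: float, gpus: int = 1) -> float:
    """Total accelerator cost for a job running `hours` across `gpus` GPUs."""
    return round(rate_per_hour * hours * gpus, 2)

# Hypothetical 100-hour job on a single GPU:
print(job_cost(V100_ON_DEMAND, 100))    # 248.0
print(job_cost(V100_PREEMPTIBLE, 100))  # 124.0 -- half the on-demand rate
print(job_cost(P100_PREEMPTIBLE, 100))  # 73.0
```

The preemptible discount is exactly half in both cases, so the trade-off is purely whether a workload can tolerate being interrupted.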

Image: Rawpixel/Pixabay
