UPDATED 09:00 EDT / JANUARY 16 2019

INFRA

Nvidia brings its Tesla T4 GPUs for machine learning to Google’s cloud

Google LLC today announced it’s making Nvidia Corp.’s low-power Tesla T4 graphics processing units available on its cloud platform in beta test mode.

The move is significant because Nvidia’s GPUs are the most popular hardware used for machine learning. That’s a subset of artificial intelligence that uses software to emulate roughly how the human brain works to enable computers to teach themselves rather than needing to be programmed explicitly.

The beta release follows several months of testing by select customers in a private alpha program.

Nvidia’s T4 GPUs are designed for workloads such as AI, data analytics, high-performance computing and graphics. They’re based on the company’s new Turing architecture and boast multiprecision Turing Tensor Cores plus new RT Cores. Each T4 chip comes with 16 gigabytes of memory and is capable of delivering up to 260 trillion operations per second, or TOPS, of compute performance.

Thanks to their low energy requirements, the T4 GPUs are an ideal choice for running workloads at the edge of networks, Nvidia said.

And in a blog post today, Google stressed that the T4s are also well-suited to inference workloads, in which fully trained machine learning models make predictions on new data.

“Its high performance characteristics for FP16, INT8 and INT4 allow you to run high scale inference with flexible accuracy/performance tradeoffs that are not available on any other GPU,” Google product manager Chris Kleban said.
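To illustrate the accuracy/performance tradeoff Kleban describes, here is a minimal sketch of the idea behind INT8 quantization in plain Python. The function names and values are hypothetical and not tied to any Nvidia or Google API; real toolchains such as TensorRT handle this calibration automatically.

```python
# Hypothetical sketch: map FP32 values into the signed 8-bit range
# [-127, 127] using a single per-tensor scale factor.
def quantize_int8(values):
    scale = max(abs(v) for v in values) / 127.0  # one scale per tensor
    q = [round(v / scale) for v in values]       # integers storable in 8 bits
    return q, scale

def dequantize(q, scale):
    return [x * scale for x in q]

weights = [0.52, -1.30, 0.07, 0.91]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# The small rounding error in `restored` is the accuracy cost paid for
# the 4x memory and bandwidth saving versus FP32; dropping further to
# INT4 widens that error in exchange for more throughput.
```

Hardware with dedicated low-precision units, like the T4's Tensor Cores, executes the integer math far faster than the equivalent FP32 operations, which is why inference workloads tolerate the rounding.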

The availability of Nvidia’s T4 GPUs on Google’s cloud should benefit both companies, Holger Mueller, principal analyst and vice president of Constellation Research Inc., told SiliconANGLE. That’s because machine learning is a key driver of cloud adoption, he said.

“Nvidia getting its Tesla GPUs into the Google Cloud is a major win, as it ensures that its customers can easily tap into it,” Mueller said. “It’s a good move for Google as well, since machine learning load hinges on many GPU platforms and so it allows customers to transfer loads more easily to Google Cloud.”

Google said the Nvidia Tesla T4 GPUs are available in beta starting today across several regions, including the United States, Europe, Brazil, India, Japan and Singapore. Pricing starts at 29 cents per hour per GPU on preemptible virtual machine instances. Pricing for on-demand instances starts at 95 cents per hour.
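For a rough sense of the gap between those two quoted rates, the arithmetic for a month of continuous single-GPU use works out as follows. This is a back-of-the-envelope sketch using only the prices in the announcement; actual bills also include VM, disk and network costs.

```python
# Compare the quoted per-GPU hourly rates over a month of continuous use.
PREEMPTIBLE_RATE = 0.29  # dollars per GPU-hour (quoted beta price)
ON_DEMAND_RATE = 0.95    # dollars per GPU-hour (quoted beta price)
HOURS_PER_MONTH = 730    # average hours in a month

preemptible_monthly = PREEMPTIBLE_RATE * HOURS_PER_MONTH  # ~$211.70
on_demand_monthly = ON_DEMAND_RATE * HOURS_PER_MONTH      # ~$693.50
savings = 1 - PREEMPTIBLE_RATE / ON_DEMAND_RATE           # ~69% discount
```

The catch, of course, is that preemptible instances can be reclaimed by Google at any time, so the cheaper rate suits fault-tolerant batch inference jobs rather than latency-sensitive serving.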

Photo: Nvidia
