UPDATED 09:00 EST / JANUARY 16 2019


Nvidia brings its Tesla T4 GPUs for machine learning to Google’s cloud

Google LLC today announced it’s making Nvidia Corp.’s low-power Tesla T4 graphics processing units available on its cloud platform in beta test mode.

The move is significant because Nvidia’s GPUs are the most popular hardware used for machine learning. Machine learning is a subset of artificial intelligence in which software loosely modeled on the human brain enables computers to learn from data rather than being programmed explicitly.

The beta availability follows several months of testing by select customers in a private alpha release.

Nvidia’s T4 GPUs are designed for workloads such as AI, data analytics, high-performance computing and graphics design. They’re based on the company’s new Turing architecture and boast multiprecision Turing Tensor Cores plus new RT cores. Each T4 chip comes with 16 gigabytes of memory and is capable of delivering up to 260 trillion operations per second, or TOPS, of compute performance.

Thanks to their low energy requirements, the T4 GPUs are an ideal choice for running workloads at the edge of networks, Nvidia said.

And in a blog post today, Google stressed that the T4s are also the best choice for running inference workloads, in which fully trained machine learning models make predictions on new data.

“Its high performance characteristics for FP16, INT8 and INT4 allow you to run high scale inference with flexible accuracy/performance tradeoffs that are not available on any other GPU,” Google product manager Chris Kleban said.
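To make the tradeoff Kleban describes concrete, here is a minimal sketch (illustrative only, using nothing beyond NumPy; not Google or Nvidia code) of what reduced precision means for inference: quantizing FP32 weights down to INT8 accepts a small reconstruction error in exchange for much cheaper arithmetic, which is exactly what the T4’s Tensor Cores accelerate in hardware.

```python
import numpy as np

# Symmetric post-training quantization of a weight matrix:
# FP32 values are mapped onto the signed 8-bit range [-127, 127].
rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 4)).astype(np.float32)

scale = np.abs(weights).max() / 127.0          # one step of the INT8 grid
w_int8 = np.round(weights / scale).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale  # approximate reconstruction

# Rounding bounds the per-element error by half a quantization step.
max_err = np.abs(weights - w_dequant).max()
print(f"max quantization error: {max_err:.4f}")
```

Dropping further to INT4 halves the grid again, widening the error but raising throughput — the flexible accuracy/performance dial the blog post refers to.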

The availability of Nvidia’s T4 GPUs on Google’s cloud should benefit both companies, Holger Mueller, principal analyst and vice president of Constellation Research Inc., told SiliconANGLE. That’s because machine learning is a key driver of cloud adoption, he said.

“Nvidia getting its Tesla GPUs into the Google Cloud is a major win, as it ensures that its customers can easily tap into it,” Mueller said. “It’s a good move for Google as well, since machine learning load hinges on many GPU platforms and so it allows customers to transfer loads more easily to Google Cloud.”

Google said the Nvidia Tesla T4 GPUs are available in beta starting today across several regions, including the United States, Europe, Brazil, India, Japan and Singapore. Pricing starts at 29 cents per hour per GPU on preemptible virtual machine instances. Pricing for on-demand instances starts at 95 cents per hour.
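A quick back-of-the-envelope calculation from those quoted per-GPU rates (the month-of-continuous-use figure is a hypothetical; instance vCPU and memory charges are not included):

```python
# Per-GPU hourly prices quoted by Google for the T4 beta.
PREEMPTIBLE_PER_HOUR = 0.29
ON_DEMAND_PER_HOUR = 0.95

hours = 24 * 30  # one hypothetical month of continuous use
preemptible_cost = PREEMPTIBLE_PER_HOUR * hours
on_demand_cost = ON_DEMAND_PER_HOUR * hours

print(f"preemptible: ${preemptible_cost:.2f}/month")  # $208.80
print(f"on-demand:   ${on_demand_cost:.2f}/month")    # $684.00

savings = 1 - PREEMPTIBLE_PER_HOUR / ON_DEMAND_PER_HOUR
print(f"preemptible discount: {savings:.0%}")         # 69%
```

The roughly 69 percent discount comes with the usual preemptible caveat: Google can reclaim the instance at any time, so it suits fault-tolerant batch work rather than latency-sensitive serving.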

Photo: Nvidia
