UPDATED 14:43 EDT / AUGUST 06 2018

INFRA

Google Cloud adds Nvidia P4 graphics cards for AI and virtual desktops

Google LLC today made a new graphics accelerator available in its public cloud to provide better support for artificial intelligence and virtual desktop workloads.

The chip in question is market leader Nvidia Corp.’s P4. It brings the number of Nvidia graphics processing units that Google’s cloud platform supports to four, all of which have been added since February 2017. The pace at which the company has expanded its GPU lineup reflects just how fast enterprise AI adoption is growing. 

With a starting price of 60 cents per hour, the P4 is the second most affordable of the four available GPUs. The chip can provide 5.5 teraflops of performance when processing single-precision values, which take up 4 bytes each. A single teraflop equals 1 trillion floating-point operations per second, a standard unit of computing power.
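To give a sense of how customers put the new chip to work, the sketch below uses the Compute Engine API through the google-api-python-client library to spin up a virtual machine with a single P4 attached. The project ID, zone, machine type and boot image are illustrative assumptions rather than details from Google's announcement.

# Minimal sketch, assuming the google-api-python-client package is installed
# and Application Default Credentials are configured. Project ID, zone,
# machine type and boot image are hypothetical placeholders.
import googleapiclient.discovery

compute = googleapiclient.discovery.build("compute", "v1")

project = "my-project"     # hypothetical project ID
zone = "us-central1-a"     # assumed to be a zone where P4s are offered

config = {
    "name": "p4-example-vm",
    "machineType": "zones/%s/machineTypes/n1-standard-4" % zone,
    "disks": [{
        "boot": True,
        "autoDelete": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-9",
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
    # Attach one Nvidia P4 accelerator to the instance.
    "guestAccelerators": [{
        "acceleratorType": "zones/%s/acceleratorTypes/nvidia-tesla-p4" % zone,
        "acceleratorCount": 1,
    }],
    # GPU instances can't be live-migrated, so they must stop for host maintenance.
    "scheduling": {"onHostMaintenance": "TERMINATE"},
}

operation = compute.instances().insert(project=project, zone=zone, body=config).execute()
print(operation["name"])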

Nvidia has also equipped the P4 with 8 gigabytes of GDDR5 memory, a type of RAM designed specifically for graphics cards. This onboard memory is faster than standard system memory because it sits closer to the GPU cores, which cuts latency.

In AI deployments, Google sees its new cloud-based P4s being used mainly for machine learning inference. That's the data processing neural networks perform once they're in production, after they have been trained. Training itself is a different and more demanding task that is often better served by more powerful GPUs.
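To make the inference use case concrete, a minimal sketch might look like the following: a model that was trained elsewhere is loaded onto a GPU-backed instance and used to classify a single batch of data. The model file and input shape are assumptions for illustration, not details from Google's announcement.

# Minimal inference sketch with TensorFlow; the model file and input shape
# are hypothetical. TensorFlow places the computation on an available GPU
# (such as a P4) automatically.
import numpy as np
import tensorflow as tf

# Load a model that was trained elsewhere, typically on more powerful hardware.
model = tf.keras.models.load_model("trained_model.h5")

# Classify one batch of synthetic input data.
batch = np.random.rand(1, 224, 224, 3).astype("float32")
predictions = model.predict(batch)
print(predictions.argmax(axis=-1))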

The P4 lends itself just as well to virtual desktop environments. It supports Grid, software from Nvidia that gives virtual machines pass-through access to GPU resources. For good measure, Google offers access to a tool from partner Teradici Inc. that can stream footage of applications running inside a VM to an employee’s local device.

The third and final use case the company is targeting with its cloud-based P4s is video streaming. The chip has three video processing engines that, according to Nvidia, can transcode up to 35 high-definition streams in real time.

Nvidia is a key partner in Google’s efforts to address the growing role of GPUs in companies’ technology strategies. That said, Google is not fully reliant on the chipmaker for AI processors: the company also offers cloud customers its Tensor Processing Units, internally designed chips customized for running neural networks, each of which can provide a massive 180 teraflops of computing power.

Image: Google
