UPDATED 14:43 EDT / AUGUST 06 2018

INFRA

Google Cloud adds Nvidia P4 graphics cards for AI and virtual desktops

Google LLC today made a new graphics accelerator available in its public cloud to provide better support for artificial intelligence and virtual desktop workloads.

The chip in question is market leader Nvidia Corp.’s P4. It brings the number of Nvidia graphics processing units that Google’s cloud platform supports to four, all of which have been added since February 2017. The pace at which the company has expanded its GPU lineup reflects just how fast enterprise AI adoption is growing. 
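Spinning up one of these GPUs uses Google's standard accelerator flags on the `gcloud` command line. A sketch (the zone, machine type and image here are illustrative assumptions; P4 availability varies by region):

```shell
# Create a VM with a single Tesla P4 attached. Zone, machine type and
# image are placeholders -- check regional availability before using them.
gcloud compute instances create p4-demo \
    --zone us-central1-a \
    --machine-type n1-standard-4 \
    --accelerator type=nvidia-tesla-p4,count=1 \
    --maintenance-policy TERMINATE \
    --image-family ubuntu-1804-lts \
    --image-project ubuntu-os-cloud
```

GPU instances require `--maintenance-policy TERMINATE` because VMs with attached accelerators cannot live-migrate during host maintenance.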

With a starting price of 60 cents per hour, the P4 is the second most affordable of the four available GPUs. The chip can provide 5.5 teraflops of performance when processing single-precision floating-point values, which occupy 4 bytes each. A teraflop equals 1 trillion floating-point operations per second, a standard unit of computing power.
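Putting those two figures together gives a rough sense of the chip's price-performance. A back-of-the-envelope sketch using only the numbers above:

```python
# Back-of-the-envelope price/performance for the P4, from the figures above.
price_per_hour = 0.60          # USD, Google's stated starting price
single_precision_tflops = 5.5  # trillion FP32 operations per second

flops = single_precision_tflops * 1e12  # raw operations per second
cost_per_tflop_hour = price_per_hour / single_precision_tflops

print(f"{flops:.1e} FLOPS")
print(f"${cost_per_tflop_hour:.3f} per teraflop-hour")
```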

Nvidia has also equipped the P4 with 8 gigabytes of GDDR5 memory, a memory type specifically designed for use with GPUs. Dedicated graphics memory is faster than standard system memory because it sits closer to the GPU cores, which cuts latency.

In AI deployments, Google sees its new cloud-based P4s being used mainly for machine learning inference. That's the data processing neural networks perform once they're in production, after they have been trained. Training itself is a separate and more demanding task that is often better served by more powerful GPUs.
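The distinction matters because inference is just a forward pass through an already-trained network, with no gradient computation or weight updates. A minimal NumPy sketch (the network shape and weights are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights came from an earlier training run.
W1 = rng.standard_normal((4, 8))
W2 = rng.standard_normal((8, 3))

def infer(x):
    """Forward pass only -- the work an inference GPU does in production."""
    hidden = np.maximum(x @ W1, 0.0)       # ReLU activation
    logits = hidden @ W2
    e = np.exp(logits - logits.max())      # numerically stable softmax
    return e / e.sum()

probs = infer(rng.standard_normal(4))
print(probs)  # class probabilities summing to 1.0
```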

The P4 lends itself just as well to virtual desktop environments. It supports Grid, a piece of software from Nvidia that provides pass-through access to GPU resources for virtual machines. For good measure, Google offers access to a tool from partner Teradici Inc. that can stream footage of applications running inside a VM to an employee's local device.

The third and final use case the company is targeting with its cloud-based P4s is video streaming. The chip has three video processing engines that, according to Nvidia, can transcode up to 35 high-definition streams in real time.
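To put 35 simultaneous streams in perspective, the aggregate pixel throughput can be estimated. Resolution and frame rate are assumptions here, since Nvidia's figure specifies only "high-definition":

```python
# Rough aggregate pixel rate for 35 concurrent HD transcodes.
# 1080p at 30 fps is an assumption; Nvidia's claim says only "HD".
streams = 35
width, height = 1920, 1080
fps = 30

pixels_per_second = streams * width * height * fps
print(f"{pixels_per_second / 1e9:.1f} billion pixels/second")  # ≈ 2.2
```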

Nvidia is a key partner for Google’s efforts to address the growing role of GPUs in companies’ technology strategies. With that said, Google is not fully reliant on the chipmaker for AI processors. The company also offers cloud customers its Tensor Processing Units, internally designed chips customized for running neural networks that can each provide a massive 180 teraflops of computing power.
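Comparing the two headline numbers gives a raw throughput ratio, though it should be read loosely: the figures are quoted at different numeric precisions, so it is not an apples-to-apples comparison.

```python
# Raw peak-throughput ratio of a Cloud TPU to a single P4, using the
# headline numbers only. The TPU figure is not measured at FP32, so this
# ratio overstates the like-for-like gap.
tpu_tflops = 180.0
p4_fp32_tflops = 5.5

print(f"{tpu_tflops / p4_fp32_tflops:.1f}x")  # ≈ 32.7x
```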

Image: Google
