UPDATED 22:10 EDT / AUGUST 29 2018


Nvidia’s GPU cloud for deep learning can now be accessed via Microsoft Azure

Microsoft Corp. today said it’s teaming up with Nvidia Corp. so that its Azure cloud customers can tap into the graphics chipmaker’s GPU Cloud to train deep learning models.

Nvidia’s GPU Cloud offers developers preconfigured containers with software accelerated by its graphics processing units, which deliver better performance than standard central processing units for workloads such as artificial intelligence. The new offering on Azure means data scientists, developers and researchers can skip a number of integration and testing steps before running high-performance computing tasks, the companies said.
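
For a sense of what that looks like in practice, the sketch below is illustrative only and not from the announcement: the image name, tag and tooling are placeholders, and it assumes a GPU-equipped VM with Docker and Nvidia’s container runtime already installed and logged in to the NGC registry.

```python
# Illustrative sketch: launch a preconfigured NGC framework container on a
# GPU-enabled VM. Assumes Docker plus the nvidia container runtime are
# installed and the host is logged in to the nvcr.io registry.
import subprocess

# Placeholder image name and tag; actual NGC tags vary by monthly release.
IMAGE = "nvcr.io/nvidia/tensorflow:18.08-py3"

# Pull the GPU-accelerated framework image from the NGC registry.
subprocess.run(["docker", "pull", IMAGE], check=True)

# Start the container with the nvidia runtime so it can see the Tesla GPUs,
# and print the GPU inventory as a quick smoke test.
subprocess.run(
    ["docker", "run", "--runtime=nvidia", "--rm", IMAGE, "nvidia-smi"],
    check=True,
)
```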

Nvidia launched its GPU Cloud in 2017. Powered by Nvidia’s Volta GPU architecture and its Tensor Cores, it supports a range of popular deep learning frameworks, including Microsoft’s Cognitive Toolkit, TensorFlow and PyTorch.
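
As a quick illustration, and not part of the announcement, a short check inside one of those framework containers confirms the deep learning library can actually see the underlying GPUs; the PyTorch calls below are standard API, while the container setup is assumed.

```python
# Illustrative sanity check that the framework inside an NGC container
# can see the underlying Nvidia GPUs.
import torch

if torch.cuda.is_available():
    print("GPUs visible:", torch.cuda.device_count())
    print("Device 0:", torch.cuda.get_device_name(0))  # e.g. a Tesla V100
else:
    print("No CUDA device visible; check the container runtime configuration.")
```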

Nvidia GPU Cloud provides access to some of the company’s most advanced GPUs, including the Tesla V100 chips that power dozens of supercomputers around the world and deliver the intensive computing power needed for deep learning.
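
Volta’s Tensor Cores come into play when the heavy matrix math runs in half precision, which is how frameworks extract the extra deep learning throughput from a V100. The snippet below is a minimal, purely illustrative sketch rather than anything from the announcement.

```python
# Illustrative: a half-precision matrix multiply on the GPU. On Volta parts
# such as the Tesla V100, FP16 matrix math like this is eligible to run on
# the Tensor Cores, which is where the deep learning speedup comes from.
import torch

device = torch.device("cuda")
a = torch.randn(1024, 1024, device=device, dtype=torch.float16)
b = torch.randn(1024, 1024, device=device, dtype=torch.float16)
c = a @ b  # FP16 matmul dispatched to the GPU
print(c.dtype, c.shape)
```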

Microsoft is now offering access to 35 GPU-accelerated software containers for deep learning and HPC workloads on its cloud. They can run on the following Azure instance types equipped with Nvidia GPUs: NCv3 (one, two or four Tesla V100 GPUs), NCv2 (one, two or four Tesla P100 GPUs) and ND (one, two or four Tesla P40 GPUs).
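
Provisioning one of those instance types is an ordinary Azure VM deployment. The sketch below is illustrative only: the resource group, VM name and base image are placeholders, and it assumes the Azure CLI is installed and logged in.

```python
# Illustrative sketch: create an NCv3 VM (one Tesla V100) with the Azure CLI.
# Resource group, VM name and image are placeholders; in practice the NGC
# marketplace image described below could be specified instead.
import subprocess

subprocess.run(
    [
        "az", "vm", "create",
        "--resource-group", "my-ngc-rg",   # placeholder resource group
        "--name", "ngc-node-01",           # placeholder VM name
        "--size", "Standard_NC6s_v3",      # NCv3 size with one Tesla V100
        "--image", "UbuntuLTS",            # placeholder base image
        "--admin-username", "azureuser",
        "--generate-ssh-keys",
    ],
    check=True,
)
```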

Staying with deep learning, Microsoft also announced the release of a new container image on the Azure Marketplace called “Nvidia GPU Cloud Image for Deep Learning and HPC,” which is designed to make it easier to use Nvidia GPU Cloud containers on Azure. The software giant also said it’s making a high-performance computing cluster management tool called “Azure CycleCloud” generally available.

Today’s announcements come after Microsoft launched its Project Brainwave deep learning initiative in preview earlier this year. Project Brainwave is a cloud service for running AI models on Intel Corp.’s Stratix 10 field-programmable gate arrays, or FPGAs, hardware accelerators that Microsoft says can deliver better performance than both CPUs and GPUs for real-time AI workloads.

Image: Nvidia
