UPDATED 18:02 EDT / NOVEMBER 13 2017

CLOUD

Nvidia’s graphics chips are now in every cloud and high-performance computer

Already riding high on the strength of its graphics chips, Nvidia Corp. today said its newest and most powerful graphics processing unit is now available through every major provider of both high-performance computers and cloud computing services.

Although the Santa Clara, California-based chipmaker had previously revealed various companies’ plans to use its Tesla V100 GPU, announced in May, today it emphasized just how widely the chip is about to be deployed. Nvidia has said the chip is 50 percent faster than the previous top-of-the-line GPU it offered a year ago.

Now high-performance computer makers Dell EMC, Hewlett Packard Enterprise Co., IBM Corp., Lenovo Group Ltd. and Huawei Technologies Co. Ltd. will offer machines using the chips.

In the cloud, Amazon Web Services Inc. led the way in late October with an announcement that it would offer the chip on its cloud. Today, Microsoft Corp. said it would offer the chip’s power over its Azure cloud. Nvidia also noted that Alibaba Cloud, Baidu Cloud, Oracle Cloud, Google Cloud Platform and Tencent Cloud have announced cloud services based on Volta, the architecture behind the chip.

Nvidia’s Tesla V100 GPU chip (Photo: Nvidia)

“We’re in every cloud, every single server, every data center,” Nvidia Chief Executive Jensen Huang (pictured) said at the conference Monday afternoon.

Nvidia also is announcing today that its GPU cloud now supports high-performance computing applications, which until now hadn’t been available in so-called application containers the way the leading AI software frameworks have been since last month. Containers allow applications to run the same way in multiple computing environments. For now, Nvidia is starting with five major HPC applications: NAMD, RELION, GROMACS, GAMESS and LAMMPS.

“Containers dramatically simplify application deployments,” Ian Buck, vice president and general manager of accelerated computing at Nvidia, said in a press briefing Friday.
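To make that deployment model concrete, here is a minimal Python sketch of how a site might wrap a single GPU-enabled container launch so the identical invocation works on a workstation, a cluster node or a cloud instance. The registry path, image tag and Docker flags are illustrative assumptions, not details from Nvidia's announcement or documentation.

    import subprocess

    # Hypothetical NGC-style image reference; the exact repository path and
    # tag would come from Nvidia's GPU Cloud catalog.
    IMAGE = "nvcr.io/hpc/gromacs:2016.4"  # assumed tag, for illustration only

    def run_containerized(command):
        """Launch a GPU-enabled container running the given command.

        Because the application and its dependencies live inside the image,
        this same call behaves identically on any host with Docker and the
        Nvidia container runtime installed.
        """
        subprocess.run(
            ["docker", "run", "--rm", "--runtime=nvidia", IMAGE] + command,
            check=True,
        )

    if __name__ == "__main__":
        # Example: print the GROMACS version from inside the container.
        run_containerized(["gmx", "--version"])

The point of the wrapper is simply that the environment travels with the image, which is what Buck means by containers simplifying deployment.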

Nvidia also said today that Japan’s AI Bridging Cloud Infrastructure, or ABCI, supercomputer operated by the National Institute of Advanced Industrial Science and Technology, would employ nearly 4,400 V100s. Also, Nvidia said it will offer the chips in its own SaturnV supercomputer that it uses internally to develop AI products such as autonomous car technology.

The announcements, made at the SC17 supercomputing conference today in Denver, come as a raft of startups and established companies alike are designing new chips tuned for artificial intelligence, in particular deep learning neural networks, which allow machines to learn on their own rather than being explicitly programmed.

“We’re in a land rush phase,” Jeff Bier, founder and president of the embedded-chip consultancy Berkeley Design Technology Inc., told SiliconANGLE last year. Spending on hardware, software and services related to artificial intelligence is forecast to jump from $12.5 billion this year to $46 billion in 2020, according to International Data Corp.

Nvidia is seeking to cement its lead in chips applied to AI. Although GPUs were invented to rev up graphics on personal computers for gaming and scientific applications, the highly parallel architecture of the chips made them ideal for deep learning applications such as image and speech recognition and self-driving cars, which require processing huge amounts of data at the same time to provide real-time results.
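As a rough illustration of why that parallelism matters, the Python sketch below (using the open-source PyTorch framework, not anything specific to Nvidia's announcement) runs the same large matrix multiplication, the core operation inside deep learning layers, on the CPU and then, if a CUDA-capable GPU is present, on the GPU, where thousands of cores compute independent pieces of the output at once.

    import time
    import torch

    # A neural network layer is dominated by matrix multiplication, which
    # splits into many independent dot products -- ideal work for a GPU.
    a = torch.randn(4096, 4096)
    b = torch.randn(4096, 4096)

    start = time.time()
    torch.matmul(a, b)                      # runs on the CPU
    print("CPU:", time.time() - start, "seconds")

    if torch.cuda.is_available():
        a_gpu, b_gpu = a.to("cuda"), b.to("cuda")
        start = time.time()
        torch.matmul(a_gpu, b_gpu)          # same math, spread across GPU cores
        torch.cuda.synchronize()            # wait for the asynchronous GPU work
        print("GPU:", time.time() - start, "seconds")

On typical hardware the GPU path finishes the multiplication far faster than the CPU path, which is the property deep learning frameworks exploit.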

Rivals such as Intel Corp. and startup Graphcore Ltd. as well as AI-focused companies such as Google LLC point out that although GPUs work well for those applications, designing new chips specifically for deep learning could provide even faster results. So even as they use GPUs, they’re also creating their own custom-designed chips that could eventually challenge Nvidia’s GPUs.

SiliconANGLE Media’s theCUBE will be covering the SC17 conference on Tuesday in Denver, including interviews with a variety of executives.

Photo: Nvidia SC17 livestream
