Nvidia debuts cloud server platform to unify AI and high-performance computing
Hoping to maintain the high ground in artificial intelligence and high-performance computing, Nvidia Corp. late Tuesday debuted a new computing architecture that it claims will unify both fast-growing areas of the industry.
The announcement of the HGX-2 cloud-server platform (pictured), made by Nvidia Chief Executive Jensen Huang at its GPU Technology Conference in Taipei, Taiwan, is aimed at many new applications that combine AI and HPC.
“We believe the future requires a unified platform for AI and high-performance computing,” Paresh Kharya, product marketing manager for Nvidia’s accelerated-computing group, said during a press call Tuesday.
Others agree. “I think that AI will revolutionize HPC,” Karl Freund, a senior analyst at Moor Insights & Strategy, told SiliconANGLE. “I suspect many supercomputing centers will deploy HGX2 as it can add dramatic computational capacity for both HPC and AI.”
More specifically, the new architecture enables applications involving scientific computing and simulations, such as weather forecasting, as well as both training and running of AI models such as deep learning neural networks, for jobs such as image and speech recognition and navigation for self-driving cars. “These models are being updated at an unprecedented pace,” sometimes as often as hourly, Kharya said.
The HGX architecture, powered by Nvidia’s graphics processing units, or GPUs, is a data center design used in Microsoft Corp.’s Project Olympus initiative, Facebook Inc.’s Big Basin systems and Nvidia’s own DGX-1 AI supercomputers as well as services from public cloud computing leader Amazon Web Services Inc. The first version of the architecture, the HGX-1, was announced a year ago.
Essentially, the HGX-2, which consists of 16 of Nvidia’s high-end V100 GPUs, provides a building block that computer makers can use to create complete systems. Using Nvidia’s NVLink chip interconnect, it makes the 16 GPUs appear as one, the company said, delivering 2 petaflops, or 2 quadrillion floating-point operations per second, a standard measure of computing speed.
“Basically, you can now use HGX as a pool of 16 GPUs as if it were a single very large compute resource,” Freund explained.
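The arithmetic behind the 2-petaflops figure is straightforward, assuming Nvidia’s stated ~125 teraflops of mixed-precision tensor throughput per V100 (the company’s own spec figure; real-world application throughput will vary):

```python
# Illustrative arithmetic: how 16 pooled V100 GPUs reach the quoted 2 petaflops.
# Assumes Nvidia's stated ~125 teraflops of tensor-core (mixed-precision)
# throughput per V100 -- a peak spec figure, not sustained application speed.
V100_TENSOR_TFLOPS = 125   # teraflops per GPU, per Nvidia's V100 spec
NUM_GPUS = 16              # GPUs in one HGX-2 baseboard pair

total_tflops = V100_TENSOR_TFLOPS * NUM_GPUS
total_petaflops = total_tflops / 1_000   # 1 petaflop = 1,000 teraflops

print(f"{total_petaflops:.0f} petaflops")  # 16 x 125 TFLOPS = 2 petaflops
```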
Nvidia also said today that its own recently announced DGX-2 AI supercomputer was the first system to use HGX-2. It will sell for $399,000 when it’s available in the third quarter. Huang joked on a livestream of his conference keynote that it’s a “great value,” though he appeared to mean it.
Nvidia has created three classes of servers that combine central processing units with GPUs in configurations optimized for AI training, for AI inference (the running of trained models) and for supercomputing.
Kharya sought to position the HGX architecture as similar to Intel Corp.’s and Microsoft’s development of the ATX personal computer motherboard configuration standard, which led to an explosion of compatible system components made by many companies.
Among the companies that announced plans Tuesday to build HGX-2 systems are server makers Lenovo Group Ltd., Quanta Cloud Technology or QCT, Super Micro Computer Inc. and Wiwynn Corp. Also, so-called original design manufacturers Hon Hai Precision Industry Co. Ltd. (known as Foxconn), Inventec Corp., Quanta Computer Inc. and Wistron Corp., whose systems are used by some of the world’s largest cloud data centers, said they would deliver HGX-2 systems later this year.
Meanwhile, Intel is stepping up its efforts to expand its presence in AI computing, most recently with a new chip previewed last week and due for release in late 2019 that the chipmaker said is designed to build AI models much faster. Naveen Rao, head of Intel’s AI group, took a shot at Nvidia, calling its claims that GPUs are much faster than Intel’s latest Xeon central processing units a “myth.”
Image: Nvidia