Nvidia signs up big Taiwanese server makers to use its AI data center design
Nvidia Corp. has signed up four of the world’s largest makers of computers to adopt its graphics chip-powered server design for artificial intelligence work in the most demanding “hyperscale” data centers, the company announced Tuesday.
The so-called original design manufacturers, or ODMs, are the four major Taiwanese makers of computers and other electronic products: Hon Hai Precision Industry Co. Ltd. (known as Foxconn), Inventec Corp., Quanta Computer Inc. and Wistron Corp. They will be part of an Nvidia partner program providing the manufacturers early access to the HGX architecture (pictured) powered by Nvidia’s graphics processing units, or GPUs.
The data center design, to be unveiled at the Computex conference in Taipei this week, is the same one used in Microsoft Corp.’s Project Olympus initiative, Facebook Inc.’s Big Basin systems and Nvidia’s own DGX-1 AI supercomputers. Nvidia has a similar program for cloud computing providers such as Amazon Web Services Inc., but this is the first time the ODMs will get early access.
Keith Morris, Nvidia’s senior director of product management for accelerated computing, told SiliconANGLE that the company aims to provide a standard for hyperscale data centers, allowing the ODMs to get to market faster and enabling more companies to include Nvidia’s technologies in their operations.
“We’re trying to democratize AI,” Morris said. Although he didn’t say it, Nvidia also might be looking to keep its graphics chips at the center of AI work at a time when rivals such as Intel Corp. and even Google Inc. are pitching other kinds of chips for AI. In any case, the company wants to discourage manufacturers from adopting a proliferation of custom designs that could fragment the market.
The deals follow Nvidia’s May 10 introduction, at its GPU Technology Conference, of a powerful new chip tuned for artificial intelligence, in particular the deep learning neural networks responsible for recent breakthroughs such as self-driving cars and instant language translation. Based on a new Volta chip architecture that packs some 21 billion transistors onto a single large chip about the size of an Apple Watch face, it’s about 12 times faster on deep learning, by one measure, than Nvidia’s last-generation chip.
A new Nvidia DGX-1 supercomputing appliance using the chips will be for sale in the third quarter at $149,000, while the chip will be available to the other server manufacturers in the fourth quarter.
The HGX reference design is intended for the requirements of hyperscale cloud environments, the company said. It can be configured in a number of ways, combining GPUs and CPUs for high-performance computing and for both training and running deep learning neural networks. Nvidia also said the HGX is aimed at cloud providers looking to host its new GPU Cloud platform, which offers a number of open-source deep learning frameworks such as TensorFlow, Caffe2, Cognitive Toolkit and MXNet.
Donald Hwang, chief technology officer and president of Wistron’s Enterprise Business Group, said in a statement that its customers are “hungry for more GPU computing power to handle a variety of AI workloads, and through this new partnership we will be able to deliver new solutions faster.”
Nvidia has been on a tear lately, thanks largely to its graphics chips becoming a mainstay of AI work. In its first fiscal quarter, reported May 9, the company posted better-than-expected profits, more than double those of a year earlier, and investors pushed shares up 14 percent after the report.
Photo: Nvidia