UPDATED 00:30 EST / MAY 30 2017

Nvidia signs up big Taiwanese server makers to use its AI data center design

Nvidia Corp. has signed up four of the world’s largest makers of computers to adopt its graphics chip-powered server design for artificial intelligence work in the most demanding “hyperscale” data centers, the company announced Tuesday.

The so-called original design manufacturers, or ODMs, are the four major Taiwanese makers of computers and other electronic products: Hon Hai Precision Industry Co. Ltd. (known as Foxconn), Inventec Corp., Quanta Computer Inc. and Wistron Corp. They will be part of an Nvidia partner program providing the manufacturers early access to the HGX architecture (pictured) powered by Nvidia’s graphics processing units, or GPUs.

The data center design, to be unveiled at the Computex conference in Taipei this week, is the same one used in Microsoft Corp.’s Project Olympus initiative, Facebook Inc.’s Big Basin systems and Nvidia’s own DGX-1 AI supercomputers. Nvidia has a similar program for cloud computing providers such as Amazon Web Services Inc., but this is the first time the ODMs will get early access.

Keith Morris, Nvidia’s senior director of product management for accelerated computing, told SiliconANGLE that the company aims to provide a standard for hyperscale data centers, allowing the ODMs to get to market faster and enabling more companies to include Nvidia’s technologies in their operations.

“We’re trying to democratize AI,” Morris said. Although he didn’t say it, Nvidia also may be looking to keep its graphics chips at the center of AI work at a time when rivals such as Intel Corp. and even Google Inc. are pitching other kinds of chips for AI. In any case, the company wants to steer manufacturers away from a proliferation of custom designs that could limit the market.

The deals follow Nvidia’s May 10 introduction, at its GPU Technology Conference, of a powerful new chip tuned for artificial intelligence, in particular the deep learning neural networks behind recent breakthroughs such as self-driving cars and instant language translation. Based on the new Volta architecture, the chip packs some 21 billion transistors onto a single large die about the size of an Apple Watch face and, by one measure, runs deep learning workloads about 12 times faster than Nvidia’s previous-generation chip.

A new Nvidia DGX-1 supercomputing appliance using the chips will be for sale in the third quarter at $149,000, while the chip will be available to the other server manufacturers in the fourth quarter.

The HGX reference design is intended to meet the requirements of hyperscale cloud environments, the company said. It can be configured in a number of ways, combining GPUs and CPUs for high-performance computing and for both training and inference of deep learning neural networks. Nvidia also said the HGX is aimed at cloud providers looking to host its new GPU Cloud platform, which offers a number of open-source deep learning frameworks such as TensorFlow, Caffe2, Cognitive Toolkit and MXNet.
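For readers curious what using one of those frameworks on Nvidia GPU hardware looks like in practice, here is a minimal, illustrative sketch in TensorFlow, one of the frameworks named above. It is not part of Nvidia’s announcement or the HGX specification: it assumes a machine with TensorFlow 2.x and GPU support installed, and the device names and GPU count shown will vary by system.

```python
# Minimal sketch (illustrative only, not from Nvidia's HGX materials):
# check which Nvidia GPUs a framework such as TensorFlow can see on a
# GPU server, then place a small computation on one of them.
import tensorflow as tf

# Enumerate the GPUs visible to the framework (an HGX/DGX-1-class node
# typically exposes eight, but any number works here).
gpus = tf.config.list_physical_devices("GPU")
print(f"Visible GPUs: {len(gpus)}")

if gpus:
    # Explicitly place a matrix multiply on the first GPU.
    with tf.device("/GPU:0"):
        a = tf.random.normal((1024, 1024))
        b = tf.random.normal((1024, 1024))
        c = tf.matmul(a, b)
    print("Matrix multiply ran on:", c.device)
```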

Donald Hwang, chief technology officer and president of Wistron’s Enterprise Business Group, said in a statement that its customers are “hungry for more GPU computing power to handle a variety of AI workloads, and through this new partnership we will be able to deliver new solutions faster.”

Nvidia has been on a tear lately, thanks largely to its graphics chips becoming a mainstay of AI work. In its fiscal first quarter, announced May 9, the company posted better-than-expected profits, more than double those of a year earlier, and investors pushed its shares up 14 percent on the news.

Photo: Nvidia
