UPDATED 15:56 EST / MAY 23 2018

EMERGING TECH

Intel teases powerful upcoming AI chip for developing neural networks

Aiming to gain ground on rivals such as Nvidia Corp., Intel Corp. is developing a new processor that will enable companies to build artificial intelligence models potentially much faster than with its current silicon.

Naveen Rao (pictured), the head of the chipmaker’s AI group, previewed the product at Intel’s AI developer conference in San Francisco today. He said the processor is set to launch late next year under the name Nervana NNP-L1000. It’s based on technology that Intel absorbed via the $400 million acquisition of Nervana Systems Inc. in 2016.

Building on the assets it gained via the deal, the company introduced a line of specialized AI chips called Nervana Neural Network Processors last October. Rao said the upcoming NNP-L1000 is expected to be three to four times faster than the first generation of chips in the series.

The boost is all the more significant because the first-generation units already post impressive numbers. According to Intel’s figures, NNP prototypes have achieved a compute utilization rate of 96.4 percent when multiplying square matrices, an operation at the heart of the calculations neural networks perform. Utilization measures how much of a chip’s theoretical peak performance actually gets put to work on a real task.
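As a rough illustration of what that metric means, the sketch below times a matrix multiplication and compares the achieved throughput against a chip’s rated peak. The matrix size and the 40-teraflops peak figure are assumptions chosen for the example, not Intel’s published test parameters.

```python
# A minimal sketch of how compute utilization is measured for a matrix-multiply
# benchmark. N and PEAK_FLOPS are illustrative assumptions, not Intel's numbers.
import time
import numpy as np

N = 4096                 # square matrix dimension (assumed for the example)
PEAK_FLOPS = 40e12       # hypothetical rated peak of the chip, in FLOP/s

a = np.random.rand(N, N).astype(np.float32)
b = np.random.rand(N, N).astype(np.float32)

start = time.perf_counter()
c = a @ b                # general matrix multiply (GEMM)
elapsed = time.perf_counter() - start

achieved = 2 * N**3 / elapsed   # a GEMM performs roughly 2*N^3 floating-point ops
print(f"Achieved {achieved / 1e12:.2f} TFLOP/s, "
      f"utilization {achieved / PEAK_FLOPS:.1%} of peak")
```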

“Competitors talk about massive theoretical numbers, but you get low utilization in the real world,” Rao said during his morning keynote.

Rao sought to position Intel and its flagship Xeon server processors as the leader both in training neural network algorithms and in “inferencing,” the process of running the trained models for services such as speech and image recognition.

“Most AI runs on Xeon today,” he said. “We get it, Xeon wasn’t right for AI a couple years ago. But that has changed,” thanks to software optimizations Intel has made for the architecture.

Indeed, without mentioning the rival by name, he took a pointed shot at Nvidia, whose highly parallel graphics processing units have become a standard for running neural networks and other AI models. “Let’s bust a myth,” he said, referring to the claim that GPUs are 100 times faster than central processing units such as Xeon. “That’s just false,” he insisted.

Rao also hammered on the greater flexibility of Intel’s CPUs in running different kinds of computing workloads. Indeed, he brought on Kim Hazelwood, Facebook Inc.’s head of AI infrastructure, to prove the point. “One of the things that was critical to us was to have flexibility in training and inference,” she said, referring to the Xeon chips Facebook uses in its data centers. “We need to have platforms that are flexible and can run all of our applications.”

At the same time, perhaps a bit awkwardly, Rao also tried to position Intel as a supplier of other chips suited to specific AI inferencing workloads. Besides the Nervana chip, they include its line of field-programmable gate arrays, which can be reprogrammed for different jobs and underpin Microsoft Corp.’s recently launched Project Brainwave service for running AI models, and its low-power Movidius vision processing units, which can do inferencing directly on devices. “It’ll take a combination of architectures to solve the problems you want to solve,” he told the developers.

As for the Nervana chip, on top of the performance improvements, the NNP-L1000 is set to introduce support for a numerical format called bfloat16. It stores values in 16 bits rather than the 32 of standard single-precision floating point, preserving the full exponent range while trading away precision that neural networks largely don’t need, which can enable faster processing and halve memory traffic.
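To make the format concrete, here is a minimal sketch in plain NumPy (assumed here because standard NumPy ships no native bfloat16 type). A bfloat16 value is effectively an IEEE float32 with the low 16 mantissa bits dropped: same sign bit, same 8-bit exponent, only 7 bits of fraction.

```python
import numpy as np

def to_bfloat16(x):
    """Truncate float32 values to bfloat16 precision (returned as float32)."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    # Zero the low 16 mantissa bits: full float32 range, ~3 decimal digits.
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

print(to_bfloat16(3.1415926))  # 3.140625 — coarser, but the exponent survives
print(to_bfloat16(1e38))       # ~9.97e37: still in range, where float16 overflows
```

Dedicated silicon such as the NNP-L1000 implements this in hardware, of course; the sketch is only meant to show the bit layout.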

Intel sees the upcoming NNP-L1000 lending itself particularly well to the training phase of AI projects. That’s the part of a project where developers hone their models’ accuracy using sample data, which can be an incredibly time-consuming process. A chip capable of performing the task up to four times faster could make a big impact on enterprises’ AI initiatives.
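For readers unfamiliar with what that phase involves, the sketch below compresses the training loop to its essence: repeatedly adjust a model’s parameters to shrink its error on sample data. It fits a toy linear model in NumPy, an illustrative stand-in rather than anything resembling Intel’s workloads; real neural networks run the same loop over millions of parameters, which is what makes faster silicon matter.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 1000).astype(np.float32)
y = 3.0 * x + 0.5 + rng.normal(0.0, 0.1, 1000).astype(np.float32)  # sample data

w, b, lr = 0.0, 0.0, 0.1              # initial guesses and learning rate
for step in range(500):
    err = (w * x + b) - y             # prediction error on the samples
    w -= lr * 2 * np.mean(err * x)    # gradient of mean squared error w.r.t. w
    b -= lr * 2 * np.mean(err)        # gradient w.r.t. b

print(f"learned w={w:.3f}, b={b:.3f}")  # converges toward the true 3.0 and 0.5
```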

The NNP chip family is a core pillar of Intel’s strategy for addressing the growing adoption of artificial intelligence. Rao revealed that both Xeons and FPGAs will receive support for bfloat16 as well.

With reporting from Robert Hof

Image: Intel
