UPDATED 13:08 EDT / JUNE 18 2020


Intel debuts new AI-optimized Cooper Lake and Stratix chips

Intel Corp. today unveiled its latest generation of central processing units for large, high-socket-count servers, as well as a new accelerator chip that packs an artificial intelligence module developed together with Microsoft Corp.

Cooper Lake is the codename for the new server CPUs. The lineup comprises 11 processors designed to power servers with four or eight CPU sockets, a niche class of machines typically used for specialized high-performance workloads such as in-memory analytics. A high socket count is beneficial for such workloads because the more CPUs there are in a system, the more memory it can accommodate.

Intel says a server with four Cooper Lake CPUs can provide 1.9 times the performance of a comparable five-year-old machine. The fastest processor in the series, the Xeon Platinum 8380HL, offers 28 cores with a 2.9GHz base frequency for a $13,012 price tag. 

Under the hood, Cooper Lake features several technical advances. The CPUs can accommodate up to 4.5 terabytes of memory depending on the model and also support two faster memory variants: DDR4-3200 and Optane persistent memory (codenamed Barlow Pass), which offer up to 9.1% and 25% better data access speeds, respectively, than the previous generation supported.

The performance of the CPUs in a multiprocessor server also depends in no small measure on how fast they can communicate with one another to share work. Intel has made improvements in this arena as well. The interchip connections that link Cooper Lake CPUs together inside a server can carry 20.8 terabits of data per second, twice as much as in previous generation processors.

On the AI front, Intel has added support for the bfloat16 format. Bfloat16 is a 16-bit version of the 32-bit floating point format commonly used by AI models to store information: it keeps a 32-bit float’s sign bit and full 8-bit exponent but truncates the fraction, preserving nearly the same numeric range in about half as much memory. As a result, an AI model running on a Cooper Lake CPU can theoretically crunch information with nearly twice the throughput it would achieve using regular 32-bit floating point values.
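The trade-off can be illustrated with a short Python sketch that mimics a bfloat16 conversion by truncating a 32-bit float’s low 16 bits. This is an illustration only: the helper names are hypothetical, and real hardware typically rounds to nearest rather than truncating.

```python
import struct

def to_bfloat16_bits(x: float) -> int:
    # Reinterpret x as a 32-bit float and keep only the high 16 bits:
    # the sign, the full 8-bit exponent, and the top 7 mantissa bits.
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16

def from_bfloat16_bits(b: int) -> float:
    # Widen back to 32 bits by zero-filling the dropped mantissa bits.
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x

# Precision drops, but the value survives in half the storage...
print(from_bfloat16_bits(to_bfloat16_bits(3.14159)))  # 3.140625
# ...and the retained 8-bit exponent keeps float32's full numeric range,
# so very large values stay finite rather than overflowing.
print(from_bfloat16_bits(to_bfloat16_bits(1e38)))
```

Losing mantissa bits matters less for neural networks, which tolerate low precision well, than losing exponent range would, which is why bfloat16 keeps the exponent intact.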

“We remain committed to enhancing built-in AI acceleration and software optimizations within the processor that powers the world’s data center and edge solutions, as well as delivering an unmatched silicon foundation to unleash insight from data,” said Lisa Spelman, the head of Intel’s Xeon and memory division.

AI was also a focus for Intel when designing the Stratix 10 NX, a new field-programmable gate array announced today alongside Cooper Lake. FPGAs are customizable chips that a company can optimize for a specific workload and then connect to its servers to give them a speed boost. 

The Stratix 10 NX is the first FPGA from Intel to feature an AI Tensor Block, a dedicated module specifically optimized for AI workloads. Intel said the AI Tensor Block was developed in collaboration with Microsoft. It improves upon the previous-generation Stratix chips by providing 15 times the number of multipliers and accumulators, chip components well-suited to performing the matrix-matrix and vector-matrix multiplications with which AI models process data.
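The multiply-accumulate pattern those components implement is simple to state in software. A minimal Python sketch of a vector-matrix product (plain lists, no libraries; written for clarity, not speed) shows the operation that dedicated hardware replicates thousands of times in parallel:

```python
def vector_matrix_multiply(v, M):
    """Multiply a 1 x n vector by an n x m matrix, one MAC at a time."""
    rows, cols = len(M), len(M[0])
    out = [0.0] * cols
    for j in range(cols):
        acc = 0.0                    # the "accumulator"
        for i in range(rows):
            acc += v[i] * M[i][j]    # one multiply + one accumulate
        out[j] = acc
    return out

# Multiplying by the 2x2 identity matrix returns the vector unchanged.
print(vector_matrix_multiply([1.0, 2.0], [[1.0, 0.0], [0.0, 1.0]]))  # [1.0, 2.0]
```

Each pass through the inner loop is one multiplier-accumulator operation; packing more of those units onto the chip lets more of the loop run simultaneously.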

“Estimates suggest that a Stratix 10 NX FPGA running a large AI model like BERT at batch size 1 delivers 2.3X better compute performance than an NVIDIA V100,” Intel staffer Steven Leibson wrote in a blog post. The V100 was Nvidia Corp.’s top-end data center graphics card until May, when it was succeeded by the much more powerful A100.

Spelman also used the opportunity to share a glimpse into the company’s product roadmap. She told CRN that the chipmaker will follow up Cooper Lake, which is based on a 14-nanometer process, later this year with new 10-nanometer “Ice Lake” CPUs for mainstream one- and two-CPU servers. In 2021, Intel plans to introduce another batch of Xeon chips called “Sapphire Rapids” that will support one to eight sockets and feature a new instruction set, Advanced Matrix Extensions, that promises to speed up both AI training and inference.

Spelman touched upon the role of AI in Intel’s chip portfolio during an appearance on SiliconANGLE Media’s theCUBE last month.

Images: Intel
