UPDATED 19:47 EDT / NOVEMBER 17 2019

Intel unveils graphics chip and software for high-performance computing and AI

Intel Corp. is turning its attention to the convergence of high-performance computing and artificial intelligence with the launch late today of a new general-purpose graphics processing unit that’s optimized for both types of workloads.

In addition, Intel announced its oneAPI initiative, which aims to provide a simpler programming model for developing HPC and AI applications that can run on any kind of architecture, including GPUs, central processing units, field-programmable gate arrays and neural network processors.

Announced at the Supercomputing 2019 event in Colorado today, the new Ponte Vecchio discrete GPUs are built on Intel’s Xe architecture using its most advanced seven-nanometer process and have been designed especially for HPC and AI training workloads.

“Several years ago, Intel saw the need to develop one graphics architecture to scale up from traditional GPU workloads to the new HPC/exascale/AI and deep learning training,” Ari Rauch, vice president and general manager of Intel’s Visual Technologies Team and Graphics Business, said in a press briefing.

The new GPU could give Intel a better chance of competing credibly against GPU market leader Nvidia Corp., as well as Advanced Micro Devices Inc., for AI and machine learning workloads. Last week Intel also launched three new chips for training and running AI workloads.

Not least, Intel today also teased the coming of new Xeon processors, including one code-named Cooper Lake due in the first half of next year and another, called Sapphire Rapids, that will launch in 2021.

Notably, Ponte Vecchio is Intel’s first GPU to support its new Foveros 3D packaging technology. Foveros is important because it allows several types of chiplets to be stacked together in a single package, which makes it possible to mix and match components into highly specialized processors.

That will almost certainly be the case with Ponte Vecchio, since it has been designed to work in tandem with Intel’s older Xeon Scalable CPUs. Those chips feature built-in AI acceleration to analyze the massive amounts of data generated by HPC workloads.

“The industry wants more competition in data center GPUs, so there’s no question about the need for it,” Patrick Moorhead of Moor Insights & Strategy told SiliconANGLE. But he noted that the chip won’t be ready for a while, likely not until sometime next year at the earliest.

The other piece of the puzzle is Intel’s oneAPI initiative, which comprises an open specification for writing HPC and AI applications that run the same way on any computing architecture.

Intel said that being able to write these kinds of heterogeneous applications is critical in HPC and AI, as such workloads can run more efficiently by taking advantage of different architectures.

“HPC and AI workloads demand diverse architectures, ranging from CPUs, general-purpose GPUs and FPGAs, to more specialized deep learning NNPs,” Raja Koduri, senior vice president, chief architect and general manager of architecture, graphics and software at Intel, said in a statement. “Simplifying our customers’ ability to harness the power of diverse computing environments is paramount, and Intel is committed to taking a software-first approach that delivers unified and scalable abstraction for heterogeneous architectures.”

OneAPI includes a direct programming language, Data Parallel C++, along with powerful application programming interfaces and a low-level hardware interface. Intel has also developed a special software package for oneAPI that includes various compilers, libraries and analyzers. The initiative is all about making life easier for developers, Intel said, because it eliminates the need to maintain separate code bases and use multiple programming languages and tools to write these kinds of heterogeneous apps.
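
As a rough illustration of what that single-source model looks like, here is a minimal vector-addition sketch written in the SYCL 2020 style that Data Parallel C++ builds on. The kernel, buffer size and printed output are illustrative assumptions for this article, not code from Intel’s announcement.

```cpp
// Minimal SYCL 2020-style sketch of a single-source heterogeneous kernel.
// The same kernel body compiles for whatever device the runtime selects --
// a CPU, a GPU or another accelerator -- which is the code-reuse idea
// behind oneAPI's direct programming language.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  constexpr size_t N = 1024;
  sycl::queue q;  // default selector picks an available device at runtime

  // Unified shared memory visible to both host and device.
  float* a = sycl::malloc_shared<float>(N, q);
  float* b = sycl::malloc_shared<float>(N, q);
  float* c = sycl::malloc_shared<float>(N, q);
  for (size_t i = 0; i < N; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

  // One kernel source, no device-specific variants.
  q.parallel_for(sycl::range<1>(N), [=](sycl::id<1> i) {
    c[i] = a[i] + b[i];
  }).wait();

  std::cout << "c[0] = " << c[0] << " computed on "
            << q.get_device().get_info<sycl::info::device::name>() << "\n";

  sycl::free(a, q); sycl::free(b, q); sycl::free(c, q);
  return 0;
}
```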

“OneAPI is designed to provide developers simplicity to program across Intel GPUs, CPUs, FPGAs and Movidius accelerators,” Moorhead said. “It’s an abstraction layer that enables reuse of code, minimizing the need for bespoke acceleration work as it is today. The industry wants this in circumstances when code reuse is needed more than different code for each accelerator.”
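
To make the reuse point concrete, the short sketch below (again in SYCL style, with hypothetical queue names) shows how the same application code can be retargeted simply by swapping the device selector passed to a queue; the kernels submitted to those queues stay unchanged.

```cpp
// Sketch of retargeting one code base across devices via SYCL selectors.
// Illustrative only; queue names and the printed report are assumptions.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
  // Swap the selector to change where work runs; nothing else changes.
  sycl::queue cpu_q{sycl::cpu_selector_v};
  sycl::queue gpu_q{sycl::gpu_selector_v};  // throws if no GPU is present

  std::cout << "CPU queue: "
            << cpu_q.get_device().get_info<sycl::info::device::name>() << "\n";
  std::cout << "GPU queue: "
            << gpu_q.get_device().get_info<sycl::info::device::name>() << "\n";
  return 0;
}
```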

Holger Mueller, an analyst with Constellation Research Inc., told SiliconANGLE he thought oneAPI was a more important announcement than the new GPU, because it’s software that provides more value than hardware these days.

“Allowing enterprises to reuse code assets for their next-generation applications is a key benefit for executives as it enables enterprise acceleration,” Mueller said.

Intel’s new technologies, including the Ponte Vecchio GPUs and oneAPI, will form the basis of the new Aurora supercomputer system at the U.S. Department of Energy’s Argonne National Laboratory in Lemont, Illinois. Aurora will be the first supercomputer in the U.S. to boast a performance of one exaFLOP, or a quintillion floating-point operations per second, when it comes online in 2021.

Each of its compute nodes will be made up of two Xeon Scalable CPUs and six Ponte Vecchio GPUs. The system will also incorporate other Intel technologies, including Optane DC Persistent Memory and various connectivity components, which together make up Intel’s full data-centric technology portfolio.

With reporting from Robert Hof

Image: Intel
