Intel details supercomputing milestones, upcoming AI chips at SC23
Researchers have used a supercomputer powered by Intel Corp. processors to run four 1-trillion-parameter language models simultaneously.
The chipmaker detailed the milestone at Supercomputing 2023, a major industry event taking place today in Denver. The supercomputer that researchers used to run the four language models is the U.S. Energy Department’s recently installed Aurora system. Alongside its announcement of the researchers’ achievement, Intel shared new details about its upcoming Gaudi 3 and Falcon Shores artificial intelligence chips.
Exascale AI
Aurora was installed at the Energy Department’s Argonne National Laboratory earlier this year. It comprises more than 10,000 servers that feature about 21,000 central processing units and 60,000 graphics processing units from Intel. Once fully operational, Aurora is expected to rank as the world’s fastest supercomputer with more than two exaflops of performance.
Argonne National Laboratory, Intel and several other organizations have teamed up to use the system for AI development. The initiative aims to create generative AI models with more than one trillion parameters that can help speed up research projects. Engineers are training those models on datasets comprising text, code and scientific information.
At Supercomputing 2023 today, Intel disclosed that Aurora managed to run an AI model with one trillion parameters using only 64 of its 10,000-plus servers. Moreover, researchers managed to run four such models at the same time across 256 nodes. Each node, which weighs 70 pounds, includes two Intel Xeon Max Series CPUs and no fewer than six Intel Max Series GPUs.
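A back-of-envelope memory check hints at why 64 such nodes can hold a trillion-parameter model. This sketch uses assumed figures that do not appear in the article: 2 bytes per parameter for bfloat16 weights, and 128 gigabytes of HBM per GPU, matching Intel's Max 1550 accelerator.

```python
# Hypothetical back-of-envelope check. The per-GPU capacity (128 GB,
# the figure for Intel's Max 1550) and 2-byte bfloat16 weights are
# assumptions, not details from Intel's announcement.
params = 1_000_000_000_000                  # one trillion parameters
weight_gb = params * 2 / 1e9                # bfloat16 weights: ~2,000 GB

nodes, gpus_per_node, hbm_per_gpu_gb = 64, 6, 128
total_hbm_gb = nodes * gpus_per_node * hbm_per_gpu_gb  # aggregate HBM

print(f"weights: {weight_gb:,.0f} GB; aggregate HBM: {total_hbm_gb:,} GB")
```

The large headroom over the raw weights matters because a training run must also hold gradients, optimizer state and activations alongside the model itself.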
Next-generation AI chips
The Max Series GPUs in Aurora are based on an architecture called Xe HPC that Intel has developed in-house. Intel also offers a second AI processor, the Gaudi 2, that targets many of the same use cases. The Gaudi 2 is based on a design that Intel obtained through its $2 billion acquisition of startup Habana Labs Ltd. in 2019.
Intel eventually plans to merge the two product lines into a single chip series based on a unified architecture. But before then, the company will launch an upgraded version of the Gaudi 2. As part of its Supercomputing 2023 presentation, the company shared new details about that upcoming chip.
Gaudi 3, as the processor is called, will reportedly be made using a five-nanometer process. Whereas its predecessor was implemented as a single piece of silicon, Gaudi 3 will comprise two separate chiplets. Both Intel and its competitors are adopting a chiplet-based approach to building processors because it simplifies manufacturing in several respects.
One of the current-generation Gaudi 2’s main selling points is that it includes built-in Ethernet ports. This reduces the need for external networking hardware, which lowers costs. The Gaudi 3 will reportedly feature twice the networking capacity of its predecessor as well as 1.5 times more onboard memory for storing AI models’ data.
Thanks to Intel’s design upgrades, the Gaudi 3 is expected to provide four times the performance of its predecessor when crunching bfloat16 data. This is a specialized data format developed by Google LLC that many AI models use to store the information they process. The format’s popularity stems from the fact that it can help reduce the amount of memory that a neural network requires and speed up processing.
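The memory savings follow from bfloat16's layout: it keeps float32's sign bit and all eight exponent bits but only seven of the 23 mantissa bits, so a float32 value can be stored in half the space by truncating the low 16 bits of its encoding. A minimal illustration (not from the article) using only Python's standard library:

```python
import struct

def float32_to_bfloat16_bits(x: float) -> int:
    """Encode x as a 16-bit bfloat16 pattern by truncating its float32 form."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return bits >> 16  # keep sign, 8 exponent bits, top 7 mantissa bits

def bfloat16_bits_to_float32(b: int) -> float:
    """Decode a 16-bit bfloat16 pattern back to a Python float."""
    return struct.unpack("<f", struct.pack("<I", b << 16))[0]

# Two bytes instead of four per value, at the cost of mantissa precision:
approx = bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159))
```

Because bfloat16 retains the full float32 exponent range, converting between the two formats is cheap and values rarely overflow, which is a large part of why the format caught on for neural network training.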
Intel plans to merge the Gaudi chip lineup with the Max Series GPU line that powers the Aurora supercomputer into a new product portfolio dubbed Falcon Shores. Both chip lines will be forward-compatible with the portfolio: AI models written for them will also work on Falcon Shores silicon.
Intel detailed today that Falcon Shores chips will feature HBM3 memory, the latest iteration of the high-speed RAM included in many AI processors. HBM3 is faster than previous-generation hardware and uses less power. Falcon Shores products will also support oneAPI, an Intel technology that promises to reduce the amount of work involved in writing AI applications.
Faster CPUs
The third major focus of Intel’s Supercomputing 2023 announcements was its upcoming Emerald Rapids line of server CPUs. The chip series, which is set to launch next month, is based on the company’s Intel 7 process, a refined 10-nanometer node. Intel released new performance data that indicates Emerald Rapids can provide a significant speed improvement over previous-generation silicon.
The most advanced CPU in the Emerald Rapids portfolio offers 64 cores. Compared with Intel’s fastest previous-generation chip, which features 56 cores, the new CPU can run AI speech recognition applications up to 40% faster. It demonstrated a similar speed advantage in a test carried out using the LAMMPS benchmark, which measures how fast a chip can run molecular dynamics simulations.