UPDATED 16:08 EST / NOVEMBER 04 2020

INFRA

AMD chips power new ‘big memory’ computing cluster at Lawrence Livermore

The Lawrence Livermore National Laboratory has deployed a new “big memory” high-performance computing cluster dubbed Mammoth that uses chips from Advanced Micro Devices Inc. to help scientists perform COVID-19 research.

Mammoth, detailed today, consists of 64 servers each equipped with two AMD Epyc central processing units.

The system has a total of 8,192 processor cores that are said to have a peak performance of 294 teraflops, or 294 trillion calculations per second, when processing double-precision floating-point values. Double-precision floating-point values are units of data that take up 64 bits each and are popular in scientific computing because they can represent fractional numbers across a very wide range with high precision.
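The quoted figures can be sanity-checked with some quick arithmetic. The per-CPU core count below is an assumption, not stated in the article; a 64-core Epyc part is simply the configuration that lines up with the reported totals:

```python
# Rough sanity check of Mammoth's quoted specs.
servers = 64
cpus_per_server = 2
cores_per_cpu = 64          # assumed 64-core Epyc SKU (not stated in the article)

total_cores = servers * cpus_per_server * cores_per_cpu
print(total_cores)          # 8192, matching the reported total

peak_flops = 294e12         # 294 teraflops, double precision
per_core = peak_flops / total_cores
print(f"{per_core / 1e9:.1f} GFLOPS per core")  # ~35.9
```

Roughly 36 double-precision GFLOPS per core is in line with what a modern server core can sustain at peak, which suggests the 294-teraflop figure refers to theoretical peak rather than measured throughput.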

What sets Mammoth apart from most high-performance computing clusters is that the emphasis is on memory rather than raw computing power. On top of the two Epyc CPUs, each of the 64 servers in Mammoth has 2 terabytes of high-speed DRAM memory and nearly 4 terabytes of nonvolatile memory.
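Scaled across the cluster, those per-node figures add up quickly. A minimal sketch, treating the article's "nearly 4 terabytes" as a round 4 terabytes for estimation purposes:

```python
# Aggregate memory implied by the per-node figures in the article.
servers = 64
dram_per_node_tb = 2        # high-speed DRAM per server
nvm_per_node_tb = 4         # approximate; the article says "nearly 4"

total_dram_tb = servers * dram_per_node_tb
total_nvm_tb = servers * nvm_per_node_tb
print(total_dram_tb, total_nvm_tb)  # 128 TB of DRAM, ~256 TB of NVM
```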

The reason behind the unusual system design has to do with computing efficiency. As part of their COVID-19 research, scientists at Lawrence Livermore are running virus simulations and other complex workloads that require working with massive datasets.

Those datasets are in some cases so large that they can’t fit inside the memory of a standard server and so must be split up between multiple machines, which creates inefficiencies that slow down processing. The memory-optimized servers in Mammoth address this limitation.

“In our workflow, we compute binding free energies [a property of proteins] with Rosetta Flex, a code that was memory-limited on other machines to 12 or 16 simultaneous calculations per node,” explained Thomas Desautels, a computer scientist at Lawrence Livermore. “Mammoth enables us to run 128 Rosetta Flex calculations simultaneously on a single node, increasing our throughput by a factor of about eight.”
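Desautels' "factor of about eight" follows directly from the per-node numbers he cites:

```python
# Throughput gain implied by the quoted figures: Rosetta Flex was
# limited to 12-16 simultaneous calculations per node on other
# machines, versus 128 on a Mammoth node.
mammoth_jobs_per_node = 128

for old_limit in (12, 16):
    speedup = mammoth_jobs_per_node / old_limit
    print(f"vs. {old_limit} jobs/node: {speedup:.1f}x")
```

The gain works out to between roughly 8x (against the 16-job limit) and 10.7x (against the 12-job limit), consistent with the "factor of about eight" quoted in the article.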

Mammoth is said to have helped reduce the duration of some types of genomic analysis from a few days to a few hours. Moreover, Lawrence Livermore scientists now have to spend less time splitting up datasets into smaller chunks to overcome server memory limitations.

The project further validates the ability of AMD’s Epyc CPUs to support memory-intensive computing use cases. The ability to store large amounts of data in memory at once is important not only for scientific workloads but also for enterprise analytics applications and certain databases, which represent a big market.

Mammoth is the latest in a series of recent high-performance computing wins for AMD. Previously, the company teamed up with Hewlett Packard Enterprise Co.’s Cray unit to build Frontier, a 1.5-exaflop supercomputer commissioned by the Department of Energy that will take up an area the size of two basketball courts. A single exaflop equals a million trillion calculations per second.

Photo: Lawrence Livermore National Laboratory
