UPDATED 09:00 EDT / MAY 16 2017


HPE’s ‘The Machine’ draws closer to reality with 160-terabyte single-memory prototype

Hewlett Packard Enterprise Co. says its long-promised project to reinvent the computer is inching ever closer to completion with the announcement of what it calls the world’s largest single-memory computer, a creation also known as The Machine.

A new prototype unveiled today (pictured) sports 160 terabytes of directly addressable main memory, a heretofore unthinkable amount. HPE said such a machine is theoretically capable of simultaneously working with the data held in every book in the Library of Congress five times over – or approximately 160 million books. The company also said it expects the architecture will easily scale to a nearly limitless pool of 4,096 yottabytes, which is 250,000 times the entire amount of digital information estimated to exist today.
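Those comparisons can be roughly reproduced with a back-of-envelope calculation. The figures below are assumptions rather than HPE's numbers – about 1 megabyte of text per book and roughly 16 zettabytes of digital data in existence as of 2017 – so the results only approximately match the claims above.

```python
# Back-of-envelope check of the capacity comparisons quoted above.
# Assumptions (not from HPE): ~1 MB of text per book, and roughly
# 16 zettabytes of digital data in existence as of 2017.
TB, ZB, YB = 10**12, 10**21, 10**24   # decimal byte units

prototype_bytes = 160 * TB
books = prototype_bytes / (1 * 10**6)             # at ~1 MB per book
print(f"books held in memory at once: ~{books / 1e6:.0f} million")   # ~160 million

theoretical_bytes = 4096 * YB
digital_universe_2017 = 16 * ZB                   # assumed 2017 estimate
ratio = theoretical_bytes / digital_universe_2017
print(f"multiple of today's digital data: ~{ratio:,.0f}x")           # ~256,000x
```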

HPE said that with that amount of memory, it will be possible to work at the same time with every digital health record of every person on earth, every piece of data from Facebook, every trip of Google’s autonomous vehicles, and every data set from space exploration.

Yes, it works

Today’s announcement isn’t a product but rather a proof of concept. The first prototype of The Machine was shown in December at the HPE Discover EU conference in London. Rather than holding its designs close to the vest, HPE is contributing the research behind its memory fabric to the Gen-Z Consortium, an industry group that is attempting to create and commercialize a new data access technology.

HPE has pointed to The Machine as evidence that the fires of innovation still burn within a company that has been shedding big parts of its business as it slims down into what executives call a “hybrid IT” company, one focused on the mix of in-house and cloud-based information technology resources.

“This is HPE swinging for the fences because without that, it’s just a distribution company,” said George Gilbert, an analyst at Wikibon, owned by the same company as SiliconANGLE. Gilbert noted that the use of ARM processors, a Linux operating system and a relatively modest 4-terabyte-per-node memory allocation shows that HPE isn’t trying to entirely reinvent the computer just yet.

Bob Sorensen, vice president of research and technology in the High Performance Computing Group of Hyperion Research LLC, said HPE is addressing an important bottleneck at the high end. “The ability to integrate processors and memory opens up a host of new algorithms and applications that would not be workable on traditional HPCs,” he said. “The Machine offers a straightforward shared memory scheme that allows for faster and more effective software development of these new applications.”

The Machine doesn’t compete with quantum computing, fluidics, DNA computing or any of the other alternative approaches to traditional digital calculation, said Kirk Bresniker, chief architect at Hewlett Packard Labs. Rather, it complements them with a radical new approach to memory management. Alternative calculation engines “all fit into a framework for the kinds of applications we need in the future,” Bresniker said. “Part of our goal is to lower the barrier for these innovations.”

The 160 terabytes of shared memory in HPE’s prototype is spread across 40 physical nodes and interconnected by a high-performance “fabric” protocol. It runs a version of Debian Linux on Cavium Inc.’s ThunderX2 Advanced RISC Machine processors. However, the technology itself is independent of processor, memory and operating system, Bresniker said. “It’s a model of memory access that can span everything from handhelds to supercomputers,” he said.
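That layout is consistent with the per-node figure Gilbert cited earlier, as a quick arithmetic check shows:

```python
# The 160 TB pool is built from 40 fabric-attached nodes,
# which works out to the 4 TB per node mentioned above.
total_tb, nodes = 160, 40
print(total_tb / nodes, "TB per node")   # 4.0 TB per node
```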

Mileage may vary

Computing with such a large memory pool makes just about every application run faster, but mileage varies according to the use case, Bresniker said. For example, one HPE team achieved a relatively modest 15-fold performance improvement running Apache Spark on HPE Superdome servers equipped with the new memory architecture after making “surgical modifications to the middleware and data manipulation engine,” he said.

However, another team recorded a 10,000-fold improvement in performance running a Monte Carlo stock pricing analysis using pre-calculations and a large lookup table loaded in memory. Such an application could reshape the competitive landscape in financial services for companies that are quick enough to exploit it.
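HPE hasn’t published the code behind that result, so the following is only a minimal Python sketch of the general pattern the paragraph describes: simulate once up front, hold the results in a memory-resident table, and answer later pricing queries by lookup instead of re-simulation. The toy geometric-Brownian-motion pricer, the (spot, volatility) grid and every name in it are illustrative assumptions.

```python
# Minimal sketch of the precompute-then-look-up pattern described above.
# This is not HPE's code: the toy geometric-Brownian-motion pricer, the
# (spot, volatility) grid and all names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def mc_call_price(spot, strike, vol, rate=0.01, t=1.0, paths=20_000):
    """Price a European call option once, by Monte Carlo simulation."""
    z = rng.standard_normal(paths)
    terminal = spot * np.exp((rate - 0.5 * vol**2) * t + vol * np.sqrt(t) * z)
    payoff = np.maximum(terminal - strike, 0.0)
    return np.exp(-rate * t) * payoff.mean()

# Precompute prices over a coarse (spot, vol) grid. With terabytes of memory
# the grid could be made far denser and span many more parameters.
spots = np.linspace(50.0, 150.0, 41)
vols = np.linspace(0.10, 0.60, 21)
table = np.array([[mc_call_price(s, strike=100.0, vol=v) for v in vols] for s in spots])

def lookup_price(spot, vol):
    """Answer a pricing query from the memory-resident table, no re-simulation."""
    i = int(np.clip(np.searchsorted(spots, spot), 0, len(spots) - 1))
    j = int(np.clip(np.searchsorted(vols, vol), 0, len(vols) - 1))
    return table[i, j]

print(lookup_price(105.0, 0.25))   # served straight from memory
```

The speedup described in the article comes from the same idea at vastly larger scale: once the precomputed table fits entirely in a single memory pool, each query becomes a lookup rather than a fresh simulation.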

HPE’s decision to release its specs publicly rather than to try to go it alone with a new architecture is a good bet, said Hyperion’s Sorensen. “It’s all about building an ecosystem of new algorithms and developers from the entire span of advanced computing sites,” he said. “HPE could benefit significantly from making this system readily accessible to the widest possible base of developers.”

Bresniker speculated that early adopters “will be using high-performance computing and massive data analytics because we can use the memory fabric for communications and message passing at extremely high rates.” He also said the architecture will have applications in processing data using large volumes of nonvolatile memory as well as in edge applications – such as autonomous vehicles and sensors – where large amounts of information need to be processed locally for machine learning and inference purposes.

Photo: HPE
