UPDATED 00:38 EDT / MARCH 27 2015

Rex Computing to build the world’s most power-efficient processor | #OCPSummit15


When Rex Computing first opened its doors, the plan was to build the most power-efficient supercomputers in the world, said Rex Computing CEO Thomas Sohmers during an interview with theCUBE host Jeff Frick at OCP Summit 2015. Realizing that goal meant taking processing infrastructure into the company’s own hands. “We thought at that point that it may have been possible to use other people’s processors, and as we were developing that and the system we showed last year — we realized there were a lot of fundamental issues of how processors are currently designed and built,” recounted Sohmers. In response to this obstacle, he decided to see whether his company “can do a bit better.”

Rex Computing is now setting its sights on “a new processor architecture and instruction set for that new core design,” according to Sohmers. Specifically, he said the design is “based on this concept that memory movement is expensive and doing the actual computation is much cheaper.” The company is sticking to “the basic computer route,” according to Sohmers, and is “developing a 256 core version of this processor and hoping to have this chip taped out in the next 12 to 18 months,” he elaborated. Sohmers considers the work essential to making supercomputing more power-efficient.
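To make that trade-off concrete: moving an operand in from off-chip memory costs far more energy than performing an arithmetic operation on it. The short Python sketch below illustrates the point with rough, commonly cited ballpark figures for hardware of this era; the specific numbers are illustrative assumptions, not Rex Computing’s data.

    # Illustrative sketch of the "memory movement is expensive" argument.
    # The per-operation energy figures are rough ballpark values for this era
    # of hardware; they are assumptions, not Rex Computing's numbers.
    ENERGY_PER_FLOP_PJ = 20           # picojoules per double-precision operation (assumed)
    ENERGY_PER_DRAM_ACCESS_PJ = 2000  # picojoules per 64-bit off-chip DRAM access (assumed)

    def total_energy_pj(flops, dram_accesses):
        """Total energy in picojoules for a mix of arithmetic and off-chip accesses."""
        return flops * ENERGY_PER_FLOP_PJ + dram_accesses * ENERGY_PER_DRAM_ACCESS_PJ

    # 100 operations on data already sitting near the core, versus re-fetching
    # every operand from DRAM:
    print(total_energy_pj(flops=100, dram_accesses=0))    # 2,000 pJ: compute only
    print(total_energy_pj(flops=100, dram_accesses=100))  # 202,000 pJ: movement dominates

Under those assumptions, the energy bill is dominated by data movement rather than arithmetic, which is the intuition behind the design Sohmers describes.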


Why power is a big deal in supercomputing


To explain why power usage is so important in supercomputing, Sohmers pointed to the enormous task facing the United States Department of Energy (DoE). Charged with maintaining the United States’ nuclear stockpile, the DoE handles “weapons testing and simulations in addition to making sure that the current warheads are still safe,” according to Sohmers.

To help accomplish these tasks, the DoE operates “some of the world’s most powerful super computers,” including Titan, the world’s second most powerful system. By executive order, the power budget for the DoE’s machines is capped at twenty megawatts. Titan on its own, according to Sohmers, “contains about seventeen petaflops of sustained computing power.” Furthermore, he added, “the DoE wants to get that up to one exaflop – one thousand petaflops,” without exceeding the twenty-megawatt budget. Currently, Sohmers said, the DoE is achieving about three to four gigaflops per watt; hitting the exaflop target within that power cap would require roughly fifty gigaflops per watt.
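The fifty-gigaflops-per-watt figure follows directly from the numbers Sohmers cites; a quick back-of-the-envelope check in Python:

    # Back-of-the-envelope check using the figures cited in the interview.
    target_flops = 1e18            # one exaflop, i.e. 1,000 petaflops
    power_budget_watts = 20e6      # the twenty-megawatt cap
    required_gflops_per_watt = target_flops / power_budget_watts / 1e9
    print(required_gflops_per_watt)                             # 50.0

    current_gflops_per_watt = 3.5  # roughly the three to four gigaflops per watt of today
    print(required_gflops_per_watt / current_gflops_per_watt)   # ~14x improvement needed

In other words, reaching an exaflop inside that power cap calls for better than a tenfold jump in energy efficiency over today’s systems.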


Shifting the compute paradigm


Part of embracing a new approach to power efficiency is accepting that “the way that the x86 and ARM processors — all processors we use today, including GPUs — are built is for an old paradigm,” said Sohmers. The technology world has changed drastically since cluster computing first emerged in the 1990s. Back then, parallelization made sense “in terms of cost per flop, of doing the actual job,” but those constraints “aren’t the same today,” said Sohmers. Even as some of the old constraints have eased, he noted, “we have different problems” now.


A race against time to improve HPC


HPC [high-performance computing] is “very similar to embedded in the sense that with an embedded system you have a lot of constraints on power, size, and it’s really meant to be doing a specific task. HPC is basically just the warehouse-sized version of that,” Sohmers stated. He noted that the problem solving is similar, both in the difficulties faced and in the solutions required.

Sohmers elaborated, using Amazon.com, Inc. as an example: “The mainframe is focused on IO and Amazon is focused on having many different tasks and being able to spread the compute function to different things dynamically,” he said. The technologies developed for “the big iron HPC,” according to Sohmers, “flow into Amazon and the very large distributed scale.” Right now, he remarked, “we’re facing a nice problem in HPC at the top one percent.” Some large-scale companies are already feeling the pinch, and he predicted that pinch will tighten in a “two to five year time frame.” After five years, Sohmers said, “everyone else” will begin to feel it too.

Sohmers described his company as doing its best to address these difficulties before they worsen: “By focusing on the problems of the top 1 percent of computers right now, we’re going to be affecting the design and development of all computers in the future,” he stated emphatically.


The future of supercomputing


What Sohmers finds exciting are the possible side effects of “embedded and supercomputing being close together in the problem set — I think it’s going to get closer.” As self-driving cars and UAVs become a larger part of society, “you are very constrained when it comes to actual power budget and cost, but you still need a lot of actual compute power there,” said Sohmers. Furthermore, he said, “while the cloud is developing, we’re still pretty bound by latency and bandwidth.” Sohmers’ focus will be “fixing it at the compute level,” he stated.

Watch the full interview below, and be sure to check out more of SiliconANGLE and theCUBE’s coverage of OCP Summit 2015.

