UPDATED 10:23 EDT / MAY 08 2017

BIG DATA

Does memory have the mojo to crunch data that’s choking data centers?

Enterprises choking on inbound data that’s too expensive to store and too valuable to throw out need relief. But they must look outside the box for it, according to Steve Pawlowski (pictured, right), vice president of advanced computing solutions at Micron Technology Inc.

During the Micron Summit in New York, Pawlowski and Tom Eby (pictured, left), vice president of the compute and networking business unit at Micron, spoke about how memory might pick up the slack.

“One of the epiphanies I had coming to Micron was just how much we could actually do in the memory array,” Pawlowski told Dave Vellante (@dvellante) and David Floyer (@dfloyer), co-hosts of theCUBE, SiliconANGLE Media’s mobile live streaming studio. (* Disclosure below.) 

As some argue that Moore’s Law is no longer applicable, it is crucial for other parts of the data center to step in and lift what processing cores can no longer shoulder, Pawlowski explained. On this point there is an important distinction: Moore’s Law holds that the number of transistors in a dense integrated circuit doubles roughly every two years. Dennard scaling, by contrast, held that as those transistors shrank, their power density stayed roughly constant, so clock speeds could keep rising without blowing the power budget.

“I wouldn’t say Moore’s Law’s come to an end, but Dennard’s scaling has come to an end, so it’s causing us to rethink how we build our systems, because we just can’t keep up with the performance requirements,” Pawlowski stated.

Memory wild card

In other words, it is crunch time, as data centers seek out innovative ways to up performance before the avalanche of big data buries them. Growing diversity of workloads is also pressing companies to think outside of dated paradigms, and memory is grabbing their attention, according to Eby.

For example, some are training machine learning models by feeding graphics memory into GPUs; others are running machine learning classification on an FPGA [field-programmable gate array] fed by Hybrid Memory Cube memory.

“So this increasing specialization of workloads and solutions to support those workloads and specialized memories to support that are just an opportunity for more value-add for a memory company like Micron,” Eby said.

So just how much heavy lifting can memory do? Actual in-memory cores are possible, but they likely cannot hit several gigahertz, Pawlowski said. Nonetheless, strategically positioning compute as close to the data in memory as possible can squeeze more performance out of it, he added.

“This is what we’re doing at Micron — if the data is critical, it stays resident and you move computing to the data,” he stated.

Sending all of that data a long way is inefficient, because often the processors work in increments and will take a big chunk of data, add one and send it back, Pawlowski explained. “That’s just a tremendous waste of time and energy. And you can do a lot of that work closer and closer to memory,” he said.
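The pattern Pawlowski describes can be sketched in a few lines. This is purely illustrative — the function names and "traffic" accounting are hypothetical, not Micron's API — but it shows why shipping a whole block of data to the processor for a one-element update is wasteful compared with doing that tiny operation where the data lives.

```python
# Illustrative sketch (hypothetical, not Micron's API): the cost of shipping
# a whole block of data to the processor just to apply a tiny update, versus
# applying that update where the data resides.

data_block = list(range(1024))  # imagine this lives in a memory module

def update_via_processor(block):
    # Processor-centric pattern: move the whole block over the bus,
    # increment one element, then write the whole block back.
    local_copy = list(block)          # "transfer" to the CPU
    local_copy[0] += 1                # the actual useful work
    block[:] = local_copy             # "transfer" back to memory
    return len(local_copy) * 2        # elements moved across the bus

def update_near_memory(block):
    # Near-memory pattern: the small operation executes next to the data,
    # so only a command crosses the bus, not the data itself.
    block[0] += 1
    return 1                          # roughly one command's worth of traffic

print(update_via_processor(data_block))   # 2048 elements shuttled for one increment
print(update_near_memory(data_block))     # 1: the work went to the data instead
```

The same increment happens either way; only the traffic differs, which is the "tremendous waste of time and energy" Pawlowski is pointing at.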

One might think that an advanced network would help, but its powers are limited, according to Pawlowski. “5G implementations will certainly add a lot in terms of a more robust, reliable cellular capability, but it’s still not going to be able to provide the kind of backhaul you need if you’re just sending raw data to the data center,” he said.

Also, companies need to start thinking about whittling down the amount of data they store due mostly to cost, Pawlowski added. “An exabyte of data in flash is roughly about $172 million if you look at the current cost per gigabyte equivalent, so I’m not really sure you can store all that data,” he stated.
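Pawlowski’s figure is easy to sanity-check: an exabyte is a billion gigabytes (decimal units), so $172 million per exabyte works out to about 17 cents per gigabyte of flash.

```python
# Back-of-the-envelope check of the quoted flash cost, using decimal units.
cost_per_exabyte = 172e6            # dollars, as quoted
gigabytes_per_exabyte = 1e9         # 1 EB = 10^9 GB
cost_per_gigabyte = cost_per_exabyte / gigabytes_per_exabyte
print(cost_per_gigabyte)            # 0.172 dollars per gigabyte
```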

Compute outposts at the edge

Pawlowski’s prescription is to furnish the edge with as much storage and compute power as possible: “enough of it to be able to say what’s relevant and what’s not. And those algorithms have to change constantly.”

Tying connected devices, data and data centers together is admittedly a daunting task, he stated. “NVMe over Fabric is a good step, but I actually see that at some point in time, there will be an abstracted interface that will become the network interface,” he predicted.

In addition, other memory technologies Micron is using, such as the key-value store (a storage interface for associative arrays), have potential for unexpected use cases.
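The key-value idea can be sketched with a plain dictionary standing in for the device. The class and method names below are hypothetical, not a Micron interface; the point is that data is addressed by key rather than by block number, pushing lookup work into the storage layer itself.

```python
# Minimal key-value-store sketch: a dict stands in for the device, and the
# put/get interface names are hypothetical, not any vendor's actual API.
class KeyValueStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        # Data is addressed by key, not by a block offset the host computes.
        self._data[key] = value

    def get(self, key, default=None):
        return self._data.get(key, default)

store = KeyValueStore()
store.put("sensor:42", b"\x01\x02")
print(store.get("sensor:42"))  # b'\x01\x02'
```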

“I can actually see building deep neural nets out of these devices, because we will have the capability. Training will occur somewhere else, but once we’ve got those trained weights, we can actually run those devices pretty well,” Pawlowski concluded.
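The split Pawlowski describes — train elsewhere, run fixed weights on the device — is just a forward pass. A toy version, with made-up weights and no claim about Micron’s hardware, looks like this:

```python
# Tiny neural-net forward pass with pre-trained weights. Illustrative only:
# the weights are invented, and nothing about Micron's devices is implied.
def relu(x):
    return [max(0.0, v) for v in x]

def dense(x, weights, bias):
    # One fully connected layer: y = W @ x + b
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, bias)]

# "Training will occur somewhere else": the weights arrive fixed,
# and only inference runs on the device.
W1 = [[0.5, -0.2], [0.1, 0.4]]
b1 = [0.0, 0.1]
W2 = [[1.0, -1.0]]
b2 = [0.0]

def infer(x):
    return dense(relu(dense(x, W1, b1)), W2, b2)

print(infer([1.0, 2.0]))  # a single-element output, roughly -0.9
```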

Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s independent editorial coverage of Micron Summit 2017. (* Disclosure: TheCUBE is a paid media partner at the Micron Summit. The conference sponsor, Micron, does not have editorial oversight of content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
