UPDATED 05:57 EST / FEBRUARY 27 2014

NEWS

Predictions on the future data center: Limitless + contextual | #OCPSummit

The Open Compute Summit, now in its fifth edition, brought together in San Jose the tech professionals interested in hacking conventional computing infrastructure and talking all things Open Compute.

Greg Huff, CTO of LSI, one of the newest members of OCP, was a featured speaker on day one, and his presentation laid out LSI's vision for the future of the data center and the valuable role OCP plays in it.

“We are a new member of the Open Compute Foundation, and a contributing member as well. We started getting excited about the open concept innovation a little over two years ago. As a storage technology company, I am going to explain what motivated us to get into open ecosystem innovation, what drives us and what we expect to get out of it,” Huff began.

Titled “The Datacentered Future,” his presentation laid out LSI’s view of how services and data centers are going to evolve over time.

While the richness of interaction, the growth in tablet and smartphone use, data-connected services and the coming of wearables are widely recognized, “people lose sight of what opportunities are going to be in this world of connected devices and connected data,” Huff believes.

He went on to give an example:

“Let’s say you have a regional area where you are collecting fine-grained information about climate: soil samples for moisture, humidity, temperature, etc. That might be very useful if you could do analytics on it; it could help manage scarce resources, plan better crops. Over a broader geographical area, the data becomes interesting for commodities traders.”

As Huff pointed out, a lot of the innovation, value creation and optimization in the future is going to be centered on the data you can gather and correlate in order to extract insight.

That is, in Huff’s view, one way to interpret the term “data center”.

The other is to step back and realize that none of the interesting things actually occur at the endpoint; it’s not going to happen on the wearable, on the tablet, or on an internet-connected meteorological station in the field.

  • The data center is where it’s at

“The data center, at whatever scale or scope or deployment, is the place where this is going to happen,” said Huff. “To really get these interesting outcomes, you need nearly limitless depth and breadth of data. The more you have and the longer periods of time you have it for, the better answers you’re going to get. You need scale, concurrent execution, billions of interactions. These things will have to deliver nearly real-time response for some of these online services. As some people at Microsoft highlighted, ‘these things will have to be globally deployed, but also managed locally,’” he clarified.

Further explaining LSI’s interest in OCP, Huff added: “Open Compute is about the ability to scale computing infrastructure in the most efficient and economical way possible to achieve more work per dollar.”

Contributions to OCP

 

“LSI’s technology is pretty ubiquitous in storage infrastructure, and our storage solutions are very important in the data center,” stated Huff.


LSI is contributing two storage infrastructure reference designs to the OCP.

1) The first is a board design for a 12Gb SAS Open Vault storage enclosure.

The Open Vault is a very cost-effective storage solution with modular I/O built for the Open Rack. It is designed to optimize the performance of 12Gb SAS infrastructure using LSI’s DataBolt bandwidth-optimizer technology.

LSI’s DataBolt aggregates the performance of storage devices, enabling up to 60 percent higher bandwidth from existing SATA drives in the Open Vault enclosure.
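To make that figure concrete, here is a minimal back-of-the-envelope sketch of the rate-matching problem a bandwidth aggregator addresses. The lane count, drive count and the simple min() model are assumptions for illustration, not LSI specifications, which is why the ideal gain it prints is higher than the "up to 60 percent" LSI quotes for real workloads.

```python
# Simplified, hypothetical model of what a store-and-forward bandwidth
# aggregator in a SAS expander buys you. The figures are illustrative
# assumptions, not LSI specifications.

HOST_LANES = 4        # assumed wide-port width between server and enclosure
HOST_LANE_GBPS = 12   # 12Gb/s SAS host-side links
DRIVE_GBPS = 6        # existing 6Gb/s SATA drives behind the expander
DRIVES = 30           # assumed drive count in the enclosure

# Without buffering, a host lane talking to a 6Gb/s device is
# rate-matched down to the device's speed.
without_aggregation = HOST_LANES * min(HOST_LANE_GBPS, DRIVE_GBPS)

# With buffering in the expander, the host-side lanes can run at their
# full 12Gb/s rate as long as enough drives are feeding them.
with_aggregation = min(HOST_LANES * HOST_LANE_GBPS, DRIVES * DRIVE_GBPS)

gain = (with_aggregation / without_aggregation - 1) * 100
print(f"{without_aggregation} -> {with_aggregation} Gb/s "
      f"({gain:.0f}% ideal gain; LSI quotes up to 60% in practice)")
```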

2) LSI is also contributing a design from its Nytro XP6200 series of PCIe flash accelerator cards, purpose-built to meet the requirements of Open Compute and other hyperscale servers.

Nytro XP6209 is a 1TB flash card providing hyperscale cloud datacenters with accelerated performance, optimized power and an overall lower-cost-per-gigabyte PCIe flash solution for server-based applications.

LSI’s Role in the OpenStack Ecosystem

 

  • Driving easy-to-use, high-performance solutions through multiple strategic partners
  1. Integrating LSI’s storage capabilities with OpenStack through the partner community (a minimal example of provisioning storage through OpenStack follows below)
  2. Leveraging key technologies to accelerate performance for Software Defined Storage
  • Private cloud solution with Nebula
  1. Built on LSI’s pooled storage
  2. Joint demo at Booth B16
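For readers unfamiliar with what "integrating storage capabilities with OpenStack" means in practice, here is a minimal, hypothetical sketch of the consumer side: asking the OpenStack Block Storage (Cinder) API for a volume, which whatever backend the operator has configured then carves out of its pool. The endpoint, token and volume type are placeholders, not details from LSI's integration.

```python
import requests

# Hypothetical values; a real deployment obtains the endpoint and token
# from Keystone rather than hard-coding them.
CINDER_URL = "http://controller:8776/v2/TENANT_ID"   # Block Storage (Cinder) v2 API
HEADERS = {"X-Auth-Token": "KEYSTONE_TOKEN",
           "Content-Type": "application/json"}

# Request a 100GB volume; "pooled-storage" is a made-up volume type that an
# operator would map to a specific backend (LSI-based or otherwise).
payload = {"volume": {"name": "demo-volume",
                      "size": 100,
                      "volume_type": "pooled-storage"}}

resp = requests.post(f"{CINDER_URL}/volumes", headers=HEADERS, json=payload)
print(resp.status_code, resp.text)
```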

Predictions

 

Looking at where this is all going, Huff offered a few ideas of what the future data center is going to look like:

1. Limitless breadth and depth of data – today, when it comes to storing data in bulk, there is nothing better out there than a hard disk drive.

Some people predict their demise, but when the leaders of this industry focus on this segment, they should be talking not about products that have been modified to work in the data center, but about drives architected exactly for that purpose.

2. Contextual Instantaneous Retrieval – If you want rapid response from these online services, you won’t find it in a hard drive.

Today, processors and their caches are incredibly fast. As soon as you leave that domain and enter the storage domain, there is a huge latency penalty: roughly 100,000x versus a memory access.

Over the last few years NAND flash has gained a lot of traction and solved a lot of technical problems for a lot of people, because it basically reduces that penalty by about 100x and is a tremendous accelerator for contextual data. As we go forward, though, this isn’t going to be enough, or even close.
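Put as a worked example, those ratios look roughly like this. Only the 100,000x and 100x ratios come from the talk; the absolute DRAM figure is a common ballpark assumed here for illustration.

```python
# Rough latency ladder using the ratios from the talk.
dram_ns = 100               # ~100 ns main-memory access (assumed ballpark)
hdd_ns = dram_ns * 100_000  # "100,000x vs a memory access"
flash_ns = hdd_ns / 100     # NAND flash "reduces that by 100x"

for name, ns in (("DRAM", dram_ns), ("NAND flash", flash_ns), ("Hard disk", hdd_ns)):
    print(f"{name:>10}: {ns:>12,.0f} ns  ({ns / 1e6:.3f} ms)")
```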

There is a set of new technologies coming, next-generation NVMs (non-volatile memories), that span the world between memory and storage to a degree that NAND flash can’t.

Phase-change memory (PCM) heats a material to switch it between amorphous and crystalline states, encoding a one versus a zero, while spin-torque transfer memory behaves more like DRAM. This full hierarchy is going to get filled in, and “there are going to be pretty amazing, transformational things in applications and service delivery based on it,” Huff predicted.

Just as disruptive as flash has been for what can be accomplished in a data center for a set of services or applications, these next-generation non-volatile memories are going to do the same: at first living in the storage domain behind normal OS read/write APIs, and later, with LSI working alongside quite a few industry partners, making applications aware of the idea of a load or a store that persists, something that today does not go through an OS API and result in a non-volatile exchange.
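As a rough illustration of the distinction Huff is drawing, the sketch below contrasts the two programming models, using an ordinary file as a stand-in for a persistent-memory region (a simplification; real persistent-memory programming uses dedicated interfaces). The point is the difference between explicit read/write calls into the storage stack and plain memory operations on a mapped region.

```python
import mmap
import os

PATH = "/tmp/nvm_demo.bin"   # hypothetical file standing in for an NVM region

# Create a small backing file so the example is self-contained.
with open(PATH, "wb") as f:
    f.write(bytes(4096))

# (a) Today's storage domain: explicit read()/write() calls, each one a
# trip through the OS storage stack.
with open(PATH, "r+b") as f:
    f.seek(128)
    f.write(b"via write()")
    f.seek(128)
    print(f.read(11))

# (b) The load/store model next-generation NVM points toward: the region is
# mapped into the address space and updated with ordinary memory operations.
with open(PATH, "r+b") as f:
    mm = mmap.mmap(f.fileno(), 0)
    mm[256:267] = b"via a store"   # looks like a plain memory assignment
    print(mm[256:267])
    mm.flush()                     # the explicit persistence point
    mm.close()

os.remove(PATH)
```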

3. Deployment-Specific Scalability – Flexible Infrastructure Choice Points

If you are not that big, if you are a host or a service provider, or a private cloud inside an enterprise, you have to make all these things work at small-to-mid scale while still getting the value of all the innovation occurring in this ecosystem. That requires some cooperation between the application and the infrastructure when you deploy at small scale. A key “a-ha” for LSI was that a lot of the fundamental elements, the hardware that lives under the software and deployment architecture, are common to the two use cases.

4. Programmable Infrastructure – Automated Resource Orchestration


  • Hyperscale operators develop solutions internally for
    deployment, monitoring, orchestration, etc.
  • Enterprises and SMBs often use solutions from Microsoft
    and VMware
  • Significant traction and ecosystem innovation occurring around OpenStack (a minimal sketch of this kind of API-driven orchestration follows below)

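As a minimal, hedged sketch of what "programmable infrastructure" looks like in the OpenStack world, the snippet below boots a compute instance through the Compute (Nova) REST API rather than through a console. The endpoint, token and IDs are placeholders, not values from any real deployment.

```python
import requests

# Hypothetical values; real orchestration tooling obtains these from
# Keystone and the image/flavor catalogs.
NOVA_URL = "http://controller:8774/v2/TENANT_ID"   # Compute (Nova) v2 API
HEADERS = {"X-Auth-Token": "KEYSTONE_TOKEN",
           "Content-Type": "application/json"}

# Boot a server programmatically -- the kind of call deployment and
# orchestration tooling issues on an operator's behalf.
payload = {"server": {"name": "storage-node-01",
                      "imageRef": "IMAGE_UUID",
                      "flavorRef": "FLAVOR_ID"}}

resp = requests.post(f"{NOVA_URL}/servers", headers=HEADERS, json=payload)
print(resp.status_code, resp.text)
```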

Work in progress

 

Huff wrapped up his presentation by giving the audience a sneak peek at LSI’s future contributions to the ecosystem:


  • Open Vault Enhancements
  1. Flash accelerated expander board
  2. Disk reset and recovery integration to address false failures
  • Innovative Reference Architectures
  1. All-flash storage building block (“Flash Vault”)
  2. Rack level storage architecture for shared, pooled storage
  3. SSD reference design based on LSI’s SandForce controller
  • Software RAID


