UPDATED 12:30 EDT / FEBRUARY 01 2012

Node.js, Flash Technology Open Door to Undreamed-of Applications

The combination of fast development using next-generation platforms such as Node.js, big data, and a five-layer IT architecture that pairs processors with flash storage, backed by the latest advanced disk archiving systems, will allow companies to do exciting things, says Wikibon CTO David Floyer.

Interviewed by SiliconAngle.com founder and CEO John Furrier on SiliconAngle.tv from last week’s Node.js Summit, Floyer presented a vision of the future of IT based on the disruptive technologies just now entering the enterprise.

“We’ve gone through server and storage consolidation with virtualization,” Floyer said. “Now we’re going to see application consolidation and database consolidation, which will simplify the way businesses are run, reduce the cost of running those businesses, and allow them to do things they couldn’t even dream about before. So this is a very exciting time.”

At the front end, he says, companies will need tools like Node.js that enable fast development of ad hoc applications to explore the new kinds of questions being asked in big data environments.

“A lot of young, extremely talented programmers are tackling the problems of mobile computing, with vast numbers of instant messages flowing between people and from machine to machine, using Node.js,” he said. “They’re providing a framework for very high-speed transport of these messages, where speed is more important than absolute certainty of delivery.”
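That trade-off is easy to see in Node.js itself. The sketch below uses the runtime’s built-in dgram module to send a message over UDP; the port, host, and message format are illustrative assumptions. The send returns immediately, and nothing ever confirms the datagram actually arrived.

// fast-relay.js: a minimal fire-and-forget sender using Node's built-in dgram module.
// UDP trades certainty of delivery for speed: no handshake, no retransmission.
const dgram = require('dgram');

const socket = dgram.createSocket('udp4');
const message = Buffer.from(JSON.stringify({ from: 'device-42', body: 'ping' }));

// send() returns immediately; if the datagram is dropped in transit, nothing reports it.
socket.send(message, 41234, 'localhost', (err) => {
  if (err) console.error('local send error:', err);
  socket.close();
});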

That, however, presents problems in the business world, where absolute guarantees of delivery are important. The only way to ensure that no data is lost is to write it to some form of persistent storage as quickly as possible. Traditionally that has meant disk, but disk is a slow, narrow-bandwidth medium that places major constraints on applications, particularly in the new big data environment.
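What “write it to persistent storage as quickly as possible” looks like in practice can be sketched in a few lines of Node.js: the message is not acknowledged until an fsync confirms it has reached the device. The log file name is an illustrative assumption. On rotating disk that synchronous step is exactly the slow, narrow-bandwidth bottleneck described above; on flash it becomes far cheaper.

// durable-log.js: acknowledge a message only after it reaches persistent media.
const fs = require('fs');

const fd = fs.openSync('messages.log', 'a');

function persistMessage(msg) {
  fs.writeSync(fd, JSON.stringify(msg) + '\n'); // append the record to the log
  fs.fsyncSync(fd);  // block until the storage device confirms the write
  return true;       // only now is it safe to acknowledge the sender
}

persistMessage({ id: 1, body: 'order received' });
fs.closeSync(fd);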

The answer, he says, is a new IT architecture combining servers with built-in flash memory cards, allowing data capture at near-memory speeds on a persistent medium that can survive unplanned outages. This first layer, in his vision, is connected to a second, active management layer that moves the data to layer three, consisting of hybrid flash/disk systems. There the flash provides a persistent cache that can take in large amounts of data quickly and feed it onto high-capacity disk for permanent storage. This in turn is connected, through another active management layer, to an archiving layer of SATA storage that uses advanced methods to make archived information fast to retrieve when needed.
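One way to picture the active management layers is as a policy engine that demotes data down the stack as it ages. The toy Node.js sketch below captures only that idea; the tier names and age thresholds are invented for illustration and are not part of Floyer’s model.

// tiering.js: toy demotion policy for the active management layers.
// Tier names and age thresholds are invented for illustration.
const TIERS = [
  { name: 'server-flash',      maxAgeMs: 60 * 1000 },            // layer 1: capture
  { name: 'hybrid-flash-disk', maxAgeMs: 24 * 60 * 60 * 1000 },  // layer 3: active data
  { name: 'sata-archive',      maxAgeMs: Infinity },             // layer 5: archive
];

// Pick the highest tier whose age limit the data still satisfies.
function tierFor(lastAccessedMs) {
  const age = Date.now() - lastAccessedMs;
  return TIERS.find((tier) => age <= tier.maxAgeMs).name;
}

console.log(tierFor(Date.now()));                        // 'server-flash'
console.log(tierFor(Date.now() - 2 * 60 * 60 * 1000));   // 'hybrid-flash-disk'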

All the major pieces of this architecture are available, he says. One key part was supplied by the recent demonstration by HP and Fusion-io of a one-billion-IOPS system using flash cards installed directly in HP ProLiant servers to act as persistent memory for very large amounts of incoming data. Although the demonstration was limited in some respects, its implications were tremendous, as Floyer showed in his analysis published here on Wikibon.org. The system demonstrated becomes layer one of the new architecture and solves the problem of capturing very large volumes of data on a persistent medium to minimize the risk of data loss. The key is that the built-in flash acts as a kind of memory, bypassing the traditional IO stack to support atomic writes at very high speed.

Today HP and Fusion-io are leading the development of this critical, tightly coupled server/front-end system. Fusion-io has competition, however, including SolidFire and Virident, which provide equivalent or even higher-speed data capture. That opens the possibility of other partnerships with HP’s competitors in the server space.

Obviously this layer by itself will always have limited storage capacity; the HP demonstration purposely used 64-byte files because of that limitation. The top layer therefore needs to be connected, through an active management layer, to layer three, where large amounts of active data can be stored. This layer needs to combine the IO capabilities of flash with the high capacity and low cost of disk. While this layer could be entirely flash, EMC’s Project Lightning “is very interesting,” Floyer says. “They are putting cards into servers and have FAST, which is part of the active management of data. They can introduce cache coherence across the servers and layer three.”
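The essence of layer three, flash acting as a persistent cache in front of high-capacity disk, can be reduced to a toy write-through cache. In the Node.js sketch below, the class and the two Maps are invented stand-ins for the flash and disk tiers, not any vendor’s API.

// flash-cache.js: toy write-through cache. The fast tier absorbs reads;
// every write also lands on the slow, high-capacity tier.
class TieredStore {
  constructor(backing) {
    this.cache = new Map();  // stand-in for the flash tier
    this.backing = backing;  // stand-in for the disk tier
  }
  put(key, value) {
    this.cache.set(key, value);    // fast acknowledgement from "flash"
    this.backing.set(key, value);  // write-through to "disk"
  }
  get(key) {
    if (this.cache.has(key)) return this.cache.get(key);  // cache hit
    const value = this.backing.get(key);                  // miss: read from disk
    if (value !== undefined) this.cache.set(key, value);  // promote into the cache
    return value;
  }
}

const store = new TieredStore(new Map());
store.put('row:1', 'hot data');
console.log(store.get('row:1')); // served from the cache tier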

This is still a fairly expensive storage technology, however. Once data is no longer needed for active analysis, another active management layer should recognize that and move it down to a less expensive all-disk archiving layer built on SATA drives. Floyer says DataDirect Networks has an interesting product in its Web Object Scaler (WOS), a high-speed object-based storage system that can play a base role in that management scheme. And Cleversafe’s use of erasure coding to distribute data across a network guarantees retrieval of files after a hardware failure at a lower cost than RAID, saving the enterprise money while preserving the guarantee of data preservation that enterprises need.
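Erasure coding in its simplest form is XOR parity: any one lost shard, data or parity, can be rebuilt from the survivors without keeping a full mirror copy. The Node.js sketch below shows only that core idea; Cleversafe’s actual codes are more general, and the shard layout here is invented for illustration.

// erasure.js: the simplest possible erasure code, XOR parity across shards.
// Any single lost shard, data or parity, can be rebuilt from the others.
function xorBuffers(a, b) {
  const out = Buffer.alloc(a.length);
  for (let i = 0; i < a.length; i++) out[i] = a[i] ^ b[i];
  return out;
}

const shard1 = Buffer.from('AAAA');
const shard2 = Buffer.from('BBBB');
const parity = xorBuffers(shard1, shard2); // stored on a third node

// Suppose shard2 is lost to a hardware failure: rebuild it from the survivors.
const rebuilt = xorBuffers(shard1, parity);
console.log(rebuilt.toString()); // 'BBBB'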

