Why Flash-as-Memory-Extension is the future of enterprise storage

For decades, storage read/write speed was the gating factor for data center performance. In the last three years that has changed with the advent of NAND flash storage, whose microsecond I/O response times are fast enough to keep up with modern processors.

As a result, the new performance gating factor is the storage area network (SAN). The answer, writes Wikibon CTO and co-founder David Floyer, is a new architecture: flash as memory extension (FaME). The core advantage of FaME over the common architectures in use today, as SiliconANGLE Senior Editor Paul Gillin recently wrote, is that it moves processors close enough to the flash storage to replace the SAN and its slower switch with an order-of-magnitude faster Peripheral Component Interconnect Express (PCIe) bus and switch.

Floyer writes that FaME provides the best balance of scalability, performance and cost for most enterprise compute loads, the three qualities that matter most in high-performance enterprise environments. At the center of the architecture are PCIe flash cards and a switching mechanism, normally a PCIe switch. The switch determines how many processors the system can hold, with 20 to 40 being the range supported by currently available technology. All processors can access all data with latency measured in microseconds, versus the milliseconds of a traditional SAN.
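
To make the latency gap concrete, the rough sketch below shows how access latency alone caps the I/O rate of a single synchronous thread. The latency figures are illustrative assumptions chosen for this comparison, not measurements from Floyer's report.

    # Back-of-the-envelope: how storage access latency caps the I/O rate of a
    # single synchronous thread. The latency values are illustrative assumptions.
    LATENCIES = {
        "Disk-backed SAN":          5e-3,    # ~5 ms per I/O
        "All-flash array over SAN": 500e-6,  # ~500 microseconds per I/O
        "PCIe flash (FaME-style)":  100e-6,  # ~100 microseconds per I/O
    }

    for name, latency_seconds in LATENCIES.items():
        max_iops = 1.0 / latency_seconds  # one outstanding request at a time
        print(f"{name:26s} {latency_seconds * 1e6:8.0f} us/IO -> {max_iops:9,.0f} IOPS per thread")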

Floyer compares FaME to four other common architectures that use all-flash or hybrid flash-and-disk storage:

  1. The two-node architecture with PCIe flash, used extensively in hyperscale computing, provides consistently low latency and can be extended with atomic write APIs, which halve the I/O needed for database applications (see the sketch after this list). It offers strong availability at reasonable cost but falls down on scalability. Floyer suggests it is most appropriate for small enterprise applications and for medium-sized companies.
  2. The distributed node architecture provides good latency within nodes and can be used with either low-cost spinning disks or flash drives. Its biggest advantage is cost control, with good performance for loads that can be parallelized completely. It falls down, however, on availability, making it a poor choice for the majority of enterprise compute loads that do not parallelize well, including transactional and decision support.
  3. The data-in-memory architecture has become increasingly popular, with systems such as SAP HANA gaining favor in the market. Data is distributed across DRAM on multiple server nodes, with messaging handled by the MPI protocol. This provides very low latency within each node but much longer latency between nodes, and the inter-node penalty grows non-linearly as nodes are added. The approach has been successful in data analytics. Availability, however, is a problem: because DRAM is not persistent, recovering from a power failure means reloading everything, and restoring large datasets into memory can take hours or days (reloading even 10TB at 1GB per second takes close to three hours). As a result, the architecture is best suited to applications with small datasets and a high tolerance for long recovery times.
  4. The all-flash array architecture supports large data sets with fast, predictable read/write latency. It can provide excellent storage services and can support heterogeneous clustered servers running multiple operating environments. Availability is the strong point of this architecture, with data sharing among applications facilitated by snapshots. Cost can be kept reasonable using data reduction technologies, although with some increase in latency. The flash is so fast that the network and protocols become the gating factors for performance, making performance the weak point.
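
The atomic-write point in item 1 comes down to counting page writes. Without an atomic-write guarantee from the device, database engines typically protect against torn pages by writing each dirty page twice (InnoDB's doublewrite buffer is the usual example); if the device guarantees that a page write completes entirely or not at all, the extra copy can be dropped. The sketch below is illustrative arithmetic under those assumptions, not code from Floyer's analysis.

    # Illustrative count of page-write I/O with and without an atomic write API.
    # The page count is hypothetical; the point is the 2:1 ratio.
    dirty_pages_flushed = 1_000_000  # dirty pages a database flushes in some interval

    # Conventional device: torn-page protection requires a doublewrite,
    # so every page is written twice (buffer copy plus in-place copy).
    writes_without_atomic = dirty_pages_flushed * 2

    # Device with atomic writes: a page write either lands fully or not at all,
    # so the protective extra copy is unnecessary.
    writes_with_atomic = dirty_pages_flushed

    print(f"without atomic writes: {writes_without_atomic:,} page writes")
    print(f"with atomic writes:    {writes_with_atomic:,} page writes")
    print(f"I/O reduction:         {1 - writes_with_atomic / writes_without_atomic:.0%}")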

FaME, by comparison, eliminates those network delays by bringing the servers close enough to the flash storage to connect at bus speeds. Because flash is persistent, there is no need to reload data after a power outage, and modern technologies make the architecture tolerant of component failures. Cost can be controlled through data reduction and, if the applications do not demand the very highest performance, through lower-cost, slower flash and server technologies. Snapshots also enable backup and recovery mechanisms that work well with ultra-fast flash storage, providing a low recovery point objective (RPO).
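
As a quick illustration of the cost lever, effective cost per gigabyte falls in direct proportion to the data reduction ratio. The price and ratio below are hypothetical placeholders, not figures from the report.

    # Illustrative effect of data reduction on effective flash cost.
    # Both numbers are hypothetical placeholders.
    raw_flash_cost_per_gb = 1.50  # assumed raw cost of PCIe flash, $/GB
    data_reduction_ratio = 4.0    # assumed combined compression + deduplication ratio

    effective_cost_per_gb = raw_flash_cost_per_gb / data_reduction_ratio
    print(f"effective cost: ${effective_cost_per_gb:.2f}/GB at {data_reduction_ratio:.0f}:1 data reduction")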

Read Floyer’s full analysis on the new Wikibon Premium site.

