UPDATED 15:52 EDT / JANUARY 14 2014

Wikibon defines Server SAN: the intersection of hyperscale, convergence and flash

The issues being tackled by web-scale giants today provide a glimpse of the challenges facing tomorrow’s enterprise data center. In particular, the need to manage large volumes of information with minimal delay is driving the emergence of Server SAN, a new infrastructure paradigm that Wikibon Principal Research Contributor Stu Miniman places at the intersection of three of the hottest trends in IT: hyperscale, convergence and flash.

Miniman defines Server SAN as an architecture that turns multiple direct-attached storage (DAS) devices into a pool of shared resources via a high-speed interconnect such as InfiniBand or low-latency Ethernet. Coherency is managed at the software layer, with a special emphasis on application availability, and both flash and traditional spinning disks are incorporated into the environment. The architecture may be implemented in a variety of ways depending on the organization’s requirements, but all deployments must decouple capacity management from the underlying hardware, support a wide range of workloads, and allocate throughput automatically down to the application level. By converging compute and storage, Server SAN collapses traditional information silos and enables the admin to migrate apps and data as needed, greatly reducing operational costs while extending the lifespan of storage infrastructure.
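The pooling idea at the heart of that definition can be pictured with a toy sketch. The Python below is purely illustrative — the class names and the greedy placement policy are assumptions for this example, not any vendor’s actual design — but it shows how a software layer can aggregate each node’s DAS capacity and place replicated volumes across nodes for availability:

```python
# Illustrative sketch of Server SAN-style capacity pooling: each node
# contributes its direct-attached storage, and a software layer decides
# where replicated volumes live. Names and policy are hypothetical.

from dataclasses import dataclass


@dataclass
class Node:
    name: str
    capacity_gb: int   # raw DAS capacity this node contributes to the pool
    used_gb: int = 0

    @property
    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb


class ServerSanPool:
    """Aggregates DAS from many nodes; placement is decided in software."""

    def __init__(self, replicas: int = 2):
        self.replicas = replicas
        self.nodes: list[Node] = []
        self.volumes: dict[str, list[Node]] = {}  # volume -> replica nodes

    def add_node(self, node: Node) -> None:
        self.nodes.append(node)

    @property
    def usable_gb(self) -> int:
        # Usable capacity shrinks by the replication factor.
        return sum(n.capacity_gb for n in self.nodes) // self.replicas

    def create_volume(self, name: str, size_gb: int) -> list[str]:
        # Greedy placement: put each replica on the node with the most
        # free space, never two replicas on the same node.
        candidates = sorted(self.nodes, key=lambda n: n.free_gb, reverse=True)
        chosen = [n for n in candidates if n.free_gb >= size_gb][: self.replicas]
        if len(chosen) < self.replicas:
            raise RuntimeError("not enough nodes with free capacity")
        for n in chosen:
            n.used_gb += size_gb
        self.volumes[name] = chosen
        return [n.name for n in chosen]


pool = ServerSanPool(replicas=2)
for i, cap in enumerate([1000, 1000, 500]):
    pool.add_node(Node(f"node{i}", cap))

print(pool.usable_gb)                      # 1250
print(pool.create_volume("vm-data", 200))  # ['node0', 'node1']
```

Decoupling capacity from hardware falls out naturally here: adding a node to the pool grows `usable_gb` without touching existing volumes, which is the property the architecture trades on.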

This new approach to data center management is billed as simpler and more scalable than traditional architectures, but it does not yet offer all the enterprise features required to move beyond the realm of Internet giants like Google and Facebook.

Server SAN also suffers from a lack of proven use cases, but that will change as vendors continue to add more capabilities and enhance the cost-effectiveness of their offerings. Nutanix, SimpliVity and Scale Computing are leading the way in the so-called hyperconvergence segment, while Nexenta and Sanbolic are collaborating with hardware partners to offer their software solutions as part of fully integrated appliances. And in the data systems category, OpenStack, Gluster and AWS are pushing the envelope on the protocols and technologies that power it all.

Legacy storage architectures are becoming obsolete, Miniman writes, which means organizations need to start exploring new paradigms for managing data and identifying how they could fit into future deployments. He predicts that a sizable percentage of applications will run in Server SAN environments by the end of the decade. To help companies plan for that shift, Miniman outlines the key characteristics users should compare and contrast when evaluating Server SAN solutions:

  • Environment support
    • Virtualization (VMware, Hyper-V, KVM)
    • Physical (non-virtualized environments)
    • Media support (flash-only or mixed flash and disk)
    • Cloud integration
  • Scalability
  • Management/API support for the storage layer
  • Simplicity



photo credit: kevin dooley via photopin cc
