UPDATED 03:00 EDT / MARCH 13 2017

Breaking up is hard to do: busting the handcuffs of traditional data storage

Premise

The largest and most successful Web companies in the world have proven a new model for managing and scaling a combined architecture of compute and storage. If you’ve heard it once, you’ve heard it a hundred times: “The hyperscale guys don’t use traditional disk arrays.”

Giants such as Facebook Inc. and Google Inc. use locally attached, distributed storage to solve massive data problems. The key differentiation of this new architecture is extreme scalability and simplicity of management, enabled by automation. Over the years, Wikibon has referred to this approach as “Software-led Infrastructure,” which is analogous to so-called Software-Defined Storage.

Excluding the most mission-critical online transaction processing markets served by the likes of Oracle Corp. and IBM Corp.’s DB2, it’s becoming clear this software-led approach is poised to penetrate mainstream enterprises because it is more cost-effective and agile than traditional infrastructure. Up until recently, however, such systems have lacked the inherent capabilities needed to service core enterprise apps.

This dynamic is changing rapidly. In particular, Microsoft Corp. with Azure Stack and VMware Inc. with its vSAN architecture are demonstrating momentum with tightly integrated and automated storage services. Linux, with its open source ecosystem, is the remaining contender to challenge VMware and Microsoft for mainstream adoption of on-premises and hybrid information technology infrastructure, including data storage.

Upending the ‘iron triangle’ of arrays

Peter Burris, Wikibon’s head of research, recently conducted research that found IT organizations suffer from an infrastructure “iron triangle” that is constraining IT progress. According to Burris, the triangle comprises entrenched IT administrative functions, legacy vendors and technology-led process automation.

In his research, Burris identified three factors IT organizations must consider to break the triangle:

  • Move from a technology to a service administration model;
  • Adopt True Private Cloud to enhance real automation and protect intellectual property that doesn’t belong in the cloud; and
  • Elevate vendors that don’t force false “platform” decisions; technology vendors have a long history of “adding value” by renaming and repositioning legacy products under vogue technology marketing umbrellas.

The storage industry suffers from entrenched behaviors as much as any other market segment. Traditional array vendors are trying to leverage the iron triangle to slow the decline of legacy businesses while at the same time ramping up investments in newer technologies, both organically and through acquisition. The Linux ecosystem – the lone force that slowed down Microsoft in the 1990s – continues to challenge these entrenched IT norms and is positioned for continued growth in the enterprise.

But there are headwinds.

In a recent research note published on Wikibon (login required), analyst David Floyer argued there are two main factors contributing to the inertia of traditional storage arrays:

  • The lack of equivalent functionality for storage services in this new software-led world; and
  • The cost of migrating off existing enterprise storage arrays – aka the iron triangle.

Linux, Floyer argues, is now ready to grab its fair share of mainstream, on-premises enterprise adoption directly as a result of newer, integrated functionality that is hitting the market. As these software-led models emerge in an attempt to replicate cloud, they inevitably will disrupt traditional approaches just as the public cloud has challenged the dominant networked storage models such as Storage Area Network and Network-Attached Storage that have led the industry for two decades.

Linux is becoming increasingly competitive in this race because it is allowing practitioners to follow the game plan Burris laid out in his research, namely:

1) Building momentum on a services model (i.e., delivering robust enterprise storage management services that are integrated into the OS);

2) Enabling these services to be invoked by an orchestration/automation framework (e.g., OpenStack, OpenShift) or directly by an application leveraging microservices (i.e., True Private Cloud), as sketched in the example following this list; and

3) Working with vendors that have adopted an open ecosystem approach (i.e., they’re not forcing false platform decisions; rather, they’re innovating and integrating into an existing open platform). A scan of the OpenStack website gives a glimpse of some of the customers attempting to leverage this approach.
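
To make the second point more concrete, here is a minimal sketch of what invoking a storage service programmatically might look like. The endpoint, payload schema and field names are hypothetical placeholders, not any vendor’s actual API; the point is simply that capacity, data reduction and quality of service become parameters of a request issued by an application or automation framework, rather than settings buried in an array console.

```python
# Hypothetical example: an application (or an orchestration workflow) requesting
# a volume from an OS-integrated storage service over a simple REST microservice.
# The endpoint, payload schema and field names below are illustrative placeholders,
# not any vendor's real API.
import requests  # third-party HTTP client (pip install requests)

STORAGE_SERVICE = "http://storage-svc.internal:8080/v1/volumes"  # hypothetical endpoint

def provision_volume(name: str, size_gb: int, min_iops: int, max_iops: int) -> dict:
    """Request a thin-provisioned, reduced, QoS-bounded volume programmatically."""
    payload = {
        "name": name,
        "size_gb": size_gb,
        "thin_provisioned": True,                              # capacity allocated on write
        "data_reduction": ["compression", "deduplication"],    # reduction enabled per volume
        "qos": {"min_iops": min_iops, "max_iops": max_iops},   # a floor and a ceiling
    }
    resp = requests.post(STORAGE_SERVICE, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    volume = provision_volume("orders-db", size_gb=500, min_iops=2000, max_iops=10000)
    print("provisioned:", volume)
```

In a True Private Cloud model, a call like this would typically be issued by the orchestration framework or by the application itself, not by a storage administrator working a console.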

Floyer’s research explores some of the key services required by Linux to challenge for market leadership, with a deeper look at the importance of data reduction as a driver of efficiency and cost reduction for IT organizations.

Types of services

In his research, Floyer cited six classes of storage service that enterprise buyers have come to expect and that have traditionally been available only within standalone arrays. He posited that these services are changing rapidly, some through the introduction of replacement technologies and others through increasingly tight integration into the Linux operating system, which will speed adoption. A summary of Floyer’s list of storage services follows:

  • Cache management to overcome slow hard disk drives, which are being replaced by flash (paired with data reduction techniques) to improve performance and facilitate better data sharing
  • Snapshot management for improved recovery
  • Storage-level replication, which is changing due to the effects of flash and high-speed interconnects such as 40Gb or 100Gb links. Floyer cited WANdisco’s Paxos technology and the SimpliVity advanced file system (SimpliVity was acquired by Hewlett Packard Enterprise) as technologies supporting this transformation.
  • Encryption, which has traditionally been confined to disk drives, is overhead-intensive and leaves data in motion exposed. Encryption has been a fundamental capability within the Linux stack for years, and ideally all data would be encrypted; however, encryption overhead has historically been too cumbersome. With the advent of graphics processing units and field-programmable gate arrays from firms such as Nvidia Corp., that overhead is minimized, enabling end-to-end encryption with the application and database, not the disk drive, as the focal point for both encryption and decryption.
  • Quality of Service, which is available in virtually all Linux arrays but typically only sets a floor under which performance may not dip. Traditional approaches to QoS lack the granularity to set ceilings, for example, or to allow bursting programmatically through a complete and well-defined REST API that better serves the needs of individual applications versus a one-size-fits-all approach. NetApp Inc.’s SolidFire has, from its early days, differentiated in this manner and is a good example of a true software-defined approach that allows provisioning of both capacity and performance dynamically through software. Capabilities like this are important for automating the provisioning and management of storage services at scale, a key criterion for replicating the public cloud on-premises.
  • Data reduction – Floyer points out in his research that there are four areas of data reduction that practitioners should understand: zero suppression, thin provisioning, compression and data de-duplication. Data sharing is a fifth and more nuanced capability that will become important in the future. According to Floyer:

To date… “The most significant shortfall in the Linux stack has been the lack of an integrated data reduction capability, including zero suppression, thin provisioning, de-duplication and compression.”

According to Floyer, “This void has been filled by the recent support of Permabit’s VDO data reduction stack (which includes all the data reduction components) by Red Hat.”

VDO stands for Virtual Data Optimizer. In a recent conversation with Wikibon, Permabit Chief Executive Tom Cook explained that as a Red Hat Technology partner, Permabit obtains early access to Red Hat software, which allows VDO testing and deep integration into the operating system, underscoring Floyer’s argument.
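
To ground the terminology, here is a conceptual sketch of three of the four techniques Floyer lists: zero suppression, de-duplication and compression applied to fixed-size blocks. (Thin provisioning is an allocate-on-write capacity policy rather than a data transform, so it isn’t shown.) This illustrates the ideas only; it says nothing about how VDO actually implements them.

```python
# Conceptual sketch of block-level data reduction: zero suppression,
# de-duplication and compression applied to fixed-size blocks.
# Illustration of the techniques only, not how VDO implements them.
import hashlib
import zlib

BLOCK_SIZE = 4096  # 4 KiB blocks, a common unit for block-level data reduction

def reduce_blocks(data: bytes):
    """Return stored blocks, a reconstruction layout and simple reduction stats."""
    seen = {}      # block fingerprint -> index into `stored` (de-duplication table)
    stored = []    # unique blocks actually kept, individually compressed
    layout = []    # per-block instructions needed to reconstruct the original data

    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        if block.count(0) == len(block):
            layout.append(("zero", len(block)))       # zero suppression: store nothing
            continue
        fp = hashlib.sha256(block).hexdigest()
        if fp in seen:
            layout.append(("dup", seen[fp]))          # de-duplication: reference a prior block
            continue
        seen[fp] = len(stored)
        stored.append(zlib.compress(block))           # compression of the unique block
        layout.append(("new", seen[fp]))

    raw = len(data)
    kept = sum(len(b) for b in stored)
    ratio = raw / kept if kept else float("inf")
    return stored, layout, (raw, kept, ratio)

if __name__ == "__main__":
    sample = (b"\x00" * (BLOCK_SIZE * 4)              # all-zero blocks -> zero suppression
              + b"same old log line\n" * 1000         # repetitive text -> compression
              + (b"ABCD" * 1024) * 8)                 # identical 4 KiB blocks -> de-duplication
    _, _, (raw, kept, ratio) = reduce_blocks(sample)
    print(f"raw={raw} bytes, stored={kept} bytes, reduction ~{ratio:.1f}:1")
```

On the sample data the techniques compound: runs of zeros are suppressed outright, repeated blocks are stored once, and what remains is compressed.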

Why is this relevant? The answer is cost.

The cost challenge

Data reduction is a wonky topic for chief information officers, but the reason it’s so important is that despite the falling cost per bit, storage remains a huge expense for buyers, often accounting for between 15 and 50 percent of IT infrastructure capital expenditures. As organizations build open hybrid cloud architectures and attempt to compete with public cloud offerings, Linux storage must not only be functionally robust, it must keep getting dramatically cheaper.

The storage growth curve, which for decades has marched to the cadence of Moore’s Law, is reshaping, with data volumes growing at exponential rates. IoT, M2M communications and 5G will only serve to accelerate this trend.

Data reduction services have been a huge tailwind for more expensive flash devices and are fundamental to reducing costs going forward. Traditionally, the common way Linux customers have achieved these efficiencies has been to acquire data reduction services (e.g., compression and de-dupe) through an array, which may help lower the cost of the array but perpetuates the iron triangle. And longer term, it hurts the overall cost model.
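
A back-of-the-envelope illustration of why this matters to the cost model follows. The dollar figures and reduction ratios are hypothetical assumptions chosen only to show the arithmetic: effective cost per usable terabyte falls in direct proportion to the reduction ratio, which is why reduction has been such a tailwind for flash.

```python
# Illustrative arithmetic only: the prices and reduction ratios below are
# hypothetical assumptions, not market data. The point is that the effective
# cost per usable terabyte scales inversely with the reduction ratio.
def effective_cost_per_tb(raw_cost_per_tb: float, reduction_ratio: float) -> float:
    """Cost per terabyte of logical (usable) data after data reduction."""
    return raw_cost_per_tb / reduction_ratio

for label, raw_cost, ratio in [
    ("flash, no reduction",  400.0, 1.0),   # hypothetical $/TB
    ("flash, 3:1 reduction", 400.0, 3.0),
    ("disk, no reduction",   100.0, 1.0),
]:
    print(f"{label:22s} -> ${effective_cost_per_tb(raw_cost, ratio):,.0f} per usable TB")
```

Under these assumed numbers, flash with a 3:1 reduction ratio lands in the same neighborhood as unreduced disk, which is the economic argument the data reduction vendors are making.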

As underscored in Floyer’s research, the modern approach is to access sets of services that are integrated into the OS and delivered via Linux within an orchestration/automation framework that can manage the workflow. Some cloud service providers (outside of the hyperscale crowd) are sophisticated and have leveraged open-source services to achieve hyperscale-like benefits. Increasingly, these capabilities are coming to established enterprises via the Linux ecosystem and are achieving tighter integration, as discussed earlier.

More work to be done

Wikibon community data center practitioners typically cite three primary areas that observers should watch as indicators of Linux maturity generally and software-defined storage specifically:

1. The importance of orchestration and automation

To truly leverage these services, a management framework is necessary to understand which services have been invoked, to ensure recovery is in place if needed, and to give confidence that software-defined storage and its associated services can deliver consistently in a production environment.
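
As a minimal sketch of that idea, the fragment below shows a workflow that records every storage service it invokes and unwinds the steps in reverse order if a later one fails. The step names and actions are invented for illustration; real frameworks such as OpenStack or OpenShift track far more state, but the record-and-recover pattern is the same.

```python
# Minimal sketch of the tracking-and-recovery idea: an automation framework
# records every storage service it invokes so it can report state and unwind
# cleanly on failure. Step names and actions are invented for illustration.
from typing import Callable, List, Tuple

class StorageWorkflow:
    """Tracks invoked services and rolls them back in reverse order on failure."""

    def __init__(self) -> None:
        self.invoked: List[Tuple[str, Callable[[], None]]] = []

    def run_step(self, name: str, do: Callable[[], None], undo: Callable[[], None]) -> None:
        do()
        self.invoked.append((name, undo))  # remember what ran and how to undo it

    def rollback(self) -> None:
        for name, undo in reversed(self.invoked):  # last-in, first-out
            print(f"rolling back: {name}")
            undo()
        self.invoked.clear()

def fail_attach() -> None:
    raise RuntimeError("attach failed")  # simulate a step that goes wrong

if __name__ == "__main__":
    wf = StorageWorkflow()
    try:
        wf.run_step("create volume", lambda: print("volume created"), lambda: print("volume deleted"))
        wf.run_step("enable data reduction", lambda: print("reduction enabled"), lambda: print("reduction disabled"))
        wf.run_step("attach to host", fail_attach, lambda: print("detached"))
    except RuntimeError as err:
        print(f"failure: {err}")
        wf.rollback()  # the framework knows exactly what was invoked and unwinds it
```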

Take encryption together with data reduction as an example. Data must be reduced before it is encrypted, because encryption deliberately eliminates the very patterns that data reduction techniques such as de-duplication are trying to find. This example illustrates the benefits of integrated services. Specifically, if something goes wrong during the process, the system must have deep knowledge of exactly what happened and how to recover. The ideal solution in this example is to have encryption, de-dupe and compression integrated as a set of services embedded in the OS and invoked programmatically by the application where needed and where appropriate.
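
A toy demonstration of that ordering constraint follows, assuming a stand-in keystream cipher rather than production cryptography: compressing before encrypting preserves nearly all of the reduction benefit, while encrypting first leaves essentially nothing for compression (or de-duplication) to find.

```python
# Toy demonstration of why data must be reduced before it is encrypted.
# The "encryption" here is a stand-in keystream XOR (not production crypto);
# any strong cipher has the same effect of destroying redundancy.
import hashlib
import zlib

def keystream_xor(data: bytes, key: bytes) -> bytes:
    """XOR data with a SHA-256-derived keystream (illustrative only)."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

key = b"demo-key"
plaintext = b"the same log line repeats over and over\n" * 2000  # highly redundant

# Correct order: reduce first, then encrypt.
reduced_then_encrypted = keystream_xor(zlib.compress(plaintext), key)

# Wrong order: encrypt first, then try to reduce. The ciphertext looks random,
# so compression (and de-duplication) find nothing to exploit.
encrypted_then_reduced = zlib.compress(keystream_xor(plaintext, key))

print(f"plaintext:              {len(plaintext):>8} bytes")
print(f"compress, then encrypt: {len(reduced_then_encrypted):>8} bytes")
print(f"encrypt, then compress: {len(encrypted_then_reduced):>8} bytes (barely shrinks)")
```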

2. Application performance

Wikibon believes that replicating hyperscaler-like models on-prem will increasingly require integrating data management features into the OS. Technologists in the Wikibon community indicate that the really high-performance workloads will move to a software-led environment leveraging emerging non-volatile memory technologies and protocols such as NVMe and NVMe over Fabrics (NVMf). Many believe the highest-performance workloads will go into these emerging systems and, over time, eliminate what some call the “horrible storage stack,” meaning the overly cumbersome storage protocols that have been forged into the iron triangle for years. This will take time, but the business value effects could be overwhelming, with game-changing performance and low latencies as disruptive to storage as high-frequency trading has been to Wall Street, ideally without the downside.

3. Organizational issues

As Global 2000 organizations adopt this new software-led approach, there are non-technology-related issues that must be overcome. “People, process and technology” is a bit of a bromide, but we hear it all the time: “Technology is the easy part…. People and process are the difficult ones.” The storage iron triangle will not be easily disassembled. The question remains: Will the economics of open source and business model integrations such as those discussed here overwhelm entrenched processes and the people who own them?

On the surface, open-source services are the most likely candidates to replicate hyperscale environments because of the collective pace of innovation and economic advantages. However, to date, a company such as VMware has demonstrated that it can deliver more robust enterprise services faster than the open-source alternatives, though not at hyperscale.

History is on the side of open source. If the ecosystem can deliver on its cost, scalability and functionality promises, it’s a good bet that the tech gap will close rapidly and economic momentum will follow. Process change and people skills will likely be more challenging.

(Disclosure: Wikibon is a division of SiliconANGLE Media Inc., the publisher of Siliconangle.com. Many of the companies referenced in this post are clients of Wikibon. Please read my Ethics Statement.)

Photo: Wikipedia
