This is a sponsored post commissioned by Dell EMC. Sponsor Posts are identified paid posts that appear on all pages of SiliconANGLE.com, supporting editorial efforts.
Chief information officers understand that the operational model of the past – where “keeping the lights on” took a majority of the information technology department’s time – cannot continue. To keep up with the pace of change that business requires, companies must have a cloud strategy that allows IT to provide services in a flexible and agile manner at a reasonable cost. Converged infrastructure technologies are helping to streamline the operational support of infrastructure, and this is extended into more scalable and agile environments with the proliferation of new solutions including hyperconverged infrastructure (HCI), software-defined storage (SDS), and true private cloud.
Server SAN, HCI, and SDS
When Wikibon put forth the idea of Server SAN in 2014, it was early days in the maturation of hyperconverged infrastructure solutions. The primary value of HCI solutions is not the box/appliance; it is the simplicity of an offering that treats infrastructure as a pool rather than as individual devices. One of the biggest ongoing challenges of traditional storage arrays is the time and cost associated with migration – every new array must be loaded with data at initial install, and data must be migrated off the old array before it can be decommissioned. The fully burdened migration costs often run more than one third of the total cost of ownership.
With HCI, the first deployment is the last time a migration is needed: the distributed nature of the software-based solution allows data to be spread across nodes, so devices can be added to or removed from the pool without migrations. Leading HCI solutions in the marketplace today deliver a fully integrated stack, such as:
- Dell EMC VxRail: Dell EMC server + VMware vSphere hypervisor + VMware vSAN
- Server options (Dell and Lenovo OEM, Supermicro, Cisco UCS) + Nutanix AHV hypervisor (or third-party vSphere and Hyper-V) + Nutanix software
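To see why a pooled, software-based design avoids forklift migrations, consider a toy sketch of consistent hashing, one common technique for distributing data across a pool of nodes. This is an illustration only, not any vendor's actual placement algorithm; the node names and block counts are made up. Adding a node reassigns only the blocks that land on the new node, rather than forcing all data to move:

```python
import hashlib
from bisect import bisect


def _hash(key: str) -> int:
    # Stable hash so placement is deterministic across runs.
    return int(hashlib.md5(key.encode()).hexdigest(), 16)


class Ring:
    """Toy consistent-hash ring mapping data blocks onto pool nodes."""

    def __init__(self, nodes, vnodes=100):
        self.vnodes = vnodes          # virtual points per node, for even spread
        self._points = []             # sorted (hash, node) pairs on the ring
        for n in nodes:
            self.add_node(n)

    def add_node(self, node):
        for i in range(self.vnodes):
            self._points.append((_hash(f"{node}:{i}"), node))
        self._points.sort()

    def locate(self, block_id):
        # A block belongs to the first node point at or after its hash.
        h = _hash(block_id)
        idx = bisect(self._points, (h,)) % len(self._points)
        return self._points[idx][1]


blocks = [f"block-{i}" for i in range(10_000)]
ring = Ring(["node1", "node2", "node3"])
before = {b: ring.locate(b) for b in blocks}

ring.add_node("node4")  # scale out the pool by one node
after = {b: ring.locate(b) for b in blocks}

moved = sum(before[b] != after[b] for b in blocks)
# Only a minority of blocks move (all onto the new node); a traditional
# array refresh would migrate 100% of the data.
print(f"{moved / len(blocks):.0%} of blocks reassigned")
```

Every block that changed owner moved onto the new node; the rest of the pool is untouched. Real HCI and SDS products use more sophisticated placement (replication, rebalancing throttles, failure domains), but the no-migration property comes from the same pooling idea.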
From a maturity standpoint, Wikibon analyst David Floyer states in a recent research note that Server SAN functionality exceeds traditional storage arrays.
Software-defined storage is an overlapping category in the marketplace; HCI is a deployment choice for a set of SDS solutions (all HCI is SDS, but not all SDS can be deployed as HCI; HCI requires that the compute nodes support both applications and storage). As with HCI, SDS provides a simpler operational path for scaling infrastructure. Where SDS differs is that it is only the storage layer. Today, SDS is more prevalent than HCI for highly scalable deployments at large enterprises and service providers. Examples of scalable SDS are:
- Ceph (an open source solution, also available as Red Hat Ceph Storage; see interview with Verizon’s Chris Emmons, whose team uses Red Hat Ceph for NFV),
- Hedvig Distributed Storage Platform (see interview with Hedvig Founder/CEO Avinash Lakshman),
- Dell EMC ScaleIO (available as standalone SDS, and as a full stack solution with Dell EMC VxRack; see interview with Verizon Labs’ Larry Rau, whose team uses ScaleIO).
The storage industry is heavily fragmented because no single solution fits all use cases. In a whiteboard video (below), Dell EMC’s Chad Sakac and Wikibon’s Stu Miniman examined the current state of storage architectures to dive into where cloud storage, all-flash arrays, HCI, SDS, and traditional storage fit for legacy and cloud-native applications.
Customer study: Citi’s SDS-based private cloud
Large financial institutions treat IT as a critical piece of the business. As in other very large enterprises, IT acts as an internal service provider, delivering the services that business units require. Large financials also have much more robust security requirements than most companies.
Wikibon spoke with Dan Maslowski, who is global engineering head for storage as part of Citi Architecture and Technology Engineering (CATE). Prior to Citi, Dan had worked on a variety of software storage solutions. The lead architect of CATE, Greg Lavender, hired Dan to help “change the storage world by being a large customer with a leading strategy.” Citi has had a deep engineering level partnership with the Dell EMC ScaleIO team since 2014 (ScaleIO was acquired by EMC in 2013).
Citi’s architecture leverages Dell EMC’s ScaleIO to build a software-defined “pod” that delivers on scale, rapid deployment, and economics. From an economics standpoint, Citi’s internal financial analysts estimate that the solution is 30-40 percent less expensive than AWS. Citi does leverage public cloud for appropriate engagements, especially short-term ones; but for most deployments, economics and GRC (governance, risk, and compliance) concerns such as confirmed deletion point Citi toward private cloud.
Deployment speed at Citi is measured from purchase order to power-on. Traditionally this could take six months; Citi’s goal is under 15 days. The first pod, in 2015, was deployed in 31 days (three to four times faster than previous infrastructure). Citi can now meet the 15-day goal, and development environments can be spun up in minutes.
From a scalability standpoint, the current pods pair a storage pool (starting at 4PB and growing to 16PB per pod) with a compute pool over a dedicated storage network, using SDN with VMware NSX. The current storage architecture is a 4PB all-flash configuration (with two tiers of flash) and 1TB of RAM; overall storage costs are down 60 percent compared to Citi’s pre-pod storage array architectures.
Citi plans to continue its leadership in storage usage by adding containers and microservices into its architecture. Dan states that the continuous state of change means that they must build resiliency into applications, pairing agile engineering with agile infrastructure.
From box to pool to platform
The role of infrastructure is to be the platform for applications. As Wikibon’s True Private Cloud premise states, public cloud is the bar that IT should measure itself against:
> Wikibon believes that public cloud represents a quantum level of disruption to the manner in which IT Services are delivered to enterprises and users. As such, in order for in-house IT organizations to remain valuable and relevant to the business functions and processes they support, enterprises should consider delivering private cloud in the same light – that it should emulate the pricing, agility, low operational staffing and breadth of services of Public Cloud to the maximum degree possible.
HCI and SDS can be a foundational layer for moving along the continuum toward a True Private Cloud deployment. A good private cloud solution allows IT to shift a significant amount of operations to the platform and to vendor partners. This is different from traditional outsourcing: rather than pushing “my mess for less” to another company, IT can focus on the skills and requirements that are critical to the business and differentiated from what can be purchased as a service or other standard offering.
The management and orchestration of hybrid cloud solutions is an area getting a lot of focus. Suppliers of HCI and SDS solutions are in the mix to close the current management gap, both through in-house management tools and through integration with partner solutions.
The radical reinvention of the data center is to move from a mindset of hardware, boxes, and CapEx to one of software, pool/platform/service, and OpEx. The lessons and architectural paradigms of hyperscale companies are being delivered in solutions consumable by the enterprise through HCI and SDS. CIOs and IT staff should take a good look at these newer architectural solutions rather than simply continuing on existing refresh cycles.