UPDATED 13:34 EDT / APRIL 02 2019

INFRA

Composable infra reins in cost and scale of next-gen workloads

With cloud computing as ubiquitous as it is nowadays, it’s easy to forget that there’s still hardware out there. Behind the easy pay-as-you-go model, public cloud is really just someone else’s computer, hardware and all. And companies that run hardware — cloud providers or anyone with on-premises data centers — still struggle with cost, resource utilization, and scale. The need for resource optimization is growing as big data and artificial intelligence put heavy demands on hardware. Composable infrastructure, which improves resource utilization, is emerging as an answer.

Companies that invest in very expensive hardware typically settle for shockingly low utilization, according to Sumit Puri (pictured), co-founder and chief executive officer of Liqid Inc. “Typical resource utilization is very low — below 20%,” Puri said.

It’s easy to see why this happens by looking at companies whose workloads ebb and flow. Consider online stores that purchase hardware to match once-a-year holiday demand. Others, such as transportation providers, see vast swings in resource needs within a single day. They, too, typically buy infrastructure for peak hours, only to let it sit idle the rest of the time.

Why is resource utilization usually so poor? Because resources are deployed as static elements trapped inside of boxes, Puri pointed out. Getting resources out of their boxes and pooling them together yields better overall utilization than any single static server or storage array can deliver. “If we can take rack scale efficiency from 20% to 40%, our belief is we can do the same amount of work with less hardware,” he said.
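The arithmetic behind that claim is straightforward. Here is a minimal sketch using the 20% and 40% utilization figures quoted above; the workload and capacity numbers are made up purely for illustration:

```python
# Back-of-the-envelope: servers needed for a fixed workload at a given
# utilization level. The 20% and 40% figures come from Puri's remarks;
# the work and capacity units are arbitrary, illustrative values.
import math

def servers_needed(total_work_units: float, capacity_per_server: float,
                   utilization: float) -> int:
    """Servers required when each box delivers only `utilization` of its capacity."""
    effective = capacity_per_server * utilization
    return math.ceil(total_work_units / effective)

work = 1000.0      # arbitrary work units to be done
capacity = 100.0   # units one fully utilized server could handle

static = servers_needed(work, capacity, 0.20)   # static boxes at 20%
pooled = servers_needed(work, capacity, 0.40)   # pooled resources at 40%

print(static, pooled)  # 50 vs. 25: same work, half the hardware
```

Doubling effective utilization halves the number of boxes needed for the same work, which is the core of the cost argument.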

Puri spoke with Jeff Frick and David Floyer, co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, at theCUBE’s studio in Palo Alto, California. Liqid, created in 2013, offers a composable infrastructure platform that enables users to configure and manage physical, bare-metal server systems in seconds. They discussed the latest in composable infrastructure and its promise for data-heavy workloads like AI (see the full interview with transcript here). (* Disclosure below.)

This week, theCUBE spotlights Liqid in our Startup of the Week feature.

Big data, big hardware costs

The composable infrastructure market was estimated to be worth $616 million in 2018, according to research from MarketsandMarkets Research Private Ltd. It is expected to reach $5.1 billion by 2023, a compound annual growth rate of 52.6% over the forecast period.

One thing composable infrastructure aims to do is maximize resource utilization for big-data and AI workloads. It can also rein in the expensive infrastructure scale-out that comes along with them. A 2017 study by International Data Corp. showed composable infrastructure enabled 95% faster deployment at a U.S. genomics research institute that implemented it. It also resulted in 48% hardware savings and significantly fewer information technology service calls.

“The more data that you process, the more scale out you need,” Brian Pawlowski, chief technology officer of DriveScale Inc., told theCUBE last October. “Composable infrastructure is becoming a critical part of getting that under control — getting you the flexibility and manageability to allow you to actually make sense of that deployment in the IT center.”

Traditionally, servers are built by plugging devices into the sockets of a motherboard. Liqid’s composable infrastructure differs in that it consists of “pools” or “trays” of resources. Trays of central processing units, solid state drives, graphics processing units, etc., don’t plug into a motherboard. Instead, they connect into a fabric solution. The software layer on top allows users to dynamically configure servers at the bare-metal level without virtualization. Data-center resources go from statically configured boxes to dynamic, agile infrastructure.
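Conceptually, composing a server over a fabric means allocating devices from shared pools and attaching them to a bare-metal node, then releasing them when done. The sketch below is purely illustrative; the class and method names are hypothetical and do not represent Liqid’s actual software interface:

```python
# Hypothetical sketch of composing a bare-metal server from pooled devices.
# All names here are illustrative, not Liqid's real API.
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """Trays of devices on the fabric, not bound to any motherboard."""
    gpus: list = field(default_factory=lambda: ["gpu0", "gpu1", "gpu2", "gpu3"])
    ssds: list = field(default_factory=lambda: ["ssd0", "ssd1"])

    def allocate(self, kind: str, count: int) -> list:
        devices = getattr(self, kind)
        if len(devices) < count:
            raise RuntimeError(f"pool exhausted: {kind}")
        taken, remaining = devices[:count], devices[count:]
        setattr(self, kind, remaining)
        return taken

    def release(self, kind: str, devices: list) -> None:
        getattr(self, kind).extend(devices)

def compose_server(pool: ResourcePool, gpus: int, ssds: int) -> dict:
    """'Reprogram the fabric': attach pooled devices to a bare-metal node."""
    return {"gpus": pool.allocate("gpus", gpus),
            "ssds": pool.allocate("ssds", ssds)}

pool = ResourcePool()
node = compose_server(pool, gpus=2, ssds=1)  # no hypervisor in the data path
print(node)
pool.release("gpus", node["gpus"])           # free GPUs for the next workload
```

The point of the sketch is the lifecycle: devices move between a shared pool and composed nodes in software, with no one wheeling a cart to the rack.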

“You’re getting delivered infrastructure of any size, shape or ratio that you want, except that infrastructure is dynamic,” Puri said. “When we need another GPU in our server, we don’t send a guy with a cart to reprogram, to plug a device in — we reprogram the fabric and add or remove devices as required by the application. We give you all the flexibility that you would get from public cloud on the infrastructure that you are forced to own.”

Liqid provides a composable infrastructure solution that differs from other offerings on the market, according to Puri. It is essentially software-defined infrastructure. Its orchestration at the level of bare metal without virtualization is a step up from other composable infrastructure on the market, he added.

Virtualization is a drag

Liqid approaches composability somewhat differently than other market players, such as Hewlett Packard Enterprise Co. For starters, Liqid is disaggregated, while HPE sells composable resources as a converged solution, Puri explained.

The advantage of bare-metal composability is that it eliminates the need for virtualization, according to Puri. “We have a PCIe-based fabric up and down the rack,” he said. PCIe, or Peripheral Component Interconnect Express, is an interface standard that connects high-speed components. HPE, on the other hand, uses a fabric based on Ethernet networking technologies.

“There are no Ethernet SSDs; there are no Ethernet GPUs — at least today. So by using Ethernet as their fabric, they’re forced to do virtualization protocol translation, so they are not truly bare metal,” he said. Composability at the bare-metal level allows Liqid users to scale up more quickly in response to workload demands than virtualized solutions do.

Being disaggregated and bare-metal based gives Liqid users more freedom to choose their own components, according to Puri. “We’re an open ecosystem. We’re hardware agnostic. We allow our customers to use whatever hardware that they’re using in their environment today. Once you’ve gone down that HPE route, it’s very much a closed environment,” he said.

Doubling hardware ROI

Both cloud providers and companies building private clouds for internal customers make up Liqid’s user base. One customer, a studio in Southern California, uses Nvidia Corp.’s Tesla V100 advanced data center GPUs. During the day, its AI engineers use the GPUs; at night, the studio reprograms Liqid’s fabric and puts the same GPUs to work on rendering workloads.

“They’ve taken $50,000 worth of hardware and they’ve doubled the utilization,” Puri said. 

Liqid has partnered with Inspur on an AI-centric rack, which ties Inspur’s servers and storage to Liqid’s fabric. “It’s very difficult to move petabytes of data around, so what we enable is a composable AI platform,” Puri said. “Leave data at the center of the universe, reorchestrate your compute, networking, GPU resources around the data. That’s the way that we believe that AI is approached.”

Another customer, a transportation company, is working to cut resource waste with Liqid. “What happens with them at 5 p.m. is actually very different than what happens at 2 a.m.,” Puri said. “The model that they have today is a bunch of static boxes, and they’re playing a game of workload matching. If the workload that comes in fits the appropriate box, the world is good. If the workload that comes in ends up on a machine that’s oversized, then resources are being wasted.”

Liqid enables the company to examine each workload as it arrives and dynamically spin up a small, medium or large resource pool. When the workload is done, the resources are freed back into the general pool, changing the entire total-cost-of-ownership argument in the company’s environment.
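The right-sizing pattern Puri describes can be sketched in a few lines. The tier names, GPU counts and pool size below are hypothetical, chosen only to illustrate sizing nodes to workloads from a shared pool rather than matching workloads to fixed boxes:

```python
# Illustrative sketch of dynamic right-sizing from a shared pool.
# Tier definitions and the pool size are hypothetical values.
TIERS = {"small": 1, "medium": 4, "large": 8}  # GPUs per composed node

def pick_tier(gpu_demand: int) -> str:
    """Choose the smallest tier that covers the workload's GPU demand."""
    for name, gpus in sorted(TIERS.items(), key=lambda kv: kv[1]):
        if gpu_demand <= gpus:
            return name
    return "large"  # cap at the largest tier; a real system might queue or split

free_gpus = 16  # shared pool on the fabric

def run_workload(gpu_demand: int) -> str:
    """Compose a right-sized node, run, then free resources back to the pool."""
    global free_gpus
    tier = pick_tier(gpu_demand)
    free_gpus -= TIERS[tier]   # spin up a node sized to the workload
    # ... workload runs ...
    free_gpus += TIERS[tier]   # release resources back into the general pool
    return tier

print(run_workload(3))  # a 3-GPU job gets a "medium" node, not an 8-GPU box
```

Contrast this with the static model Puri criticizes, where a 3-GPU job landing on an oversized fixed box simply wastes the difference.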

“Public cloud is very easy, but when you start thinking about next-generation workloads — things that leverage GPUs and [field programmable gate arrays] — those instantiations on public cloud are just not very cheap,” Puri said. “We give you all of that flexibility that you’re getting on public cloud, but we save you money by giving you that capability on-prem.” 

Watch the complete video interview below, and be sure to check out more of SiliconANGLE’s and theCUBE’s CUBE Conversations. (* Disclosure: Liqid Inc. sponsored this segment of theCUBE. Neither Liqid nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
