UPDATED 14:30 EDT / APRIL 09 2019

CLOUD

Get ready for the monetization of hybrid cloud, says Wikibon research

The battle for cloud computing supremacy seems to have come to an unexpected end, and the winner is … multiple clouds!

Wikibon Inc. research shows that large and midsized businesses are choosing hybrid solutions over solely using a private cloud or processing data on a single public cloud, opening opportunities for vendor monetization.

“The battle of public versus private cloud is not really the discussion,” said Stu Miniman (pictured, left), senior Wikibon analyst. “It’s about my applications; it’s about my data; it’s where things naturally are going to live.”

Miniman and fellow Wikibon analysts James Kobielus (center) and David Floyer (right) discussed hybrid cloud monetization and innovation in white space during a CUBE Conversation.

A hybrid cloud taxonomy

Hybrid cloud circa 2019 is more complex than the multi-vendor world of a few years ago. As Wikibon’s resident expert analyst and chief technology officer, Floyer recently wrote about taxonomies for the hybrid cloud, delineating cloud types with associated characteristics and planes.

Outlined from left to right, Floyer's cloud variants are: multiclouds; loosely coupled hybrid clouds; tightly coupled hybrid clouds; "true" distributed hybrid clouds; and autonomous standalone clouds. Under these are four characteristics: state, integration, automation, and hybrid applications, each of which increases from left to right. Finally, Floyer identifies five planes: network, data, control, application on any node, and autonomous edge state (see image below).

[Image: hybrid cloud taxonomies]

Data, data, everywhere

A critical difference between a few years ago and today is the increased distribution of data. “You have data at the edge; you have data in your own data centers; you have data in the clouds; you have data in SaaS clouds,” Floyer said.

Maximizing the value of this distributed data requires services. One model is to pull everything up to one cloud, but that is slow and costly. “A better model is to move your code and services to the data. And to do that you need hybrid cloud mechanisms … to coordinate where and how you send the code and the services,” Floyer stated.
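Floyer's contrast between the two models can be sketched in a few lines of code. This is an illustrative toy, not anything from Wikibon's research: the location names and record values are hypothetical, and real implementations would involve network calls rather than in-process function calls.

```python
# Toy contrast between "pull all data to one cloud" and "move code to the
# data." Locations and values are hypothetical, for illustration only.

DATA_LOCATIONS = {
    "edge-site-1": [3, 7, 2],
    "on-prem-dc": [10, 4],
    "public-cloud": [5, 6, 8],
}

def pull_model(locations):
    """Copy every record to one central place, then compute.

    Slow and costly: all raw data crosses the network.
    """
    central = []
    for records in locations.values():
        central.extend(records)  # full raw-data transfer
    return sum(central)

def push_model(locations, code):
    """Ship the code to each location; only small results travel back."""
    partials = [code(records) for records in locations.values()]
    return sum(partials)  # only per-site aggregates cross the network

# Both models compute the same answer; they differ in what moves.
assert pull_model(DATA_LOCATIONS) == push_model(DATA_LOCATIONS, sum)
```

The hybrid cloud mechanisms Floyer describes are what decide, in a real system, which location each piece of code should run against.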

Data provides the analytical power for artificial intelligence, and despite the complexity distributed data brings to the AI development and operations pipeline, "there is value to be gained from using distributed data for all manner of AI applications," Kobielus said. "The applications become more powerful because they can leverage more data; you can build more types of models to do more kinds of inferencing and so forth."

Training increasingly happens at the edge, where the AI lives and dies, according to Kobielus. This requires a highly versatile data plane able to work in a meshed multicloud environment. “That’s sort of the bleeding edge of [open-source independent service mesh] Istio and all those other things that we’re seeing coming into the mainstream of AI and data management in the multicloud,” he stated.

Businesses will need to run secure storage across multiple hybrid environments: at the edge, in local private clouds, and in public clouds. This creates opportunity for vendors.

"A key characteristic is going to be code," Floyer stated. "You will need different data solutions for different types of applications: different ones for AI, different ones for transactional processing. For example, you have a transactional system and you want at the same time to do a large amount of fraud detection. That's a different type of application, different databases. If you're at the edge you're going to be using far more time series databases, state databases, whereas you'd be using traditional SQL databases for your systems of record."
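Floyer's point, that each workload type maps to a different class of data store, can be made concrete with a small routing sketch. The mapping below is hypothetical and purely illustrative; it follows the examples in his quote, not any actual Wikibon model.

```python
# Hypothetical workload-to-datastore mapping, following Floyer's examples.
# Illustration only; not a real product catalog or Wikibon taxonomy.

WORKLOAD_TO_DATASTORE = {
    "system-of-record": "traditional SQL database",
    "fraud-detection": "analytical/AI database",
    "edge-telemetry": "time series database",
    "edge-session": "state database",
}

def pick_datastore(workload: str) -> str:
    """Route a workload to its datastore class.

    Unlisted workloads fall back to SQL in this toy example.
    """
    return WORKLOAD_TO_DATASTORE.get(workload, "traditional SQL database")

print(pick_datastore("edge-telemetry"))  # time series database
```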

All aboard the monetization train

This AI “monetization train,” as Floyer describes it, is based on data-derived statistical models that can be integrated into working applications. This is the “data science pipeline,” which takes source data, looks for predictive variables, then builds models to incorporate predictive variables — such as face recognition or natural language processing — that are trained and deployed into working applications to do inference.
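The pipeline described above, from source data through predictive variables to a deployed inference function, can be sketched end to end in miniature. Everything here is hypothetical: the transaction data is made up, and the "model" is a deliberately trivial threshold rule standing in for the face-recognition or NLP models the article mentions.

```python
# Minimal, hypothetical sketch of a data science pipeline: source data ->
# predictive variable -> trained model -> inference inside an application.
# The data and the threshold "model" are toys for illustration only.

# 1. Source data: (transaction amount, is_fraud label) pairs, made up.
source = [(12.0, 0), (15.0, 0), (900.0, 1), (14.0, 0), (1200.0, 1)]

# 2. Feature extraction: here the predictive variable is just the amount.
features = [(amount, label) for amount, label in source]

# 3. "Training": pick the amount threshold that best separates the labels.
def train(rows):
    best_threshold, best_correct = 0.0, -1
    for candidate, _ in rows:
        correct = sum((amt >= candidate) == bool(lbl) for amt, lbl in rows)
        if correct > best_correct:
            best_threshold, best_correct = candidate, correct
    return best_threshold

threshold = train(features)

# 4. Deployment: the trained model becomes an inference function that a
#    working application (e.g., fraud detection) calls per transaction.
def is_fraud(amount: float) -> bool:
    return amount >= threshold

print(is_fraud(1000.0), is_fraud(20.0))  # True False
```

The DevOps tooling the article discusses exists to industrialize each of these steps, versioning the data, the training runs, and the deployed models, rather than leaving them as one-off scripts.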

Supporting this statement, Wikibon statistics show a growing niche of DevOps tools for the data science pipeline that handle all of these processes, enabling teams of data scientists, data engineers, and subject matter experts to work together to industrialize the process of extracting value from data.

“[Monetization] is happening. We’re seeing all over the world in every industry that enterprises are setting up very much industrialized processes for incorporating data science into the very heart of application development,” Kobielus said.

Here’s the complete video interview, one of many CUBE Conversations from SiliconANGLE and theCUBE:

Photo: SiliconANGLE
