Q&A: Red Hat opens the gates for open hybrid cloud innovation
There is new demand to run applications across different environments, whether that is bare metal, virtual machines, private or public cloud, edge computing, or a combination of these.
This new demand is driving Red Hat Inc., a leader in open-source solutions for the enterprise, to create an open hybrid cloud strategy. In fact, Red Hat is already adding more fuel to the development of this technology.
“We’re donating more than half a billion dollars to open hybrid cloud research,” said Matt Hicks (pictured), executive vice president of products and technologies at Red Hat Inc. “And part of the reason is that running cloud native services is changing, and that research element of open source is incredibly powerful. We want to make sure that’s continuing. But we’re also going to evolve our portfolio to support this same drive.”
Hicks spoke with John Furrier, host of theCUBE, SiliconANGLE Media’s livestreaming studio, during today’s Red Hat Summit. They discussed cloud native, edge computing and managed services. (* Disclosure below.)
[Editor’s note: The following content has been condensed for clarity.]
You guys are at the center of [cloud native] and always have been. Take us through the key trends you see on this wave for enterprises, and how is Red Hat carrying that through?
Hicks: If we look at what really emerged in 2020, we’ve seen three trends that we hope will carry through in 2021. The first is open hybrid cloud. [It] is really how customers are looking to adapt to change. They have to use what assets they have today on-premises. We’re [also] seeing a lot of public cloud adoption.
Edge computing is another area. So, I think we’re going to see a lot of push in edge computing as compute gets closer to users. [The third is] the choice aspect we’re seeing with CIOs. We often talk about technology as a choice, but I think the model of how they want to consume technology has been another really strong trend in 2020.
How do you see the existing workloads evolving and potentially new workloads that are emerging?
Hicks: Data-driven workloads, especially in the machine learning and artificial intelligence spectrum, [are] really critical. The reason those workloads are important is that when you’re running something at the edge, you also have to be able to make decisions at the edge, because that’s where your data is being generated. Coupled with that, we will see 5G change things, because you’re going to see more blending in terms of what you can draw closer to your data center to augment that.
Then, I think another area we’ll see is how you propagate that data through environments … being able to get that data at the edge and bring it back to locations where you might do more traditional processing, that’s going to be another really key space.
As you go from a common Linux platform to Kubernetes, and as this new abstraction layer and control-plane concept comes to the table, the need for a fully open platform seems to be a hot trend this year. How do you describe that?
Hicks: With Linux, [we] tried to span bare-metal and virtualized environments, and then eventually private and public cloud infrastructure as well. And in our world, that’s something that’s also open, as in Linux. But being able to run it anywhere … that’s expanded now to Kubernetes, for example. Kubernetes is taking that from single machines to cluster-wide deployments.
It’s really giving you that secure, flexible, fast innovation backbone for cloud native computing. [But] the balance there is it’s not just for cloud native. We’ve got to be able to run traditional and emerging workloads. And our goal is to let those things run wherever RHEL can. So you’re based on open technologies, [and] you can run them wherever you have the resources to run.
So, having that choice and ability to run anywhere but not being able to manage it can lead to chaos or sprawl. That’s why our investments across the management portfolio, from Insights to Red Hat Advanced Cluster Management to our cluster security capabilities and Ansible, focus on securing, managing and monitoring those environments.
In operating systems, you’ve got to have things instrumented, and now more than ever having the data is critical. So take us through your vision of Insights and how that translates.
Hicks: In our traditional support models, we don’t have a lot of insight until there’s an issue. For example, if we’ve fixed an issue for any customer on the planet, in a lot of cases we can know whether you’re going to hit that same issue. And so [with] that linkage, being able to understand environments better, we can be very proactive.
The second challenge with this is when things do break or fail, [how] to get that data. We want that to be the cleanest handshake possible … being able to get logs, get access so that with our engineering knowledge we can fix it. That’s another key part.
We can get ahead of those issues and then use our support teams and capabilities to keep things from breaking. So that’s really our goal … finding that balance where we’re using our expertise in building the software to help customers stay stable, instead of just being in a response mode when things break.
This brings up a point that Paul Cormier said earlier, and I want to get your reactions to this. He said, “Every CIO is now a cloud operator.” What does that actually mean, cloud operator?
Hicks: [As a CIO] you really get to make the choice of, do I want to differentiate my business by running it myself? Or is this just technology I want to consume? And am I going to consume a cloud native service? And [there are] other challenges that come with that, [such as] it’s an infrastructure not in your control.
But when you think about a CIO and the axes they’re making decisions on, [with cloud] there are more capabilities now. And I think this is really crucial to let the CIO home in on where they want to specialize. What do they want to consume? What do they really want to understand, differentiate and run? And to support this, we’re going to be launching three new managed cloud services.
One of those will be Red Hat OpenShift Streams for Apache Kafka — that data connectivity and the importance of it, and being able to connect apps across clouds, across data centers, using Kafka. [And], once you have that data, what do you do with it? [So] we’re launching a Red Hat OpenShift Data Science cloud service. And this is going to be optimized for understanding the data that’s brought in by streams. And then the last one for us is Red Hat OpenShift API Management … it’s really the overseer of how apps are going to talk to services.
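The app-to-app connectivity Hicks describes with OpenShift Streams follows Kafka’s standard producer/consumer pattern: an application at one site publishes records to a topic, and services elsewhere subscribe to it. As a minimal sketch of that pattern (the broker address, topic name and payload fields below are illustrative assumptions, not details from the interview), using the community kafka-python client:

```python
import json

# Illustrative: an edge site publishes sensor readings to a Kafka topic
# that a central data center consumes for heavier processing.

def encode_reading(site: str, value: float, ts: float) -> bytes:
    """Serialize an edge sensor reading as JSON bytes for a Kafka message."""
    return json.dumps({"site": site, "value": value, "ts": ts}).encode("utf-8")

def decode_reading(payload: bytes) -> dict:
    """Deserialize a Kafka message payload back into a dict."""
    return json.loads(payload.decode("utf-8"))

# Producing from the edge (requires a reachable broker; shown for shape only):
# from kafka import KafkaProducer
# producer = KafkaProducer(bootstrap_servers="broker.example.com:9092")
# producer.send("edge-telemetry", encode_reading("factory-7", 21.4, 1620000000.0))
# producer.flush()

if __name__ == "__main__":
    msg = encode_reading("factory-7", 21.4, 1620000000.0)
    print(decode_reading(msg)["site"])  # payload round-trips cleanly
```

A managed service such as OpenShift Streams would supply the broker endpoints and topic management; the application code on either side of the connection stays the same.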
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of Red Hat Summit. (* Disclosure: TheCUBE is a paid media partner for Red Hat Summit. Neither Red Hat, the sponsor for theCUBE’s event coverage, nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)