Enterprise use of containers — those simple, portable software wrappers for applications — is set to explode, with up to 80 percent of organizations expected to be in production by the end of this year.
Brian Gracely (pictured) follows this phenomenon closely, both in his role as a director of product strategy at Red Hat Inc. and as a prominent blogger and co-host of the popular Cloudcast podcast. Gracely was previously lead cloud computing analyst at Wikibon, also owned by SiliconANGLE Media. In this interview, Gracely explained how obstacles to container adoption are gradually crumbling and what rising container use will mean for information technology organizations and corporate enterprises.
What’s the outlook for container adoption this year?
We’ll see a lot of stability. We’re seeing a ton of customers who are rapidly putting stateful applications into containers and running them with Kubernetes in production. They’re seeing a lot of benefit from using one consistent platform. They can use their existing management tools, don’t have to pay the tax of virtualization and are finding that storage and networking are stable. Containers inherently have better portability than virtualization ever did.
There’s been a lot of interest in container management recently, with Docker open-sourcing its “containerd” runtime management software and a lot of recent activity and investment around Kubernetes. How does this change the equation for container users?
Just putting an application into a container doesn’t solve the majority of what’s needed by IT operations, such as how to cluster containers, network them, get data in and out and keep them secure. That’s why we’re seeing so much interest in management.
Over the past year, we’ve seen the community of contributors working on Kubernetes grow to be as large as all the other container platforms combined. The three big public cloud providers have standardized on it, which is a big deal because they don’t agree on anything. Last year we were explaining to customers the difference between Kubernetes and other options. Today, they’ve already made the decision to go with Kubernetes and they want to know how we’ll support them.
Lack of data persistence has been called a major shortcoming of containers. How will that gap be filled?
People are dealing with that issue in two ways. One is to keep their persistent data outside of the system. For example, in Red Hat OpenShift, we have a way to plug into external storage, such as a storage array, Ceph, GlusterFS or the cloud. We’re also seeing storage players become container-native with their platforms. Many concepts that used to be dealt with in physical boxes – like replication and failover – are going to software.
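The external-storage approach described here corresponds to the Kubernetes PersistentVolumeClaim mechanism: an application asks the platform for storage by size and class, and Kubernetes binds the claim to a backing volume provided by Ceph, GlusterFS, a storage array or a cloud disk. The sketch below builds such a manifest as plain data; the claim name, storage class and size are illustrative assumptions, not details from the interview.

```python
import json

# Hypothetical PersistentVolumeClaim manifest. The pod that mounts this
# claim never needs to know whether the bytes live on Ceph, GlusterFS or
# a cloud block device -- the platform resolves the binding.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},            # assumed claim name
    "spec": {
        "accessModes": ["ReadWriteOnce"],        # single-node read/write
        "resources": {"requests": {"storage": "1Gi"}},
        "storageClassName": "glusterfs-storage",  # assumed class name
    },
}

# Render as JSON, which Kubernetes accepts interchangeably with YAML.
print(json.dumps(pvc, indent=2))
```

Applying a manifest like this (for example with `kubectl apply -f`) is what lets stateful applications follow their data across environments, which is the portability point Gracely makes above.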
What’s the outlook for virtualization?
It’ll be a long time before virtualization goes away, but the reason customers are interested in this new model is because they’ve moved past the idea of only having a private cloud. They want to take advantage of public cloud resources as well. Containers are a way to take existing applications and isolate them. They’re a better technology for enabling portability.
There are quite a few container options out there right now. Is that a good or bad thing?
I think we’ll see the market standardize on a code base. The industry saw what happened with the trifurcation of the virtualization industry and they don’t want to repeat that.
What impact will the Open Container Initiative have?
One of the biggest complaints people have had about Docker is that it moves too fast. Docker calls it innovation, but customers call it instability. I think we’ll see a consistent container standard that will align much more with the timelines of the Linux kernel.
There’s a perception that security is a shortcoming of containers. Is that still valid?
Containers are inherently isolated, which makes them secure. But developers won’t always think about security, so platforms like OpenShift and Kubernetes are building in those protections. They’re addressing the question of how to take untrusted content and make sure that applications have the right permissions and have been scanned for vulnerabilities before they attach to storage.
When are containers not appropriate?
That list is becoming shorter and shorter. We see a lot of customers putting existing applications into containers, and it’s not just cloud-native any more. There are still going to be some complex clustering environments where containers aren’t appropriate. I haven’t yet heard of anyone doing SAP HANA or Oracle RAC (Real Application Clusters) in containers, because those systems have very low-latency or specific failure-and-recovery requirements. However, there are features being built into Kubernetes to address that.
What are the biggest customer reservations you still encounter about containers?
It’s a technology that blurs the lines between developers and operations people. That distinction used to be clear. People are still figuring out who in their organization should care about containers.
We also need to help customers get over the perception that containers are only for new, modern, microservices-based applications. They can move existing Linux applications over and they work. People are learning what practices work best for their business. There are parallels to virtualization, where it took a while for the population of experts to build up.
How does this growing maturity affect the market for platform-as-a-service?
The PaaS world went through a big transition over the last couple of years. It used to be that platforms like OpenShift, Cloud Foundry and Heroku all had their own ways to get an application onto a platform. The developer didn’t have to know anything about the platform.
But there were problems with that. Language support was limited and the mechanisms for deploying the application were proprietary. Docker is agnostic about language and framework. Kubernetes can run applications at scale. There are no language barriers, and applications can run across multiple clouds. People don’t have to worry about whether they can run their applications five or 10 years from now. The PaaS platforms that don’t natively support containers are going to struggle to adapt to the way developers want to work.
Do DevOps and containers go hand-in-hand?
You can do DevOps without using containers, but the beauty of using containers is that you have this one piece of technology for both developers and operations people. You have the same technology, and you can build a similar language around it. It forces those teams to work together.