What’s holding back containers? Red Hat evangelist has some ideas
To hear Scott McCarty say it, the biggest barriers to the adoption of software containers in the enterprise are as much political as they are technical.
McCarty (@fatherlinux), whose role as a solutions architect specializing in containers at Red Hat Inc. has him traveling the world to educate customers and evangelize the portable computing vessels, said containers have greatly sped up the process of developing software but have also introduced tricky licensing and organizational issues for enterprises to contend with.
Containers are lightweight, isolated user-space instances that run inside a virtual machine or directly on a server. They bundle together many of the elements needed to run a program into a single image that can be deployed quickly. Containers are popular with developers not only for their speed, but for the level of control they provide over the run-time environment. They also provide an additional layer of isolation between processes and external elements.
One big difference between containers and virtual machines is that containers don’t include a kernel, which is the core part of the operating system that handles essential things like memory allocation and input/output. “A container is essentially a file system without a kernel,” McCarty said in an interview with SiliconANGLE. “You would never break up an operating system in that way, but you do with containers.”
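McCarty's "file system without a kernel" point is easy to check for yourself: a process inside a container reports the host's kernel release, because the image supplies user-space files but no kernel of its own. A minimal sketch (the `docker run` invocation in the comment is illustrative, not part of the script):

```python
import os

# Every process sees the kernel it is actually running on.
# Run this script on the host, then again inside any container on
# that host (e.g. `docker run --rm -v "$PWD":/app python:3 python /app/kernel.py`):
# both invocations print the same kernel release, because a container
# image ships a file system but borrows the host's kernel.
release = os.uname().release
print(f"Kernel release: {release}")
```

This is also why a container cannot run a different kernel version than its host, only a different user-space on top of it.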
Containers also include elements of a complete system that were traditionally split among different IT disciplines, such as systems administration, storage management and network management. Some of these functions are bundled into the container, while others are shared with the underlying kernel. That means the question of who’s responsible for what can be fuzzy at best.
“Historically, operations controlled the bottom layer [of the processing stack], developers controlled the middle layer and programmers the top layer,” McCarty said. “Now you’re taking all three layers, cramming them into a file and distributing it everywhere. Do ops teams want to be paged in the middle of the night to fix an exploit in a container that they didn’t install? People are afraid to ask who controls what layer.”
Just like shipping containers
There are obvious benefits to using a containerized approach that combines run-time components into a single package that’s launched all at once. McCarty cited shipping containers as an analogy. “Imagine you’re loading a bunch of lamps onto a ship. If you break some of them on the dock, you have to wait for new ones to arrive,” he said. “But if you load at the factory, you can just go grab a new lamp and keep loading.” In the same way, containers encourage developers to gather together all the elements they need before launching an instance.
That may be more efficient, but it also presents some challenges to the status quo. “You can have 200 different layers being deployed with different people owning each,” he said. How are organizations figuring out roles and responsibilities in this new world? “People are making it up,” McCarty said. “I think that’s what’s holding back adoption” of containers.
So is legacy software, most of which was designed before containers were even a concept. Many open-source applications have already been modified to run in containers, but few legacy programs have made the switch. “The vendors are testing, but I have not seen software in production that is licensed specifically for containers,” McCarty said.
Re-architecting applications isn’t trivial. Self-contained programs – such as those that can run without installation – are a natural fit, but there are few of those. “If you have a big application that writes to multiple configuration files, then it wasn’t designed to run in a container,” McCarty said. That isn’t even taking into account legacy licensing terms, which weren’t developed to cover scores or hundreds of copies of an application running at the same time.
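One common way to make such an application container-friendly — offered here as a general pattern, not something McCarty prescribed — is to read configuration from environment variables instead of writing to multiple files, so the same image runs unchanged in every environment. A minimal sketch (the variable names are hypothetical):

```python
import os

def load_config() -> dict:
    """Read settings from the environment with defaults, rather than
    from scattered configuration files. An orchestrator can then inject
    per-environment values (e.g. `docker run -e DB_HOST=prod-db ...`)
    without rebuilding the image."""
    return {
        "db_host": os.environ.get("DB_HOST", "localhost"),
        "db_port": int(os.environ.get("DB_PORT", "5432")),
        "debug": os.environ.get("DEBUG", "false").lower() == "true",
    }

config = load_config()
print(config)
```

Because nothing is written back to the file system, the container stays immutable and any number of copies can run side by side from one image.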
So that’s a problem, but it’s also an opportunity to rethink the status quo, the Red Hat evangelist noted. In the same way that some operating system images can be booted directly from CD-ROMs, containers encourage developers to think about applications as self-contained units. While container technology has sometimes been criticized for opening up new security risks, McCarty believes a new generation of applications designed with containers in mind could actually be more secure than those that preceded them.
“It’s like running a program from a CD, but a lot easier,” he said. “It’s very secure, and a lot more convenient.”