UPDATED 10:13 EST / JUNE 24 2016

After the hype: Where containers make sense for IT organizations

Container software and its related technologies are on fire, winning the hearts and minds of thousands of developers and catching the attention of hundreds of enterprises, as evidenced by the huge number of attendees at this week’s DockerCon 2016 event.

The big tech companies are going all in. Google, IBM, Microsoft and many others were out in full force at DockerCon, scrambling to demonstrate how they’re investing in and supporting containers. Recent surveys indicate that container adoption is surging, with legions of users reporting they’re ready to take the next step and move from testing to production. Such is their popularity that SiliconANGLE founder and theCUBE host John Furrier was prompted to proclaim that, thanks to containers, “DevOps is now mainstream.” That will change the game for those who invest in containers while causing “a world of hurt” for those who have yet to adapt, Furrier said.

What do containers do?

Although interest has spiked only in the last couple of years with the emergence of companies like Docker Inc. and CoreOS Inc., containers have actually been around since the early 2000s. They were created to solve the problem of getting software to run reliably after being moved from one computing environment to another, because major problems can arise when the software development environment is not identical to the production environment.

“You’re going to test using Python 2.7, and then it’s going to run on Python 3 in production and something weird will happen,” said Solomon Hykes, founder and CTO of Docker, in an interview with CIO.com. “Or you’ll rely on the behavior of a certain version of an SSL library and another one will be installed. You’ll run your tests on Debian and production is on Red Hat and all sorts of weird things happen.”

Containers overcome this problem by taking the entire runtime environment, meaning the app itself plus all of its dependencies, libraries, binaries and configuration files, and bundling it into a single package that runs consistently on any host with a compatible container runtime. In this way containerization shares some similarities with virtualization, but there are also big differences. The most important is that a virtual machine (VM) includes an entire guest operating system in addition to the application itself. With virtualization, a physical server running three VMs therefore has a hypervisor (for management) plus three full operating systems running on it.

On the other hand, “containers are lightweight and do not include a full copy of the OS, only the application and its dependencies,” said Al Hilwa, program director of application development software at International Data Corp. (IDC). As such, three containerized apps can share a single operating system, which means significantly fewer resources are consumed, Hilwa explained. A VM, with its own operating system, can be several gigabytes in size, while a container is typically just a few megabytes. As a result, many more containers than VMs can be hosted on a single server, and containerized apps boot up much more rapidly (seconds, as opposed to several minutes for VMs).
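
To make Hykes’s example concrete, here is a minimal sketch of how a pinned container image carries its runtime with it. It assumes Docker is installed locally and uses the official python:2.7 image purely as an illustration:

    # The Python interpreter and SSL library travel inside the image, so the
    # result is the same on a developer's Debian laptop or a Red Hat server.
    docker run --rm python:2.7 python --version
    docker run --rm python:2.7 python -c "import ssl; print(ssl.OPENSSL_VERSION)"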

The building blocks of cloud native infrastructure

Wei Dang, head of product, CoreOS Inc.

Up until now, container adoption has primarily been focused on packaging and isolating applications for easier software development and testing, explained Wei Dang, head of product at CoreOS (pictured, right). This is just the first step in a much larger transition to cloud-native architecture, in which applications are delivered as microservices in containers that run across distributed infrastructure.

“Cloud native infrastructure provides better security, scalability, and reliability, and it reduces operational complexity through automation,” Dang said.

Of course, containers are just one of several distributed systems components that make the wheels of cloud native infrastructure go round. Separate components are also needed to handle things like orchestration, networking and storage, which is why we’re hearing so much about technologies like Kubernetes, an open-source container orchestration tool originally built by Google that helps users build and deploy both new and legacy applications in production.

Tools like Kubernetes play an important role in boosting container adoption, Dang explained, as they manage tasks like automating application scheduling and workload placement in clusters.
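
As a rough illustration of that scheduling and scaling (assuming a running Kubernetes cluster and kubectl; the deployment name “web” and the nginx image are hypothetical, and flag support varies by kubectl version):

    # Ask Kubernetes for three replicas of a containerized web server; the
    # scheduler decides which cluster nodes they land on.
    kubectl run web --image=nginx --replicas=3
    # Scaling the workload later is a one-line change.
    kubectl scale deployment/web --replicas=10
    # Shows the node each replica was placed on.
    kubectl get pods -o wide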

“They further simplify operations tasks,” Dang said. “Building a hybrid cloud with container infrastructure that spans on-premises and public cloud environments allows companies to quickly and easily run their applications based on business needs, not technical constraints.”

Containers or VMs: Which is best for me?

So are containers a more effective replacement for VMs? In many ways they are, but IT has become so complex that any decision to adopt containers or stick with VMs will need to take numerous variables into account. Experts mostly agree that organizations should consider containers if they need to flexibly deploy, run, and manage applications at scale.

“Containers can increase the density of computing significantly,” said IDC’s Hilwa. That’s because container technologies were created as a less resource-intensive alternative to VMs by companies that needed to run hyperscale applications and rapidly iterate in development, he explained. Container-based infrastructure can yield significant cost reductions.

When trying to imagine the difference in how containers can scale, it can be helpful to paint a picture of how they work, stressed Holger Mueller, vice president and principal analyst at Constellation Research Inc.

“You can think of containers as building a staircase to unknown heights,” Mueller explained. In a VM environment, the steps are one foot high in terms of resources used. With containers, they’re just an inch high. The container staircase is a lot easier to climb and “you can scale to any height,” Mueller said.

Holger Mueller: Containers let app developers “scale to any height”

That’s not to say containers trump virtual machines in every case. Most experts agree that VMs are more secure by virtue of their maturity and the fact that hypervisors provide less functionality than the typical Linux kernel and consequently present a smaller attack surface.

In a VM environment, “processes do not talk to the host kernel directly,” wrote Red Hat Inc. security engineer Daniel Walsh in a blog post. “They do not have any access to kernel file systems like /sys and /sys/fs, /proc/*.”

And while containers are generally a superior choice for hosting apps designed to scale, not every application needs to do so. Virtualization may be the better bet for small applications or older legacy apps.

“Sometimes you need to run a different OS on the same server,” said Rob Enderle, principal analyst at the Enderle Group. “Containers share an OS, which means they’re not always suitable. In contrast, VMs emulate hardware, which makes it possible to run a different OS instance in each one. It’s an important advantage when you need to run multiple operating systems on a single machine, or perhaps an older OS for compatibility reasons with older apps.”
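
A quick way to see the shared-OS point Enderle describes, assuming Docker on a Linux host (the Alpine image here is just an example):

    # The container's userland is Alpine Linux, but the kernel version printed
    # is the host's: containers share the host kernel rather than booting their own.
    docker run --rm alpine uname -r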

Better together

Then again, there are many proponents of containers who argue that the two technologies work better when used together. Docker is one of them. The company teamed up with virtualization giant VMware Inc. at VMworld 2014 to promote the idea of running containers inside VMs, with the main advantage being that the combination addresses the inherent security isolation problem of containers.

“There is a misconception that containers are merely replacements for VMs, when they actually solve different problems,” said CoreOS’s Dang. “The real question is not ‘when should I use containers?’ but ‘when does it make sense to use containers with or without VMs?’”

In answer to his own question, Dang said containers can probably provide sufficient security for an organization that deploys them in its own data center. However, when running in third-party multi-tenant environments or cloud services shared by many customers, it makes sense to run containers in VMs to provide that additional hardware-based security isolation.

“We recognized the need for these different use cases when we built rkt, a container run-time that optionally executes containers as virtual machines,” Dang said.
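
As a rough sketch of the option Dang describes (the flag names and stage1 image below are assumptions based on rkt’s documentation and may differ by rkt version):

    # Run a container image under rkt's KVM-based stage1 so it executes inside a
    # lightweight virtual machine instead of sharing the host kernel directly.
    # (Flag names are an assumption; adjust for your rkt version.)
    sudo rkt run --insecure-options=image \
        --stage1-name=coreos.com/rkt/stage1-kvm docker://nginx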

So virtualization is unlikely to be displaced any time soon, and not just because of the security implications. Despite the hype of the last couple of years, containers are still a relatively new technology, and while systems like Kubernetes and Docker Swarm make things much easier on the management side, those tools aren’t as comprehensive as virtualization management software like VMware’s vCenter or Microsoft’s System Center. Still, IDC’s Hilwa suggested that this may not always be the case, as technologies like Kubernetes are constantly evolving.

“Without strong orchestration or PaaS [platform-as-a-service], containers will not realize their full potential,” Hilwa warned. “This is why the industry is now focused on evolving a few options in orchestration. At some point a few will reach critical mass.”

Photo Credits: Hong Kong Photographic via Compfight cc
