UPDATED 16:17 EDT / JUNE 20 2016

Docker 1.12 will ship with built-in DevOps orchestration | #DockerCon

As containers continue to evolve and become a necessary part of a multitude of deployments, Docker, Inc. has remained on the ball with DevOps expectations and added built-in orchestration to its most recent release. Docker 1.12 will ship with new core features designed to make multi-host, multi-container orchestration easier: new API objects, zero-configuration defaults, and greater simplicity and security out of the box.

At the DockerCon 2016 event today, Docker announced the upcoming release of Docker 1.12, which will simplify the testing, configuration and deployment of Docker containers by building Swarm orchestration directly into the Docker Engine. In short, Docker wants to make Docker itself the best way to orchestrate Docker containers.

In a blog post on the subject of the Docker 1.12 release, the company took an opportunity to talk about the four principles of this new design: simplicity, resiliency, security, and compatibility.

While Docker 1.12 will be released for general availability next month, it is currently available as a public beta for OS X and Windows; versions designed to work with AWS and Azure are available only as part of a private beta.

Using Swarms to orchestrate Docker containers

Orchestration and customization tools for containers, especially Docker containers, already exist in the developer ecosystem, such as Chef (Chef Software, Inc.) and Puppet (Puppet, Inc.). However, with the introduction of Swarm, an internal engine modification that allows Docker engines to discover one another in an ad hoc manner, Docker hopes to add its own internal orchestration mechanism, which should combine well with configuration and deployment tools such as those mentioned above.

Initialization of a Swarm is as easy as a single command, “docker swarm init,” which spawns a self-organizing, self-healing group of Docker engines capable of identifying and configuring other nodes in the group. The first node acts as a manager for the others, meaning it can accept commands and schedule tasks, and each new node added becomes a worker that executes containers dispatched by the manager.
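
A minimal sketch of that flow, assuming placeholder addresses and the worker join token that “docker swarm init” prints (the values shown are illustrative, not taken from Docker’s announcement):

# On the first machine: initialize a swarm and make this engine a manager
docker swarm init --advertise-addr <MANAGER-IP>

# On each additional machine: join as a worker using the token printed by "init"
docker swarm join --token <WORKER-JOIN-TOKEN> <MANAGER-IP>:2377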

According to Docker, the new swarm mode uses the Raft consensus algorithm backed by an in-memory store, so reads are serviced directly from memory to keep scheduling performance as fast as possible.

The Docker Swarm provides a framework for a group of Docker engines to be managed as a single service that scales and load balances according to environmental conditions. Image courtesy of Docker, Inc.

Swarm orchestration takes some of the weight of configuring and tracking deployments off DevOps teams, and it mimics much of what current orchestration and configuration products on the market already do. In short, it is a competitor to those services, although it does not appear to manage the entire DevOps pipeline (development, testing, deployment, production, logging and so on). While it provides part of what DevOps teams need out of the box, it seems more likely to be integrated into existing tools than to replace them outright.

Scaling and resiliency services with Docker 1.12

In Docker 1.12, a single call to the Docker engine can also rapidly scale up a service, launching multiple container instances that are aware of one another and act together as a single scalable service. The example put forward in the Docker blog post on the subject had five nginx containers launch at once as one service behind its own load balancer.
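
A command along these lines reproduces that kind of setup; the service name and port mapping here are illustrative rather than taken verbatim from Docker’s post:

# Create a "frontend" service backed by five nginx replicas,
# load balanced on port 80 across the swarm
docker service create --name frontend --replicas 5 -p 80:80 nginx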

Earlier, Docker said that swarm nodes were “self-organizing and self-healing,” meaning that when something goes wrong in the environment, the swarm attempts to correct itself (or at least fail gracefully). In the example provided, if one of the machines running the nginx instances above were to suddenly go offline (unplugged or burnt out), a new container would launch on another node to take its place seamlessly.

Since the nodes running the service are load balanced and aware of one another, the Swarm engine is capable of redistributing computational work across the network of machines according to what is currently available in order to maintain the service.
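
One way to observe that behavior, assuming a node name of your own, is to drain a node and watch its tasks move to the remaining machines:

# Take a node out of rotation; its containers are rescheduled onto the other nodes
docker node update --availability drain <NODE-NAME>

# Check node status and availability across the swarm
docker node ls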

Furthermore, scaling up and down is as simple as a command: “docker service scale frontend=100.”
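
Assuming the “frontend” service from the earlier example, scaling and then verifying the result might look like this:

# Scale the service to 100 replicas, then confirm desired vs. running counts
docker service scale frontend=100
docker service ls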

Security with Docker 1.12 comes right out-of-the-box

According to GeekWire, Docker founder and CTO Solomon Hykes said at DockerCon 2016 that with Docker 1.12 “you get a secure system right out of the box.” Looking at the specifications, it seems that the system covers all the bases from encryption to authentication security the moment it launches.

Docker 1.12 comes with mutually authenticated TLS (Transport Layer Security), designed to provide authentication, authorization and encryption between every node immediately out of the box. When the first manager is started, the Docker Engine generates a new Certificate Authority (CA) and a set of initial certificates. After this step, every new node joining the swarm receives its own certificate and a randomly generated ID.
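
In practice, a new worker presents a join token issued by the swarm and receives its certificate as part of joining; retrieving that token is a single command (shown here as a sketch):

# Print the join command, including the secret token, that a new worker uses to join
docker swarm join-token worker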

The system will also automate maintenance of the swarm’s Public Key Infrastructure (PKI) and manage certificate rotation. This is needed because certificates are occasionally “leaked,” that is, discovered by attackers looking to break in. The frequency with which nodes refresh their certificates is set by the user and can be as short as every 30 minutes.
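
That rotation interval can be tuned on a running swarm; the 30-minute value below is just an example:

# Rotate node certificates every 30 minutes instead of the default interval
docker swarm update --cert-expiry 30m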

It is also possible for a developer using the Docker Swarm to use their own Certificate Authority: the Docker 1.12 engine supports an external-CA mode for third-party security, in which the managers simply relay the Certificate Signing Requests of nodes attempting to join on to an external URL.
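
Configuring that mode is expected to look roughly like the following; the protocol and URL are assumptions for illustration, not values from Docker’s announcement:

# Initialize a swarm that forwards certificate signing requests to an external CA
docker swarm init --external-ca protocol=cfssl,url=https://ca.example.com/api/v1/cfssl/sign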

Bundling for fully portable application stacks

The release of Docker 1.12 also includes an experimental file format called a Distributed Application Bundle.

A Docker Bundle file is a declarative specification for a set of services: which image each service runs, which networks to create and how the containers in those services are networked together. It is basically a well-defined configuration plus the set of containers that will execute the services together.

As a result, Bundle files are designed to be fully portable deployment artifacts, allowing software delivery pipelines to ship complete application stacks that simply unpack themselves and run.

Docker Compose has experimental support for creating bundle files, and with Docker 1.12 running in swarm mode, bundle files can be created, shipped and unpacked for direct deployment.
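
Under those experimental flags, the workflow is expected to be roughly a two-step “bundle, then deploy”; the stack name is illustrative and the exact commands may change before general availability:

# From a Compose project, generate a distributed application bundle (myapp.dab)
docker-compose bundle

# On a swarm-mode engine with experimental features enabled, deploy the bundle as a stack
docker deploy myapp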

This would greatly simplify testing and deployment of services for developers and DevOps teams, as it would mean a bundle could be sealed at the developer side, shipped, and then opened at the operations side for testing or deployment. Bundles are primarily designed for rapid deployment of services to developer laptops or other environments, but they may have further-reaching uses for DevOps teams looking to unpack an application stack wherever it is needed.

Featured image credit: Courtesy of Docker, Inc.
