Kubernetes co-developers aim to make the container orchestration software ‘boring’
Kubernetes, the orchestrator for the freestanding software operating environments called containers, has been described as the next Linux — meaning that it will soon be so ubiquitous that no one will even be aware that it’s there.
That suits Craig McLuckie (pictured, right) and Joe Beda (left) just fine. The two VMware Inc. executives were among the co-creators of Kubernetes while working at Google LLC seven years ago, and they continue to be actively involved in nurturing the Kubernetes kernel and its sprawling ecosystem through the Cloud Native Computing Foundation and commercial markets. Their infrastructure automation startup, Heptio Inc., was acquired by VMware in 2018.
In an interview with SiliconANGLE, McLuckie and Beda marveled at the rapid evolution of that ecosystem and the speed of Kubernetes adoption by information technology organizations. “Good grief, it’s been humbling,” McLuckie said. They also believe that innovation in the open-source and commercial software spheres will solve most of the complexity and security problems while moving Kubernetes toward a utility that empowers increasing levels of automation.
Beda likened the software’s evolution to automobiles. “The first wave of infrastructure management was like driving a car today: You put your foot on the gas and turned the wheel,” he said.
The state of the technology today is more like a semi-autonomous vehicle where the car does the driving but the driver is still actively involved in navigation. Thanks to the high-level operators inherent in Kubernetes, he said, “it’s getting more and more like telling the car to take you to the grocery store and you don’t want to worry about the twists and turns of getting there.”
It just works
The orchestrator’s power isn’t just about running containers, McLuckie said. “It’s about bringing intent-driven control to an environment, so you define the desired state and the control systems drive to that state.” So-called declarative configuration could ultimately do away with much of the manual work and scripting that currently goes into setting up Kubernetes clusters.
“It’s the ability to say, ‘I want a well-formed Kubernetes cluster’ and have something that understands what that looks like and creates that well-formed cluster,” McLuckie said. The recently released Kubernetes Cluster application programming interface “has been surprisingly powerful and useful” in that respect, he said.
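The Cluster API McLuckie describes expresses that intent as ordinary Kubernetes resources: a declarative manifest states the desired cluster, and controllers reconcile reality toward it. A minimal sketch of such a manifest might look like the following (the names and the Docker infrastructure provider are illustrative placeholders; the exact fields depend on the provider and API version in use):

```yaml
# Declares the desired state: "I want a well-formed Kubernetes cluster."
# Cluster API controllers watch this object and drive the real
# infrastructure toward it.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: demo-cluster            # hypothetical cluster name
  namespace: default
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  controlPlaneRef:              # delegates control-plane details
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: demo-control-plane
  infrastructureRef:            # provider-specific; swapped per environment
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: DockerCluster
    name: demo-cluster
```

Applying this with `kubectl apply` states only the desired end state; no imperative setup scripting is involved, which is the shift from manual cluster assembly that McLuckie is pointing to.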
The features and functions that will be implemented in the Kubernetes kernel in the future won’t excite anyone, but that’s the intention, the co-developers said. “We want the kernel to be boring and stable and predictable,” Beda said.
In the same way that Linux has become part of the plumbing of information technology and cloud infrastructure, the Kubernetes community will rely on the ecosystem nurtured by the CNCF and commercial vendors to layer functionality on top of that core. “We’ve been trying to concentrate on the interfaces for people to build on top of Kubernetes rather than building the features into Kubernetes itself,” Beda said.
Managing at scale
Two evolutionary challenges that Kubernetes developers are confronting right now are management at scale and security. McLuckie and Beda admitted that they have been surprised by the diversity of use cases that emerged.
“The way Google ran Borg (Kubernetes’ predecessor) and precursors like Mesos and Cloud Foundry was on very large projects and clusters,” Beda said. “With the dynamism of cloud, we have seen people move to many small clusters as well. That took us by surprise a bit.”
The growth of infrastructure at the edge, particularly in the form of bare-metal hardware, has presented unanticipated challenges that Kubernetes has handled quite well, Beda said. “When you go from managing a small number of clusters to tens or hundreds of thousands of clusters dispersed around the world, it is a fundamentally different problem” than managing local clusters.
Fortunately, he added, “We’re finding that the declarative APIs are very amenable to creating hierarchical systems with a global controller that can coordinate what’s happening across a large number of independent sites and locations.”
As a result, McLuckie sees the ecosystem solving the problem of simplifying very large rollouts, “so you can deploy to 10 or 100 or 1,000 locations and roll back,” he said. “Then we have to think about how to make it available to developers, so it feels more like a [software-as-a-service] experience but running in 10,000 locations.”
Many images managed as one
Kubernetes’ optionality and consistency can simplify infrastructure installations across a diverse range of operating environments, McLuckie said. A retailer whose onsite infrastructure ranges from simple to sophisticated depending on the store, for example, can use those qualities to roll out across a diverse assortment of hardware platforms.
Security is still a work in progress. Red Hat Inc.’s most recent State of Kubernetes Security survey reported that 94% of respondents have experienced a container-related security incident during the last 12 months and 55% have delayed deployment of a Kubernetes application into production because of security issues.
While acknowledging that “we need to do a better job” on security, Beda said part of the problem is also the novelty of the platform and the skills required to run it. “With new tools come new skills,” he said. “A lot of people are doing this for the first time and learning hard lessons along the way.”
In the longer term, however, Beda said Kubernetes’ observability creates the potential to secure IT environments much better than has been possible in the past.
“You can have a ton more insight in terms of what is running, who ran it, who made changes and what the communication patterns are,” he said. “We can move to the world where you have an audit list of every piece of software that is running in your data center, which lines of code are compiled into which binaries, the people who wrote that code and a chain of custody for all of it.”
That way, he added, “you can track what security scans were run when and who to talk to about remediating a problem. It’s not just about replicating the security stance we have but going far beyond that.”
A replacement for VMs?
A perpetual question since Docker Inc. introduced the modern software container in 2013 has been whether the technology would make virtual machines obsolete. Beda and McLuckie’s response to that question would warm the heart of their employer.
While acknowledging that there continues to be confusion in the market about the distinction between containers and VMs, the answer is clear in McLuckie’s mind. “VMs abstract away the specifics of your infrastructure. It’s a very mature technology,” he said. “The container is a way to atomically package up your work and deploy it.”
He called the perception that VMs and containers solve the same problem “a false dichotomy.” Although the technologies have come closer together, he said, “most organizations will be better served with a strategy that lets you use VMs and containers for what they’re really good for.”
That means virtual machines will be the superior solution for matching workloads to available machine resources, Beda added. “You have better efficiencies when you can right-size the infrastructure and VMs do so much better at that than bare metal,” he said. Containers will be a delivery vehicle for applications. “We’re a long way from containers making VMs irrelevant,” McLuckie said.