

The information technology industry has a complexity problem, and it is prompting deeper conversations among thought leaders about how to solve it.
The days of building applications on a single server using a monolithic architecture have given way to developing numerous microservices, packaging them into containers, and orchestrating the entire production with Kubernetes in a distributed cloud.
It’s no wonder that in global survey results released by Pegasystems Inc. barely two months ago, three out of four employee respondents felt job complexity had continued to rise and they were overloaded with information, systems and processes. Nearly half singled out digital transformation as the cause.
Kubernetes has proven a great tool for driving modern IT infrastructure, yet it has also figured prominently in the design of overly complex systems. One of the tech industry’s most prominent thought leaders called attention to this issue in a recent interview during DockerCon 2022, with virtual coverage produced by theCUBE, SiliconANGLE Media’s livestreaming studio.
“The world is going to collapse on its own complexity,” noted development leader Kelsey Hightower said during a conversation with Docker Inc. Chief Executive Scott Johnston. “The number of teams I meet, and I won’t mention any names, say, ‘Kelsey, we’re going to show you our Kubernetes stack.’ Twenty minutes later, they are at piece number 275. Who’s going to maintain all of this? Why are you doing this?”
Hightower’s anecdote highlights the need for standardized tools within the Kubernetes developer community. As Kubernetes has matured, it has become a platform for building other platforms, and platform-as-a-service offerings such as Cloud Run, OpenShift and Knative now handle a great deal of operational management tasks on developers’ behalf.
There has also been a move to create common interfaces within Kubernetes that enable adoption without requiring open-source community-wide agreement on implementation. These include the Container Network Interface, the Container Runtime Interface and Custom Resource Definitions.
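To give a flavor of what that extension point looks like, here is a minimal sketch that registers a hypothetical “Widget” custom resource using the official Kubernetes Python client; the group, kind and field names are invented for illustration and are not drawn from any project mentioned in this article.

```python
# Minimal sketch: register a hypothetical "Widget" Custom Resource Definition
# so the cluster's API server can store and serve it like a built-in object.
# All names here (example.com, Widget, size) are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # read credentials from the local kubeconfig

crd = {
    "apiVersion": "apiextensions.k8s.io/v1",
    "kind": "CustomResourceDefinition",
    "metadata": {"name": "widgets.example.com"},
    "spec": {
        "group": "example.com",
        "scope": "Namespaced",
        "names": {"plural": "widgets", "singular": "widget", "kind": "Widget"},
        "versions": [{
            "name": "v1",
            "served": True,
            "storage": True,
            "schema": {
                "openAPIV3Schema": {
                    "type": "object",
                    "properties": {
                        "spec": {
                            "type": "object",
                            "properties": {"size": {"type": "integer"}},
                        }
                    },
                }
            },
        }],
    },
}

# Once created, "kubectl get widgets" works just like any built-in resource.
client.ApiextensionsV1Api().create_custom_resource_definition(body=crd)
```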
Despite the IT industry’s growing complexity, Hightower sees hope in the Kubernetes community’s ability to centralize around standardized tools.
“These contracts matter, and these standards are going to put complexity where it belongs,” Hightower said. “If you are a developer, yes, the world is complex, but it doesn’t mean that you have to learn all of that complexity. When you standardize you get to level the whole field up and move much faster. It’s got to happen.”
The challenge for many organizations is how to balance the requirements of running a data-driven business with the complexity that brings. While some enterprises have merely dipped their toes into the container deployment waters, others have jumped headfirst into the pool.
A Canonical Ltd. cloud operations report found that Kubernetes users commonly deploy two to five production clusters. The European Organization for Nuclear Research, known as CERN, is the largest particle physics laboratory in the world and runs approximately 210 clusters. Then there is Mercedes-Benz, which has pursued another model entirely. The global automaker gave a presentation at KubeCon Europe in May that described how it uses more than 900 Kubernetes clusters.
The German automaker was an early adopter of Kubernetes. It began experimenting with the container orchestration tool in 2015, only a year after Google LLC open-sourced the technology.
“We started small as a grassroots initiative,” Andrea Berg, manager of corporate communications at Mercedes-Benz North America Corp., said in comments provided to SiliconANGLE. “It was driven in a ‘from developers to developers’ mindset and became more and more successful. We helped change the mindset of our company towards cloud-native and free and open-source software.”
Mercedes-Benz Tech Innovation, the company’s subsidiary for overseeing company-wide technology, has built out a structure that supports hundreds of application development teams. As the number of Kubernetes clusters grew, the company realized that it would need a tool to manage them. It turned to Cluster API on OpenStack, a Kubernetes-native way to manage clusters across different cloud providers.
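The core idea behind Cluster API is that each workload cluster is itself represented as an ordinary Kubernetes object on a management cluster. The sketch below, which assumes such a management cluster is reachable via the local kubeconfig, uses the Kubernetes Python client to list those cluster objects; the API group and version shown are the upstream Cluster API defaults and may differ by installation.

```python
# Hedged sketch: enumerate the workload clusters a Cluster API management
# cluster knows about. Each one is a "Cluster" custom resource.
from kubernetes import client, config

config.load_kube_config()  # connect to the assumed management cluster
api = client.CustomObjectsApi()

clusters = api.list_cluster_custom_object(
    group="cluster.x-k8s.io",  # Cluster API's resource group
    version="v1beta1",         # assumed version; check your installation
    plural="clusters",
)

for c in clusters.get("items", []):
    name = c["metadata"]["name"]
    phase = c.get("status", {}).get("phase", "Unknown")
    print(f"{name}: {phase}")
```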
The company also created a culture in which developers understood that once applications were completed, there would be no ticket desk to run them. Automation tools would drive DevOps.
“We realized that a single shared cluster would not fit our needs,” Jens Erat, DevOps engineer at Mercedes-Benz, said during a KubeCon Europe presentation. “We had engineers with in-depth knowledge; we understood the tech and decided to create our own solution instead. You build it, you run it. There’s an API for that.”
The API-driven path toward easier enterprise deployment of Kubernetes received a boost in March, when the Cloud Native Computing Foundation announced that it would accept Knative as an incubating project. Originally developed by Google, Knative is an open-source, Kubernetes-based platform for managing serverless and event-driven applications.
The concept behind serverless technology is to bundle applications as functions, upload them to a platform, and have them automatically scaled and executed. Developers only have to deploy apps. They don’t have to worry about where those apps run or how a given network is handling them.
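As a rough illustration of that contract, the minimal sketch below is the kind of function a developer might ship to a platform such as Knative: the platform injects the listening port through the PORT environment variable and takes care of routing and scaling, including scaling to zero when idle.

```python
# Minimal sketch of a serverless workload: the developer writes only this
# handler; the platform supplies the port, routes traffic and scales instances.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Hello from a serverless function\n")

if __name__ == "__main__":
    # Knative passes the listening port via the PORT environment variable.
    port = int(os.environ.get("PORT", 8080))
    HTTPServer(("", port), Handler).serve_forever()
```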
A number of major companies have a vested interest in seeing Knative become more widely used. Red Hat, IBM, VMware and TriggerMesh have worked with Google to improve Knative’s ability to manage serverless and event-driven applications on top of the Kubernetes platform.
“We see a lot of interest,” Roland Huss, senior principal software engineer at Red Hat Inc., said in an interview with SiliconANGLE. “We heard before the move that many contributors were not looking into Knative because of not being part of a neutral foundation. We are still ramping up and really hope for more contributors.”
The road for Knative has been a bumpy one, which has exposed growing pains as the Kubernetes community has expanded. Google took some heat for previously deciding not to donate Knative, before announcing a change of heart in December.
Ahmet Alp Balkan, one of Google’s engineers who worked on different aspects of Knative prior to last year, penned a blog post that expressed concerns around how the serverless solution had been positioned within the developer community. Among Balkan’s concerns was the description of Knative as a building block for Kubernetes itself.
“I think we overestimated how many people on the planet want to build a Heroku-like platform-as-a-service layer on top of Knative,” Balkan wrote. “Our messaging revolved around these ‘platform engineers’ or operators who could take Knative and build their UI/CLI experience on top. This was the target audience for those building blocks Knative had to offer. However, this turned out to be a very small and niche audience.”
Thought leaders in the Kubernetes community have also become more attuned to security for the container orchestration tool. Feedback from the user base has validated this focus.
In May, Red Hat published the results of a survey that found that 93% of respondents had experienced at least one security incident in their container or Kubernetes environments, and more than half had delayed or slowed application deployment over security concerns. The report’s findings gained additional credence in late June, when scanning tools used by the cybersecurity research firm Cyble Inc. uncovered 900,000 Kubernetes instances exposed online.
“Real DevSecOps requires breaking down silos between developers, operations and security, including network security teams,” said Kirsten Newcomer, director of cloud and DevSecOps strategy at Red Hat, during a KubeCon Europe interview with SiliconANGLE. “The Kubernetes paradigm requires involvement. It forces involvement of developers in things like network policy for things like the software-defined network layer.”
There is also an expanding list of open-source tools for hardening Kubernetes environments. KubeLinter is a static analysis tool that can identify misconfigurations in Kubernetes deployments. Security-Enhanced Linux, a default security feature implemented in Red Hat OpenShift, provides policy-based access control. And the CNCF project Falco acts as a form of security camera for containers, detecting unusual behavior or configuration changes in real time. Falco has reportedly been downloaded more than 45 million times.
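For a sense of what that kind of static analysis involves, here is a deliberately simplified sketch, not KubeLinter itself, that flags one common misconfiguration: Deployment containers that do not require non-root execution. The file name and the single check are hypothetical stand-ins for the much larger rule sets real tools apply.

```python
# Illustrative sketch (not KubeLinter): scan Kubernetes manifests and warn
# about Deployment containers that omit runAsNonRoot in their securityContext.
import sys
import yaml  # PyYAML

def lint(path: str) -> list[str]:
    warnings = []
    with open(path) as f:
        # A manifest file may contain multiple YAML documents.
        for doc in yaml.safe_load_all(f):
            if not doc or doc.get("kind") != "Deployment":
                continue
            name = doc["metadata"]["name"]
            pod_spec = doc["spec"]["template"]["spec"]
            for container in pod_spec.get("containers", []):
                ctx = container.get("securityContext", {})
                if not ctx.get("runAsNonRoot"):
                    warnings.append(
                        f"{name}/{container['name']}: container may run as root"
                    )
    return warnings

if __name__ == "__main__":
    for warning in lint(sys.argv[1]):
        print(warning)
```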
With Kubernetes, it is easy to get caught up in metrics surrounding enterprise adoption, security and application deployments. Yet behind the increased dependence on containers lies an important element that gets lost in the noise: whether or not Kubernetes is complex, a lot of people now depend on this technology to work.
Near the end of his dialogue this spring with Docker’s Johnston, Hightower related a story about his previous work for a financial firm that processed shopping transactions for families needing government assistance. At one point, the transaction processor crashed and Hightower joined his colleagues in a “war room” as programmers followed a laborious set of steps to reboot the system and get the platform working.
“We’re just looking at this screen, some things were turning green and some were turning red, and the things turning red were the result of payments being declined,” Hightower recalled. “Each of those items turning red on the dashboard represented someone with their whole family trying to buy groceries. Their only option was to leave all of their groceries there. What we have to do as a community is remind ourselves that it’s people over technology, always.”