Kubernetes can handle the gnarly plumbing, but can it handle the gnarly problems too?
If 2018 confirmed the position of Kubernetes at the center of the cloud-native computing universe, then 2019 could well be the platform’s ultimate stress test.
The widespread adoption of the container orchestration tool in the enterprise has helped pave the way for moving software workloads to public and private cloud platforms. The Cloud Native Computing Foundation’s recent enterprise user survey reported 200 percent growth in cloud-native technologies since last December, and Kubernetes came out as the top choice for container management among more than 80 percent of respondents.
Yet with growth and wider adoption come pressure points. In the case of Kubernetes, these run the gamut from security to complexity to scalability, and they loom large as key developers inside the open-source community gear up for the inevitable challenges of running clusters at 1,000 to 2,000 nodes or more.
“Once you start getting into these larger numbers, that’s when you start hitting these pressure points,” said Daniel Berg (pictured), distinguished engineer for IBM Corp.’s Cloud Kubernetes Service and Istio. “You start hitting different pressure points inside of Kubernetes, things that most customers are not going to hit, and they’re gnarly problems.”
Berg spoke with John Furrier and Stu Miniman, co-hosts of theCUBE, SiliconANGLE Media’s mobile livestreaming studio, during the KubeCon + CloudNativeCon event in Seattle. They discussed security vulnerabilities for the orchestration tool, how developers are dealing with issues of scale, potential flaws in a “lift and shift” strategy, efforts to simplify Kubernetes technology, and the platform’s future in a multicloud world. (* Disclosure below.)
This week, theCUBE features Daniel Berg as its Guest of the Week.
Security flaw discovered and patched
Security is an especially troublesome issue: the Kubernetes attack surface has grown along with its popularity, and expanding enterprise cloud adoption could easily entice malicious actors to take their shot at breaking into the vault.
The flaw, tracked as CVE-2018-1002105 and discovered in early December by researchers at Rancher Labs Inc., was not a small one. It affected the entire spectrum of Kubernetes services and gave attackers complete administrative access to any node running in a Kubernetes cluster, earning a 9.8 out of 10 on the Common Vulnerability Scoring System.
The vulnerability was quickly patched, but it served as a reminder that becoming a bigger platform also means becoming a bigger target.
“I’d be a little bit concerned if we didn’t find a security hole, because that means there isn’t enough adoption,” Berg said. “The community addressed it, communicated it, and all of the vendors provided a patch very quickly. We’re handling those security problems.”
In addition to security, Berg and other developers have been wrestling with the container tool’s scalability. In 2017, the release of Kubernetes 1.7 included a key feature that allows developers to plug in a managed application or object as if it were native to Kubernetes.
The feature, known as Custom Resource Definitions, or CRDs, gives developers another way to leverage Kubernetes machinery, such as its API services and cluster management, without wrestling with the internal plumbing. The platform’s extensibility has become one of its most attractive benefits, but it comes with pressure points as well.
“One of the most recent problems that we hit is scaling problems with CRDs,” Berg explained. “We’ve been heavily promoting CRDs, customized Kubernetes, which is a good thing. Well, it starts to hit another pressure point that you then have to start working through.”
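To make the extension mechanism Berg describes concrete, here is a minimal CRD manifest for a hypothetical `Database` resource under an invented `example.com` API group, using the `apiextensions.k8s.io/v1beta1` API that was current at the time of this interview. All names are illustrative:

```yaml
# Registers a custom "Database" resource type with the Kubernetes API server.
# Once applied, it behaves like a native resource (kubectl get databases).
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # Name must be <plural>.<group>
  name: databases.example.com
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    plural: databases
    singular: database
    kind: Database
    shortNames:
      - db
```

After `kubectl apply -f` on a manifest like this, the API server serves the new endpoint immediately, and a controller watching `Database` objects can reconcile them just as built-in controllers do for Deployments, which is exactly the customization path whose scaling limits Berg’s team ran into.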
Lift and shift at own risk
The power of Kubernetes has made it an attractive option for enterprises seeking to move legacy applications en masse into the cloud, a process known as lift and shift. However, integrating Kubernetes into complex cloud computing frameworks has put pressure on ensuring that containerized environments are properly certified for migrating large legacy application workloads.
While possible, it’s not an approach that Berg believes will make sense over the long term, because enterprises will ultimately need to rewrite a great deal of code to make it all work.
“We do see a lot of the whole ‘lift and shift’ and just put it on Kubernetes, but they really don’t get the value,” Berg said. “They don’t have the proper probes; they don’t have the proper scheduling hints; they don’t have the proper quotas; they don’t have the proper limits. So they’re not properly using Kubernetes, and therefore they don’t get the full advantage out of it.”
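As a sketch of what “properly using Kubernetes” means here, the following hypothetical Deployment fragment supplies the pieces Berg lists: health probes, a scheduling hint, and the resource requests and limits that make quota enforcement possible. Every name, image, and value is illustrative, not taken from the interview:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: legacy-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: legacy-app
  template:
    metadata:
      labels:
        app: legacy-app
    spec:
      # Scheduling hint: only place pods on nodes labeled for this workload
      nodeSelector:
        workload: general
      containers:
        - name: legacy-app
          image: registry.example.com/legacy-app:1.0  # placeholder image
          # Probes let Kubernetes withhold traffic from unready pods
          # and restart unhealthy ones
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
          # Requests inform the scheduler; limits cap consumption and
          # allow namespace ResourceQuota objects to be enforced
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
```

A lifted-and-shifted workload that omits these fields still runs, but the scheduler, autoscaler and self-healing machinery are flying blind, which is why Berg argues such deployments never see the full value of the platform.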
Kubernetes is still a complex technology, and IBM’s strategy has been focused on addressing that issue by offering cluster management and security services through a certified platform. In June, the company expanded its cloud capabilities by allowing customers to deploy multizone Kubernetes clusters in an effort to simplify management of containerized applications.
IBM also announced a partnership last month with LogDNA Inc. to streamline application troubleshooting. LogDNA’s Kubernetes integration is two lines of code, allowing developers to log an entire cluster in seconds.
“I’ve got a large team, and they live and breathe Kubernetes,” Berg said. “Every single release is tested and validated. Let the experts do it; focus on your business. That’s where the managed piece absolutely shines.”
It’s no accident that Berg’s title at IBM includes Istio. The open-source framework connects, monitors and secures microservices, including those running in the Kubernetes Engine.
This functionality will be important in an increasingly multicloud world. Istio can provide a central control plane for multiple clusters, and IBM now provides Istio support to control both an IBM Cloud private cluster and an IBM Kubernetes Service cluster.
Kubernetes can play a significant role in helping enterprises navigate the gnarly problems associated with multicloud adoption, according to Berg.
“Kubernetes has matured; it has gotten better,” Berg said. “You’ve got to focus on a standardized platform that you’re going to use, because multicloud is here. It’s here to stay. Give it another six to 12 months. That’s going to be the practice. That’s going to be what everybody does.”
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the KubeCon + CloudNativeCon event. (* Disclosure: IBM Corp. sponsored this segment of theCUBE. Neither IBM nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)