UPDATED 15:44 EDT / OCTOBER 21 2022


Five ways to tell whether Kubernetes is a good fit for your app – or not

In many tellings, the path to application modernization goes like this: First, you refactor your application into microservices. Next, you containerize each service. Finally, you deploy it on Kubernetes, the open source orchestration engine that has become the de facto platform for running containerized apps.

The thing is, not all modernization stories follow this narrative. In many cases, Kubernetes isn’t – and shouldn’t be – part of the journey to modern software deployment.

The reason is that, although Kubernetes is a great technology, it’s far from the best solution for deploying every application under the sun. Even if your application runs as microservices using containers, Kubernetes isn’t necessarily the best way to deploy it. There are other, simpler solutions – like Amazon ECS or AWS Lambda – for running containers. And if your application isn’t a set of microservices at all, Kubernetes is hands-down not a good way to run it.

So, which types of applications actually make sense to run via Kubernetes? Which functional requirements or architectural characteristics make an application a good fit for K8s (as Kubernetes acolytes sometimes call the platform) – or, conversely, make an alternative hosting solution a better choice?

This article answers those questions by walking through five key characteristics an application should have before it’s deployed using Kubernetes. We’ll also look at Kubernetes workload “antipatterns” – common mistakes that teams make when deciding whether to commit a given workload to K8s.

How to tell if your application is appropriate for Kubernetes

When evaluating whether an application is a good candidate for deployment via Kubernetes, consider how closely it aligns with each of the following characteristics.

1. Your app runs as small, concise, independently scalable services

Applications that operate as a set of small, concise services are good fits for Kubernetes. The main reason is that Kubernetes can dynamically scale each service independently, which in turn means your application can make the most efficient use possible of the available hosting resources.

Conversely, applications that run as “monoliths” – meaning the entire application operates as a single service – don’t benefit much from Kubernetes. Choosing to run a monolith on K8s means you’ll have a lot more complexity to contend with than you would if you chose a simpler deployment model – like running the app on a standalone VM – and you won’t gain many benefits in return because monoliths can’t scale granularly or dynamically.
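To make the contrast concrete, here’s a minimal, hypothetical sketch of the kind of per-service scaling rule Kubernetes supports. The service name and CPU threshold are illustrative assumptions, not values from any real application; each microservice can get its own rule like this, which a monolith cannot:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: checkout-service-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: checkout-service          # only this service scales; the rest are untouched
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70      # add replicas when average CPU use passes 70%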

2. Your app is hardware-agnostic

Applications that don’t require specific hardware configurations work well on Kubernetes because you can use K8s to set up a cluster of servers and deploy applications across them. Kubernetes decides where to place each application within the cluster and allocates resources to the apps as necessary. (Optionally, you can – and usually should – define resource requests, the minimum CPU and memory that Kubernetes should reserve for each app at deployment time.)

On the other hand, if you have an application that needs rigid CPU or memory allocations – or that requires access to specialized hardware devices, like GPUs – it typically makes more sense to deploy the application directly on VMs, rather than setting up a K8s cluster.
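For reference, here’s roughly what those resource minimums look like in practice: a sketch of a Deployment, with a placeholder name, image and numbers, that asks the scheduler to reserve a baseline of CPU and memory for each copy of the app while leaving Kubernetes free to decide which node it runs on:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: reporting-api                # hypothetical service name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: reporting-api
  template:
    metadata:
      labels:
        app: reporting-api
    spec:
      containers:
      - name: reporting-api
        image: registry.example.com/reporting-api:1.0   # placeholder image
        resources:
          requests:                  # minimum the scheduler reserves for each pod
            cpu: 250m
            memory: 256Mi
          limits:                    # hard ceiling the container may not exceed
            cpu: "1"
            memory: 512Mi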

3. Your app is one of many, and they can all coexist on shared infrastructure

Kubernetes lets you segment workloads from one another using a feature called namespaces, which are essentially virtual borders that you can define within a single cluster of servers. However, Kubernetes doesn’t provide the “hard” application isolation that you get from running each app on a dedicated virtual machine or physical server.

This means that Kubernetes is great if you have a large number of workloads that can share a cluster of servers, with each workload running in its own virtual environment. K8s is not so good if you need rock-solid isolation between workloads. Nor does it make as much sense if you have just a handful of workloads, in which case setting up and managing Kubernetes would be more trouble than it’s worth.
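As an illustration of how that segmentation looks, a namespace is often paired with a resource quota so that each team or workload gets its own bounded slice of the shared cluster. The names and limits below are hypothetical:

apiVersion: v1
kind: Namespace
metadata:
  name: team-payments                # hypothetical team namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-payments-quota
  namespace: team-payments
spec:
  hard:
    requests.cpu: "8"                # total CPU the namespace may request
    requests.memory: 16Gi            # total memory the namespace may request
    pods: "50"                       # cap on the number of pods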

4. Your app runs multiple services – some internal, some external

Typically, only some of the microservices inside a modern app need to be exposed externally, meaning they’re reachable from outside the application (but still inside the corporate network). Other services – those that move data between the application frontend and a backend database, for instance – don’t need connectivity to anything outside the application or the cluster of servers that hosts it.

Kubernetes is a great solution for these types of applications because it lets you define in a granular way which services will be corporate network-facing and which will be internal-only. It also – and this is a big deal – lets you conserve corporate network IP addresses, because internal-only services use cluster-internal addresses instead of consuming addresses from the corporate network, where they’re often in limited supply in enterprise environments.
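As a rough sketch of how that split is expressed, internal-only services typically use the default ClusterIP type, which assigns a cluster-internal address, while the handful of services that must be reachable from the corporate network are exposed through a LoadBalancer service (or an Ingress). The service names and ports below are illustrative:

# Internal-only service: reachable by other pods in the cluster,
# consumes no corporate network IP address
apiVersion: v1
kind: Service
metadata:
  name: orders-db-proxy              # hypothetical internal service
spec:
  type: ClusterIP
  selector:
    app: orders-db-proxy
  ports:
  - port: 5432
    targetPort: 5432
---
# Externally reachable service: gets an address on the corporate network
apiVersion: v1
kind: Service
metadata:
  name: storefront-api               # hypothetical external-facing service
spec:
  type: LoadBalancer
  selector:
    app: storefront-api
  ports:
  - port: 443
    targetPort: 8443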

5. Your app requires custom DNS settings

Kubernetes gives admins a great deal of control over how network names are resolved. That’s beneficial for applications that require custom DNS settings – as opposed to relying on generic DNS servers – to map host or service names to IP addresses.

Most conventional applications don’t require special DNS settings. But in enterprise environments where DNS configurations are managed manually, or for apps with a large number of internal services that need special DNS settings, Kubernetes is beneficial because it provides a level of control and flexibility over DNS that few other hosting environments match.
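To give a sense of that flexibility, DNS behavior can be overridden per pod. The sketch below, in which the nameserver address and search domain are made-up examples, tells Kubernetes to ignore the node’s DNS configuration and use custom settings instead:

apiVersion: v1
kind: Pod
metadata:
  name: legacy-integration           # hypothetical pod name
spec:
  containers:
  - name: app
    image: registry.example.com/legacy-integration:1.0   # placeholder image
  dnsPolicy: "None"                  # do not inherit DNS settings from the node
  dnsConfig:
    nameservers:
    - 10.0.0.53                      # example internal DNS server
    searches:
    - internal.example.com           # example search domain
    options:
    - name: ndots
      value: "2"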

When you absolutely should not use Kubernetes

To add context to the decision about whether or not to use Kubernetes, let’s look at the archetypal example of when not to go the K8s route.

It’s when you have a monolithic application that you decide to stuff into a Docker container. Although in a technical sense Kubernetes is capable of running your containerized monolith, deploying this type of application on Kubernetes brings a lot of challenges and almost no benefits.

Your app won’t be able to consume host resources efficiently because Kubernetes won’t be able to scale individual parts of it. As a monolith, the app can only scale up and down as a whole.

It’s also likely that your containerized monolith will require different configurations at different stages of delivery (development, testing and production). That means you lose one of the benefits containers offer in other situations: consistent configurations across environments, and therefore fewer opportunities for things to go wrong because of a configuration error.

Worst of all, you may end up having to bake security configuration data, such as access credentials, into your monolith’s container image. That increases the risk of sensitive information falling into the wrong hands.

The bottom line here is that, although technically speaking there’s nothing stopping you from running a monolith on Kubernetes as a container, doing so is never a good idea. It’s one way to get your app onto Kubernetes, but it’s an objectively bad way.

Conclusion: Kubernetes is awesome – sometimes

Let me conclude by emphasizing that I’m not anti-Kubernetes. Kubernetes certainly has a lot to offer, especially for apps that run as focused, discrete microservices, that can run well on a shared cluster and that require special networking configurations.

But for other applications, it’s likely that there’s an alternative deployment solution that will be less complicated to set up and administer than Kubernetes, while also delivering better performance, scalability and cost savings. Before jumping on the Kubernetes bandwagon just because everyone else seems to be doing it, it’s critical to step back and think about which deployment strategy is best for your specific app, not which one is most popular.

Derek Ashmore is application transformation principal at Asperitas Consulting LLC. He wrote this article for SiliconANGLE. 

Photo: Markus Distelrath/Pixabay
