UPDATED 14:43 EDT / JUNE 04 2023


Cloud conundrum: The changing balance of microservices and monolithic applications

Around the turn of the century, there was really just one choice for building business applications: using a server that you owned and that sat in your own data center. Then came the cloud and with it the debate over whether an app would be better run in the cloud or on-premises.

As Wikibon Chief Analyst Dave Vellante opined in a recent Breaking Analysis, “While we believe there is plenty of upside in cloud, we think it’s going to come from new innovation, new workloads and new industry solutions rather than lifting and shifting the remaining on-prem workloads.”

But that debate seems old-school now, because the question is no longer just whether to migrate to the cloud, but how the cloud app itself is constructed. The landscape has grown far more complicated, with virtual machines, cloud computing, microservices and containers. The modern developer has almost too many choices and has to balance the tradeoffs among those architectures.

Case by case

A few weeks ago, developers of Amazon.com Inc.’s Prime Video streaming service posted a provocative screed describing their journey from microservices back to what is now popularly known as a monolithic application — meaning that everything is contained in a single code base.  The application in question was an internal monitoring tool that was getting expensive to run and hitting a series of performance bottlenecks.

The dev team rewrote the tool so all of its various components ran in a single Amazon Web Services Elastic Container Service task, saving 90% of its infrastructure cost in the process. The post concluded, “Microservices and serverless components are tools that do work at high scale, but whether to use them over a monolithic app has to be made on a case-by-case basis.”
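
To make that shift concrete, here is a minimal, hypothetical Python sketch of the consolidation pattern, not Amazon’s actual code: stages that once ran as separate services, handing data off over the network or through external storage, become plain function calls inside a single task.

    # Hypothetical sketch of the consolidation pattern, not Amazon's code:
    # three stages that formerly ran as separate services now run in one process.

    def split_into_frames(stream_chunk: bytes) -> list:
        # Stage 1 (hypothetical): break a chunk of video into frames.
        return [stream_chunk]

    def detect_defects(frames: list) -> list:
        # Stage 2 (hypothetical): analyze each frame for audio/video defects.
        return ["ok" if frame else "block_corruption" for frame in frames]

    def publish_results(results: list) -> None:
        # Stage 3 (hypothetical): push findings to a monitoring dashboard.
        print("publishing %d results" % len(results))

    def monitor(stream_chunk: bytes) -> None:
        # In-process hand-offs replace the network calls and intermediate
        # storage that sat between these stages when they were microservices.
        publish_results(detect_defects(split_into_frames(stream_chunk)))

    if __name__ == "__main__":
        monitor(b"\x00" * 1024)

The point is not the placeholder logic but the shape: because everything shares one process, the per-stage data transfer that drove much of the tool’s cost simply disappears.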

While not quite the same thing as running a bunch of COBOL on your father’s IBM mainframe, the decision to retrench still shook up many readers. “So, monoliths aren’t dead (quite the contrary), but evolvable architectures are playing an increasingly important role in a changing technology landscape, and it’s possible because of cloud,” said Amazon.com Inc. Chief Technology Officer Werner Vogels.

As Vogels said in his post, “There is no one-size-fits-all. If there are a set of services that have the exact same scaling and performance requirements, same security vectors, and most importantly, are managed by a single team, it is a worthwhile effort to see if combining them [into a monolithic app] simplifies your architecture.” He pointed out that AWS’ S3 storage service has now evolved into more than 300 related microservices. “There is not one architectural pattern to rule” every application, he added.

Part of the issue is that microservices applications are typically composed of numerous containers, so wiring them together can complicate their performance. “In its early days, Kubernetes was primarily focused on building features for microservice-based workloads,” Maciek Różacki, a Google product manager working on enabling computational workloads on Kubernetes and Google Kubernetes Engine, wrote in The New Stack. But although containers are useful for allocating resources and make it easier to divide up particular technical tasks in the design of a piece of software, they are mainly about how the software is packaged and deployed.

The trick is deciding how to split up these tasks within the app. Done right, the split can make the code easier to manage, and easier to scale up and down as demands change. For instance, the parts of the code that need extra computing power can be scaled up on their own as microservices, whereas with a monolithic code base everything has to scale up to relieve a single bottleneck. That assumes, of course, that the services are written well.
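
As a rough illustration, here is how selectively scaling one such service might look with the official Kubernetes Python client; the cluster, the local kubeconfig and the deployment name “render-worker” are assumptions made for the example.

    # Scale only the compute-hungry service; the rest of the app is untouched.
    # Assumes a reachable cluster, a local kubeconfig and the `kubernetes`
    # Python package; the deployment name is hypothetical.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    apps.patch_namespaced_deployment_scale(
        name="render-worker",
        namespace="default",
        body={"spec": {"replicas": 12}},
    )

With a monolith, the only equivalent lever is running more copies of the entire application.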

Done properly, a collection of microservices is easier to understand and manage. A service can be matched with a specific business function, for example. But having hundreds or thousands of services means teams have to do a better job of setting up the connections among those services and testing them carefully.

Done poorly, the setup can add network and other communication latencies that slow down the app and defeat its ability to scale. These connections rely on APIs, which are often not carefully vetted. “The continued adoption of microservices emphasizes the importance of secured APIs,” according to an analysis in the Cloud Security Alliance’s Top Threats research last year.
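
A small, hedged sketch shows what each hop between services adds compared with an in-process call: a network round trip that needs a timeout, authentication and failure handling. The URL and token below are placeholders, not a real endpoint.

    # Every microservice-to-microservice hop is a network call that needs a
    # timeout, an authenticated API and error handling; an in-process
    # function call needs none of these. The URL and token are placeholders.
    import requests

    def get_inventory(item_id: str) -> dict:
        try:
            resp = requests.get(
                "https://inventory.internal.example/api/items/" + item_id,
                headers={"Authorization": "Bearer <token>"},  # secure the API
                timeout=2,  # each hop adds latency, so bound it explicitly
            )
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            # Network failure is now part of normal control flow.
            return {"item_id": item_id, "status": "unavailable"}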

New issues

Layered on top of this decision are two other important factors. First, cybercriminals have gotten more sophisticated and are targeting VM collections, what CrowdStrike Holdings Inc. calls Big Game Hunting: typically hundreds of VMs run on the same physical server under a single hypervisor, and an attack on that layer can compromise the whole lot at once. They are also targeting container software supply chains, because the explosion of containers means that not every one is as secure as it could be.

Second, containers aren’t really designed for those old-style mainframe batch processing jobs, at least not yet. As more high-performance computing workloads move to cloud-based applications, containers need to move into this arena.

Różacki describes several efforts underway to provide this batch-processing support, plus ways to tune Kubernetes for these more demanding workloads. He’s working with the Norwegian geophysical analytics company PGS ASA to run more than a million virtual CPUs on multiple Google Kubernetes Engine clusters, replacing a 260,000-core Cray supercomputer. That is an impressive step.

Containers and microservices are great for packaging an application and everything it depends on in a single place, and they make it easier for developers to run code across many different platforms and types of computing equipment. Containers are also better at scaling an application up and down than starting and stopping a whole bunch of VMs, since a container takes a fraction of a second to bring up, versus minutes for a VM.
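
The startup gap is easy to see for yourself. The rough sketch below assumes a local Docker daemon and the Docker SDK for Python; it is an illustration rather than a benchmark, and the image is pulled ahead of time so download time isn’t counted.

    # Rough timing illustration, not a benchmark. Assumes a local Docker
    # daemon and `pip install docker`; the image is pulled first so that
    # download time isn't counted against startup.
    import time
    import docker

    client = docker.from_env()
    client.images.pull("nginx:alpine")

    start = time.monotonic()
    container = client.containers.run("nginx:alpine", detach=True)
    print("container running after %.2f seconds" % (time.monotonic() - start))

    container.stop()
    container.remove()

Booting a full guest operating system in a VM takes far longer, which is why containers win when demand swings quickly.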

But there are still tradeoffs. Here is one way to describe the situation: “The microservices architecture is more beneficial for complex and evolving applications. But if you have a small engineering team aiming to develop a simple and lightweight application, there is no need to implement them.”

Stepping stones

But it would be wise not to discount VMs entirely. They can be an important stepping stone from the on-premises world, as Southwire Co. LLC’s Chief Information Officer Dan Stuart told SiliconANGLE in a recent interview. “We had a lot of old technology in our data center and were already familiar with VMware, so that made the move to Google’s Cloud easier,” he said. “It helped that we were familiar with many of the virtualization concepts already.”

Adrian Cockcroft, a former Amazon Web Services vice president and now a technology adviser, likes to distinguish between two application strategies, serverless-only and serverless-first. That emphasis on serverless may not work for every app, and he recommends it chiefly for low-traffic apps and places where latency isn’t critical.

There’s a “realization that the complexity of Kubernetes has a cost, which you don’t need unless you are running at scale with a large team,” he wrote recently. “Maybe the answer to the question of whether to build with microservices or a monolith is neither, you should be calling an existing service rather than rolling your own.”

And as Andrew Sullivan and Alex Handy recently wrote in another article for The New Stack, “change is hard. It is important to find a method of supporting both containers and virtual machines inside your environments, as the only real mistake you can make is to ignore one of these technologies completely.”

Southwire’s Stuart points out another reason to examine applications carefully. “We moved many things over to an SD-WAN [software-defined wide-area network], which required larger internet bandwidth to handle the traffic,” he said. “But that also improved our app performance too.”

Cloud computing thought leader Reuven Cohen imagines a future where “the majority of applications will be built, deployed, and managed by intelligent agents. These agents, powered by AI and cloud technologies, will leverage the best architectural paradigms suited to the specific requirements of each application. They will navigate the complexities of choosing between monolithic, serverless, or even emerging architectural approaches based on their ability to optimize efficiency, adaptability, and scalability.”

Until then, it is up to us humans to make these choices.

Image: TheDigitalArtist/Pixabay
