UPDATED 11:53 EST / JULY 18 2020

CLOUD

As cloud native computing rises, it’s transforming culture as much as code

Rice University is in the process of shifting from a legacy on-premises enterprise resource planning application to one built entirely in the cloud. Along the way, the university is also shedding about 20 related special-purpose applications it has accumulated over the years.

“These are little products that fit a space at one time,” said Klára Jelínková, Rice’s vice president for international operations and information technology. “We are ripping those out and replacing them so that we don’t end up being at the mercy of niche players that aren’t ready to redeploy as cloud native.”

Jelínková sees the ERP migration as an opportunity to simplify a complicated patchwork of specialty applications. She’s looking to replace it with an environment centered on an ERP engine that supports a constellation of partners, which can then integrate tightly with the core platform through published application program interfaces. APIs are one of the byproducts of cloud native computing, a concept that’s shaking up the way organizations think about how they build and buy software.

For Rice University’s Jelínková, adopting a cloud native architecture was an opportunity to simplify a complex portfolio. Photo: Rice University

Interest in cloud native computing has surged with the growing popularity of Kubernetes, the open-source orchestration manager for the self-contained software operating environments called containers. But some experts say fascination with containers and the application portability they enable has obscured the broader benefits of cloud native functions, such as nearly limitless scalability, automation and interoperability. Organizations that hesitate to go the container route can still tap into cloud native features, often without major restructuring of their existing portfolios.

Disciples say cloud native is a concept that transcends specific technologies. Rather, it’s a new way of thinking about how software is built. “The overall story for cloud native is about not just moving to cloud, but optimizing for technology, organization and skills for the cloud,” said Stu Miniman, senior analyst at Wikibon, SiliconANGLE’s sister research company.

The term “cloud native” came into vogue with the arrival of Kubernetes and the 2015 establishment of the Cloud Native Computing Foundation to nurture the Kubernetes ecosystem. But although Kubernetes and containers are important cloud native elements, they aren’t essential to this new way of building software, experts say. And applications don’t have to be fully cloud native to deliver substantial new business benefits.

“It’s about being laser-focused on addressing and solving business pain points, which are often centered around the speed and agility required to get new products and services into the market,” said Rajeev Kaul, global managing director of cloud native engineering at Accenture Plc.

The industry is still debating the definition itself, even as it hashes out where it makes the most sense and how quickly organizations need to adopt its principles in order to stay competitive. The CNCF defines cloud native as “scalable applications” that run in “modern, dynamic environments such as public, private and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure and declarative APIs exemplify this approach,” the organization says.

But some people say that definition reflects the CNCF’s container focus and doesn’t take into account the broader range of cloud native options. The term has also been co-opted by technology providers to fit their own product profiles.

“The term is overloaded,” said Craig Lowery, research director in the technology and service provider group at Gartner Inc. “It’s come to mean whatever it takes to sell people things. The best way to think about cloud native is to do something differently than you were able to do before because of cloud technologies.”

The time is right for the industry to coalesce around a set of common principles for cloud native computing, as many organizations come to the realization that their initial forays into the cloud yielded limited returns. An International Data Corp. survey last year found that 80% of respondents had repatriated workloads from public cloud environments back on-premises and, on average, expected to move half of their public cloud applications to private locations over the next two years.

Gartner’s Lowery: The term cloud native has “come to mean whatever it takes to sell people things.” Photo: Twitter

Many early public cloud adopters moved existing on-premises applications to virtual machines in the cloud, only to discover that benefits were minimal or worse. “You should not just take an existing app and throw it in the cloud and that’s it,” Miniman said. “Chances are it’ll be more expensive and harder to manage.”

Corey Quinn, cloud economist at cloud consultancy The Duckbill Group, goes so far as to say the popularity of hybrid cloud – in which processing is split between on-premises and public cloud infrastructure – is as much a function of failed cloud migrations as it is a preferred architecture.

“Virtually no one sets out with a plan to either be multicloud or hybrid,” he said. “They decide to do a full migration and realize that holy crap, it’s hard. So they plant the flag and say they’re going hybrid.”

So what is cloud native?

The definition of cloud native computing has both technical and cultural dimensions. From a technology perspective, it’s building applications around cloud-specific principles.

One of those is software-defined infrastructure, which enables automation by moving as many hardware functions as possible into software. Another is horizontal scalability, in which processing power is increased by lashing commodity servers together rather than the costlier approach of adding more powerful processors, storage and memory to an existing server.
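To make the scale-out idea concrete, here is a minimal Python sketch, offered as an illustration only rather than code from any vendor quoted here: the workload is sharded across interchangeable workers, so capacity grows by adding workers instead of buying a bigger machine.

```python
# A minimal, hypothetical sketch of the scale-out idea. Locally the "workers" are
# processes; in a cloud native system they would be commodity nodes or containers
# added and removed on demand by an orchestrator such as Kubernetes.
from concurrent.futures import ProcessPoolExecutor

def process_chunk(chunk):
    # Stand-in for a stateless unit of work; any worker can handle any chunk.
    return sum(x * x for x in chunk)

def run(workload, num_workers):
    # Shard the workload evenly and fan it out across the worker pool.
    chunks = [workload[i::num_workers] for i in range(num_workers)]
    with ProcessPoolExecutor(max_workers=num_workers) as pool:
        return sum(pool.map(process_chunk, chunks))

if __name__ == "__main__":
    data = list(range(1_000_000))
    # Doubling capacity means doubling num_workers, not upgrading the server.
    print(run(data, num_workers=4))
```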

Instead of being vertically integrated, cloud native applications are built of loosely coupled components that “allow each component to choose the right technology stack and infrastructure to solve its particular task, and which can be built, tested and released independently,” said Anders Wallgren, vice president of technology strategy at CloudBees Inc. That approach reduces hard-coded dependencies and makes modification easier.

APIs are an essential element. They provide consistent and safe access to data and services without messing with the underlying code, a practice that can create crippling dependencies. APIs enable applications to be safely modified, extended and integrated with other software. Declarative APIs are even more cloudlike, enabling developers to specify a desired result rather than a sequence of steps for achieving it. It’s the equivalent of giving an address to a navigation system and leaving it to the computer to calculate the route rather than specifying turn-by-turn directions.
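As a rough illustration of the declarative idea, and not any particular product’s API, a reconciliation loop in Python might look like the sketch below: the caller only declares the result it wants, and the system works out the steps to get there.

```python
# Hypothetical sketch of a declarative, desired-state API. The caller states an
# outcome ("run five replicas"); the reconcile loop decides which steps to take.
# All names here (reconcile, start_replica, stop_replica) are illustrative.
def reconcile(desired_replicas, observed_replicas, start_replica, stop_replica):
    # Converge the observed state toward the declared desired state.
    if observed_replicas < desired_replicas:
        for _ in range(desired_replicas - observed_replicas):
            start_replica()
    elif observed_replicas > desired_replicas:
        for _ in range(observed_replicas - desired_replicas):
            stop_replica()

# The caller declares intent; it never issues turn-by-turn instructions.
running = []
reconcile(
    desired_replicas=5,
    observed_replicas=len(running),
    start_replica=lambda: running.append(object()),
    stop_replica=lambda: running.pop(),
)
print(len(running))  # -> 5
```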

Other cloud native technologies include NoSQL databases, which can act upon distributed structured and unstructured data stores; cloud object storage; and the agile development discipline of continuous integration and delivery.

Wikibon’s Miniman: Cloud native is about “optimizing for technology, organization and skills for the cloud.” Photo: SiliconANGLE

More debatable is whether cloud native applications must necessarily use containers and microservices, the latter being modular services that are invoked at runtime to perform specified tasks and then shut down. Both are included in the CNCF’s definition, but the principal benefit of containers is to enable application portability. Microservices, on the other hand, usually require a fundamental restructuring of an application or its components, a task that may have questionable return on investment.

Organizations that adopt containers for their portability benefits may actually sacrifice some of the benefits of cloud native architecture by making tradeoffs that make them less able to take advantage of underlying platforms, said Gartner’s Lowery.

“The African veldt is a lion’s natural environment; you can put it in a zoo but it’s not as conducive to the lion maximizing its potential,” he said, drawing an analogy. In the same way, abstracting services for the sake of portability has the tradeoff of limiting an organization’s ability to take advantage of all the features of a given cloud platform.

“Anytime you use a more generic approach, you’re giving up depth,” Lowery said. “Reducing lock-in will also reduce your cloud-nativeness.” The Gartner analyst advises organizations to think hard about how important portability is to them. Many will conclude that there’s more upside to using the native capabilities of a single platform, even if that locks them in. “Most CIOs are still trying to understand the tradeoffs,” he said.

Duckbill’s Quinn agreed that technology shouldn’t be a guiding star. “If your goal is to align to a Kubernetes strategy, then that’s a failure because you shouldn’t have a Kubernetes strategy,” he said.

Think different

Most experts say it’s a mistake to think of cloud native computing in technology terms. Rather, it’s a mindset shift based on the assumption that better technology is available in the cloud than inside a company’s data center. Think of cloud native principles as more of a business enabler than an infrastructure improvement.

“The primary benefit is to reduce the time between forming a business idea and delivering it into production,” said John Clingan, senior principal product manager, middleware, at Red Hat Inc., an IBM Corp. subsidiary. “It can also be critical when businesses are in a pitched battle to implement a concept and out-executing the competition through rapid, incremental change will determine who wins.”

“Cloud gives you the flexibility to pick and choose which parts of your application to abstract and which to spend your engineering resources on, giving companies the ability to focus on building on their core differentiators,” said Steven Mih, chief executive of Ahana Inc., a startup that provides services around the Presto distributed SQL query engine.

Duckbill’s Quinn: “If your goal is to align to a Kubernetes strategy, then that’s a failure because you shouldn’t have a Kubernetes strategy.” Photo: Duckbill Group

Among the benefits are that applications are “better designed to handle a higher frequency of changes, they’re better designed to handle horizontal scaling and failures and better able to handle business challenges that benefit from modularity of applications,” said Brian Gracely, senior director of product strategy at Red Hat.

The architecture works best when workloads are unpredictable or temporary. “If your application needs access to resources for only brief and bursty periods of time or if iterative development is a critical requirement, cloud native makes a lot of economic sense,” said Justin Kestelyn, vice president of product marketing at analytics startup Yellowbrick Data Inc.

Zymergen Inc., a company that invents new materials based on biotechnology, runs its compute-intensive scientific applications on a cloud native architecture to enjoy the benefits of rapid scalability, both up and down. The architecture has smoothly accommodated the company’s growth, said Chief Technology Officer Aaron Kimball.

“We can run scientific compute applications that might require 100 nodes for a week and only pay for them for that week,” he said. “Cloud native has also allowed us to grow from a simple internal app to a collection of hundreds without leasing any data center space.” The company manages all its infrastructure with only five engineers, “a level of efficiency that would be impossible with conventionally hosted systems,” Kimball said.

Not for everyone

That’s a best-case scenario for a cloud native architecture, but not every application is worth rebuilding. Vertically integrated and scaled applications that change little and run consistent workloads probably aren’t worth the effort to transform, Gracely said.

If a workload runs continuously and predictably across a defined infrastructure without much variation, said Zymergen’s Kimball, “it’s likely less expensive to buy the hardware yourself.” Added Yellowbrick’s Kestelyn: “Elasticity is great for workloads you rarely run, but for ones that run all the time, an always-on scenario will beat it on economics every time.”

In fact, one of the reasons so many legacy applications are being moved back from the cloud is that they never belonged there in the first place. “The merit of redeploying the same application meant for on-prem IT infrastructure is very limited,” said Vijay Raman, vice president of product management, cloud, at Information Builders Inc., which does business as ibi. “It is possible, yes, but the result is not a cloud native solution.”

Zymergen’s Kimball: “Using a cloud native architecture gave the scientific computing operation a level of efficiency that would be impossible with conventionally hosted systems.” Photo: Zymergen

As many early cloud adopters have learned, merely redeploying applications from local virtual machines to those running in the cloud has marginal value at best. “Unless you follow through to modernize and truly take advantage of being in the cloud, you will probably just end up spending a lot of time and money and have little improvement to show for it,” said CloudBees’ Wallgren.

Among the factors that can complicate restructuring efforts are hardware dependencies for functions such as processing and storage as well as reliance upon particular databases. Among the questions Gracely said IT organizations should ask before embarking upon a restructuring are whether source code and test suites still exist, whether application elements are modular or interdependent and if traffic patterns would incur financial penalties that would make the move impractical.

Step by step

Fortunately, cloud native isn’t an all-or-nothing proposition. Organizations can take a staged approach by breaking off pieces of an application and encapsulating them in containers or rewriting them as microservices. Although that requires investment, the flexibility upside is often worth the effort.

“Since all cloud native technologies are community-driven, there is great developer support available,” said Om Moolchandani, CTO at code-to-cloud security vendor Accurics Inc. “You can continue to remain in your choice of deployment: on-premise, cloud, hybrid or multicloud.”

It’s possible to be only partially cloud native “if some elements are more flexible while others are still tied to some legacy aspects,” said Red Hat’s Gracely. But he cautioned that partial cloud-nativeness “isn’t a great position to be in long-term, as you create this impedance mismatch which will likely drag you back to the lowest-common-denominator behaviors.”

Experts recommend that a partial migration should be a step in a broader migration plan, not a goal in itself. “The approach introduces complexity and risk, especially if components that run in the cloud need to communicate with components that run on-premise,” said CloudBees’ Wallgren. “Such approaches are best used sparingly as a transitional strategy.” Summed up Ahana’s Mih: “Just because it’s possible doesn’t mean you ought to do it.”

However, a hybrid strategy that takes advantage of what an organization already has as well as the unique capabilities of multiple cloud providers under a consistent control plane is “the authentic state-of-the-art today — not an inflexible and expensive cloud-only approach,” said Yellowbrick’s Kestelyn.

Separating the data layer from the applications and allowing database and data services to be scaled independently while using technologies like caching for acceleration is a good start, said Alex Miłowski, data platforms researcher and evangelist at Redis Labs Inc. “Over time you can evolve your applications to a set of services that coordinate with each other and, before you know it, you have a full-fledged microservices architecture,” he said.
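A minimal sketch of that cache-in-front-of-the-database pattern might look like the following, using the open-source redis-py client; the connection settings, key scheme and fetch_from_database() helper are placeholders rather than code from Redis Labs.

```python
# Hypothetical read-through cache: check Redis first, fall back to the database,
# then populate the cache with a time-to-live so stale entries expire on their own.
import json
import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_from_database(user_id):
    # Placeholder for a query against the system-of-record database.
    return {"id": user_id, "name": "example"}

def get_user(user_id, ttl_seconds=300):
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit: the database is never touched
    record = fetch_from_database(user_id)    # cache miss: read through to the database
    cache.setex(key, ttl_seconds, json.dumps(record))
    return record
```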

Developers can also “wrap” legacy services in standardized APIs to enable other applications to talk to them consistently. Eric Johnson, executive vice president of engineering at DevOps lifecycle management company GitLab Inc., recalled that one former employer encapsulated a desktop software suite for making photo mosaics in a set of APIs written in Python so that it could communicate with microservices running in containers.

“It took a weekend of effort for two engineers because we had already made the upfront investment in testing, deploying and running cloud native applications,” he said. “We found cloud native technologies were extremely powerful, even under such unusual starting conditions.”
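The article doesn’t reproduce the Python wrapper Johnson describes, but a hypothetical sketch of the pattern, using Flask and invoking the legacy desktop tool as a command-line subprocess, might look like this; the endpoint and the “legacy-mosaic-tool” command are illustrative placeholders.

```python
# Hypothetical API "wrapper" around a legacy command-line tool, so containerized
# microservices can call it over HTTP without knowing how it is installed or run.
# The route and the "legacy-mosaic-tool" command are made-up placeholders.
import subprocess
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/mosaics", methods=["POST"])
def create_mosaic():
    params = request.get_json(force=True)
    result = subprocess.run(
        ["legacy-mosaic-tool", "--input", params["source_image"]],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return jsonify({"error": result.stderr}), 500
    return jsonify({"output": result.stdout}), 201

if __name__ == "__main__":
    app.run(port=8080)
```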

Rebuilding an application around microservices is more complicated and requires a thorough architectural review, said Colin Zima, head of Looker analytics and strategy at Google Cloud. “It’s a beautiful ideal, but for every perfect microservices-driven application, there are Rube Goldberg machines” that are probably better left alone, he said.

One thing everyone agrees on is that being cloud native should never be a goal in and of itself. There’s nothing wrong with continuing to run purpose-built legacy applications that do what they’re supposed to do. The top question to ask before rebuilding an application around cloud native principles is: “Does it make sense for the business?” said Gracely. “The fact that it’s cool technology should be reason No. 10,963.”

There’s also broad agreement that cloud native is as much a way of thinking as it is a development tactic. “It means doing things differently from the way legacy development would be done,” said Miniman.

Rice University’s Jelínková sees cloud native principles as flipping the traditional paradigm of building applications for the business. Traditional waterfall development is to “describe the business process, document it and code it,” she said. “In cloud, you’re very thoughtful about not trying to change the process but adapting to the process as delivered. It’s not about building what the business wants but how we together consume innovative practices.”

That means discarding assumptions about control and trusting cloud providers to deliver the services, scalability and reliability that they promise. “When all of your infrastructure runs in the cloud, the provider isn’t your vendor,” Quinn said. “They’re your partner.”

Photo: blende12/Pixabay
