UPDATED 12:05 EDT / MARCH 27 2026


From cloud native to AI native: The role of context density

As with the rest of the technology landscape, artificial intelligence – and in particular, agentic AI – came to dominate the conversations at the Cloud Native Computing Foundation’s recent flagship event, KubeCon + CloudNativeCon Europe 2026.

In fact, AI is so pervasive that cloud-native computing is giving way to AI-native computing: the application of cloud-native principles to AI workloads, agentic AI in particular.

There were two sides to the AI-native story at this software infrastructure-focused conference: As organizations leverage Kubernetes to put agentic AI into operation, they are looking to implement infrastructure for running agentic workflows at scale, and they are leveraging AI agents to facilitate the deployment of flexible, dynamic infrastructure generally.

Peel back the layers of the AI-native abstraction, however, and you’ll find at the heart of both sides of the agentic AI story is a single concept: context. Context is the central facilitator of agentic behavior.

Understanding what we mean by context, how it relates to the metadata that have always been central to cloud native computing, and how to leverage context to build successful agentic applications are the new imperatives for architects, operators and developers as we move from the cloud-native to the AI-native computing era.

From metadata to context

One of the keys to Kubernetes’ success is its declarative nature: Metadata drives the behavior of its components.

To configure and deploy Kubernetes and the applications running on it, operators create and manage various forms of metadata. Helm charts, manifests, infrastructure-as-code or IaC scripts, telemetry and all the various forms of configuration information are all metadata.

All these metadata potentially become context for AI agents. Operators feed agents context metadata to prompt them, configure them, govern them and evaluate their behavior.
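To make the idea concrete, here is a minimal sketch of how Kubernetes metadata might be distilled into terse context lines for an agent prompt. The manifest dictionary and the helper function are hypothetical, not any vendor's API:

```python
# Illustrative sketch: distilling a Kubernetes manifest into terse,
# unambiguous context lines for an agent prompt. The build_agent_context
# helper is hypothetical, not a specific product's API.

def build_agent_context(manifest: dict) -> str:
    """Reduce a manifest to one explicit fact per line."""
    meta = manifest.get("metadata", {})
    spec = manifest.get("spec", {})
    lines = [
        f"kind={manifest.get('kind')}",
        f"name={meta.get('name')}",
        f"namespace={meta.get('namespace', 'default')}",
        f"replicas={spec.get('replicas', 1)}",
    ]
    return "\n".join(lines)

deployment = {
    "kind": "Deployment",
    "metadata": {"name": "checkout", "namespace": "prod"},
    "spec": {"replicas": 3},
}

print(build_agent_context(deployment))
# kind=Deployment
# name=checkout
# namespace=prod
# replicas=3
```

The point of the sketch is the shape of the output: one explicit fact per line, nothing left for the agent to infer.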

According to Patrick Debois, developer relations at Tessl AI Ltd. and adviser to the AI native developer community, “context is the new code.” The software delivery lifecycle, or SDLC, becomes the CDLC – the context delivery lifecycle.

The cloud-native world may be replete with metadata, but not all metadata make for good context. Provide poor, ambiguous context to agents and they are likely to misbehave.

Understanding what makes for good context and deploying tools for creating and managing such context, therefore, become critical AI-native priorities.

Understanding context density

What makes for “good” context depends upon whether humans or agents are working with it. To understand this distinction, it’s important first to understand semantic density.

Semantic density – how much meaning a person or a system can cram into a given number of words – is an essential consideration in the agentic AI era.

In fact, in his article from Feb. 27, 2026, John Furrier makes an important point: “In the [agentic] orchestration era, software defensibility is no longer about UI polish or workflow checklists. It’s about semantic density.”

Humans deal well with high semantic density – where a few words can convey many layers of meaning. AI – in particular, AI agents – struggle with high semantic density, especially when dealing with context – in other words, context density.

While semantic density measures the internal complexity of meaning within a message, context density measures the meaningful content around a message.

To operate properly, agents require precise, concise context – in other words, metadata with low context density. Humans, on the other hand, work best when communicating with high context density – an essential and defensible human advantage over AI, as I explained in a recent article.

Understanding AI-native infrastructure through the lens of context density

As I wandered the floor of KubeCon interviewing vendors, I looked for how they were dealing with the metadata essential to cloud-native computing – and how well their tools would support metadata with low context density.

In the pre-AI cloud-native era, context density wasn’t an important consideration, since humans would deal with situations where metadata had high context density. Now that we are delegating so many tasks to AI agents, however, it’s essential for tools to ensure that agents have context with low density.

This requirement, however, was never explicit among the vendors at KubeCon. Context density is not yet a focus of discussion, despite its importance to the transition from cloud-native to AI-native computing.

As a result, I had to read between the lines to uncover how well the vendors on my list dealt with the issue of context.

Here are my takes:

Treating metadata as context

There are various forms of metadata in the Kubernetes world, and any of them might serve as context for AI agents.

Telemetry is one example. The cloud-native world has made great strides in standardizing telemetry formats with the OpenTelemetry, or OTel, standard – but even OTel-formatted telemetry can have issues that limit the ability for AI agents to leverage it as context.

Many organizations don’t realize that there are problems with their telemetry, and even operators who recognize that such problems exist don’t know how to find them.

To solve this problem, OllyGarden Inc. fixes OTel-formatted telemetry on the fly.

OllyGarden automatically resolves issues with OTel telemetry, including excess or redundant telemetry and provenance issues such as labeling inconsistencies, and it finds and masks sensitive data. All of these fixes help make telemetry suitable for providing context for agentic workflows.
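A toy sketch illustrates the kinds of fixes described above. This is not OllyGarden’s implementation; the sensitive-key list and label aliases are invented for the example:

```python
# Hypothetical sketch of telemetry cleanup: normalize inconsistent labels
# and mask sensitive values in OTel-style span attributes. The key names
# and aliases below are assumptions for illustration only.

SENSITIVE_KEYS = {"user.email", "credit_card.number"}        # assumed
LABEL_ALIASES = {"svc": "service.name", "service": "service.name"}  # assumed

def clean_attributes(attrs: dict) -> dict:
    cleaned = {}
    for key, value in attrs.items():
        key = LABEL_ALIASES.get(key, key)   # normalize inconsistent labels
        if key in SENSITIVE_KEYS:
            value = "***"                   # mask sensitive data
        cleaned[key] = value
    return cleaned

span_attrs = {"svc": "checkout", "user.email": "a@example.com", "http.status_code": 200}
print(clean_attributes(span_attrs))
# {'service.name': 'checkout', 'user.email': '***', 'http.status_code': 200}
```

Cleanup of this kind is what turns raw telemetry into context an agent can consume without tripping over duplicate or leaky attributes.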

IaC scripts (typically in Terraform) also represent important configuration metadata that can drive agentic behavior, as long as they have low context density.

I spoke to two vendors that address issues with IaC code. Infralight Ltd., which does business as Firefly, leverages AI agents to automate cloud configuration by generating Terraform code, enabling operators to instantly rebuild environments during outages and cyberattacks with minimal downtime.

Firefly aggregates configuration metadata across cloud providers, creating a configuration inventory that provides a single source of truth across environments. It then uses this inventory to generate concise, precise Terraform code – in other words, IaC code with low context density.
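The aggregation step can be sketched as merging per-cloud resource lists into one inventory keyed by a globally unique ID. This is an illustrative sketch, not Firefly’s product; the resource shapes are assumed:

```python
# Illustrative sketch (not Firefly's implementation): aggregating
# configuration metadata from multiple clouds into a single inventory,
# keyed by "cloud/resource-id" so every resource has one source of truth.

def build_inventory(per_cloud: dict) -> dict:
    inventory = {}
    for cloud, resources in per_cloud.items():
        for res in resources:
            # Tag each record with its cloud of origin for traceability.
            inventory[f"{cloud}/{res['id']}"] = {**res, "cloud": cloud}
    return inventory

inventory = build_inventory({
    "aws": [{"id": "vpc-1", "type": "vpc"}],
    "gcp": [{"id": "net-1", "type": "network"}],
})
print(sorted(inventory))
# ['aws/vpc-1', 'gcp/net-1']
```

A unified inventory like this is what lets downstream code generation stay precise: every resource is named exactly once.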

Terramate GmbH also deals with IaC by abstracting Terraform code to provide greater reusability and management of IaC-configured infrastructure at scale. In essence, Terramate shifts IaC to the left, providing curated, reviewed IaC code that supports agentic control within explicit guardrails.

Scaling stateful agentic workflows

AI agents can execute individual tasks, but the real power of agentic AI is when agents work together to implement workflows.

Workflows are inherently stateful, which means that the agents participating in the workflow must keep track of everything that is going on within the workflow.

Kubernetes, in contrast, is best suited for stateless applications. Horizontal scalability is relatively straightforward with such applications, but keeping track of workflows is harder to scale – especially when the behavior of AI agents is non-deterministic.
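One common way to square stateful workflows with stateless replicas is to checkpoint workflow state externally so any replica can resume. A minimal sketch, under the assumption of an in-memory stand-in for what would really be a durable store:

```python
# Minimal sketch (assumed design, not any vendor's product): checkpoint
# agentic-workflow state outside the replica so a stateless replica can
# pick up where another left off. A real system would use a durable
# store rather than this in-memory dict.

CHECKPOINTS = {}  # stand-in for a durable store

def run_step(workflow_id: str, step: str, result: str) -> dict:
    """Record a completed step, returning the accumulated workflow state."""
    state = CHECKPOINTS.setdefault(workflow_id, {"completed": []})
    state["completed"].append({"step": step, "result": result})
    return state

def resume(workflow_id: str) -> list:
    """Any replica can ask which steps already ran and continue from there."""
    state = CHECKPOINTS.get(workflow_id, {"completed": []})
    return [entry["step"] for entry in state["completed"]]

run_step("wf-1", "fetch-context", "ok")
run_step("wf-1", "call-model", "ok")
print(resume("wf-1"))
# ['fetch-context', 'call-model']
```

Because agent behavior is nondeterministic, the checkpoint log also doubles as an audit trail of what the workflow actually did.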

I spoke to three vendors that are tackling this scalability issue. Architect from Loophole Labs Inc. provides scalability for Kubernetes infrastructure to support long-running and stateful applications, including agentic applications. Architect also supports the higher reliability requirements that agentic workflows have as compared to the stateless applications that Kubernetes specializes in.

Diagrid Inc. provides a platform for resilient agentic workflows, guaranteeing that such workflows will run to completion. The platform provides security and resilience for Model Context Protocol, or MCP, servers and MCP communications between servers and endpoints, thus enabling the communication of low-density context among agents, large language models and humans.

Kedify Inc. offers autoscaling for Kubernetes clusters, including clusters running on graphics processing units. Instead of leveraging memory and CPU/GPU capacity metrics to kick off autoscaling (which can be too late to avoid failure), Kedify leverages leading signals such as traffic metrics and database writes that indicate a need for an autoscaling event.
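The decision logic behind signal-based scaling can be sketched in a few lines. This is a hypothetical illustration of the approach, not Kedify’s implementation; the signal names and thresholds are invented:

```python
# Hypothetical sketch of signal-based (rather than CPU-based) scale-out:
# trigger before saturation using leading indicators such as request
# rate or database write rate. Signal names and limits are assumptions.

def should_scale_out(signals: dict, limits: dict) -> bool:
    """Scale when any leading signal exceeds its configured threshold."""
    return any(signals.get(name, 0) > limit for name, limit in limits.items())

limits = {"requests_per_sec": 500, "db_writes_per_sec": 200}  # assumed thresholds

print(should_scale_out({"requests_per_sec": 650, "db_writes_per_sec": 120}, limits))
# True
print(should_scale_out({"requests_per_sec": 100}, limits))
# False
```

The design choice is to act on indicators that lead resource exhaustion, rather than on CPU or memory metrics that only confirm it after the fact.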

Since agentic workflows are stateful, it’s important to route prompts and other context-based interactions to the appropriate replica instance. Kedify automatically handles the movement of context metadata among replicas to maintain agentic workflow state.
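The routing requirement above can be sketched with simple hash-based affinity: hash the workflow ID so every interaction for the same workflow lands on the same replica. This is an illustration of the technique, not Kedify’s implementation:

```python
# Illustrative sketch (not a specific vendor's implementation): route a
# stateful workflow to a stable replica by hashing its workflow ID, so
# prompts for the same workflow always land where its context lives.

import hashlib

def route(workflow_id: str, replicas: list) -> str:
    digest = hashlib.sha256(workflow_id.encode()).hexdigest()
    return replicas[int(digest, 16) % len(replicas)]

replicas = ["replica-0", "replica-1", "replica-2"]
first = route("wf-42", replicas)
# The same workflow ID always maps to the same replica:
assert route("wf-42", replicas) == first
```

Note that plain modulo hashing reshuffles assignments when the replica count changes; production systems typically use consistent hashing, or move the context itself, to survive scaling events.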

Controlling AI-native infrastructure at scale

One of the most important advantages of Kubernetes’ declarative, metadata-driven approach is how it supports centralized control via control planes.

As we move from the cloud-native to the AI-native world, control planes are just as important – only now for the more complex, nondeterministic behavior of AI agents and agentic workflows.

Upbound Inc. offers a control plane for cloud-native and AI-native infrastructure. This control plane handles nondeterministic operations scenarios – say, when Upbound’s AI agents make ops decisions based on telemetry inputs. The control plane also facilitates the management of infrastructure for AI-based workloads.

Cue Labs AG offers a configuration-centric control plane for Kubernetes. Kubernetes itself offers basic configuration controls, but Cue Labs handles complex configurations including multicluster, multicloud and hybrid configuration scenarios.

Cue Labs enables operators to control configurations proactively to avoid configuration-related downtime and helps them align policies to configurations – even when the people managing policies don’t communicate with the operators responsible for the configurations. The company provides a succinct format for configuration metadata that is both more human-readable and lower in context density, as agents require, giving operators the control they need over the most complex deployments of nondeterministic agentic applications.

Context is the new oil

Some people may argue that context is just another word for metadata – and we’ve been dealing with metadata for years. So what’s really new here?

The answer, of course, is agentic AI, and the role context plays within it and for generative AI generally.

Prompts themselves, after all, are either mostly or entirely context – the remainder consisting of the data themselves. Any instruction you put into a prompt, for example, is context.

Humans may prompt AI agents as well, of course, but the more important story here is how agents behave – either by requesting data, interacting with APIs, or communicating with other agents.

All such behavior requires precise, concise context for agents to behave properly and accomplish the tasks we set out for them.

As enterprises build out their AI-native infrastructure, therefore, it is becoming increasingly important to leverage tools and platforms that support the low-density context requirements of agents as well as AI at large.

Jason Bloomberg is founder and managing director of Intellyx, which advises business leaders and technology vendors on their digital transformation strategies. He wrote this article for SiliconANGLE.

Photo: CNCF
