Observability is entering a new phase. As cloud-native architectures scale and AI workloads intensify, enterprises are being forced to rethink how they collect, manage and pay for telemetry data — a shift that came into sharp focus during last year’s Open Source Summit NA and has continued to accelerate since.
At the June event, engineers, operators and platform leaders kept returning to the same problem from different angles: Modern systems generate more logs, metrics, traces and events than teams can realistically store, analyze or afford. The result is an uncomfortable paradox — unprecedented visibility paired with rising costs and diminishing returns.
“As AI workloads scale, the old model of collecting everything and figuring it out later doesn’t work financially or operationally anymore,” said Paul Nashawaty, practice lead and principal analyst at theCUBE Research. “Survey data shows that most enterprises now rely on double-digit observability toolchains. Even so, many still struggle with alert fatigue and slower-than-real-time awareness, which highlights the gap between telemetry volume and actionable insight. That’s why the shift toward discipline, cost control and AI-readiness was unmistakable at OSSNA.”
What surfaced most clearly at OSSNA was not a call for more observability tools, but for better signal discipline. Enterprises want telemetry that informs decisions, controls cost and increasingly supports AI-driven operations — not systems that monetize data volume for its own sake.
Chronosphere Inc. emerged at the center of that conversation, using the event to introduce its Logs 2.0 architecture and argue for a shift away from volume-based observability economics. Since June, both the company’s trajectory and the broader market response suggest that this shift is moving quickly from theory to practice.
This feature is part of SiliconANGLE Media’s ongoing market insights coverage, examining how infrastructure, platforms and operating models are evolving as enterprises adapt to cloud-native architectures, AI-driven workloads and mounting cost pressures across the IT stack. (* Disclosure below.)
Telemetry growth across cloud-native environments has become relentless. Chronosphere and others point to enterprise log data growth exceeding 250% year over year, while many organizations estimate that roughly 70% of their observability spend goes toward storing logs that are never queried. At the same time, tool sprawl continues to expand, with surveys showing that most enterprises rely on anywhere from six to 15 different observability tools.
Legacy pricing models exacerbate the problem. Many platforms continue to charge based on data ingested, creating incentives to collect everything rather than prioritize what matters. In Kubernetes-first environments — where services are ephemeral and infrastructure constantly shifts — that model becomes increasingly untenable.
“The secret in the industry is that … all of the existing solutions are motivated to get people to produce as much data as possible,” said Martin Mao, co-founder and chief executive officer of Chronosphere, during an interview with theCUBE. “What we’re doing differently with logs is that we actually provide the ability to see what data is useful, what data is useless and help you optimize … so you only keep and pay for the valuable data.”
Mao’s framing reflects a broader reckoning playing out across the observability market. As telemetry volumes balloon and cloud costs come under sharper scrutiny, enterprises are being forced to question long-standing assumptions about what data they actually need to retain and analyze. The conversation is shifting away from raw ingestion and toward economic discipline — measuring signal value, enforcing governance and aligning observability spend with real operational and business outcomes. That recalibration is increasingly shaping how buyers evaluate platforms, according to industry analysts tracking the space closely.
Widespread digital modernization is driving open-source adoption, which in turn demands more sophisticated observability tools, according to Nashawaty.
“That urgency is why vendor innovations like Chronosphere’s Logs 2.0, which shift teams from hoarding raw telemetry to keeping only high-value signals, are resonating so strongly within the open-source community,” he said.
Chronosphere’s introduction of Logs 2.0 at OSSNA was aimed squarely at teams constrained by logging and metrics tools that lag behind modern microservices-based environments. Rather than treating logs as an add-on, Logs 2.0 integrates them directly into the same platform that handles metrics, traces and events.
The architecture rests on three pillars. First, logs are ingested natively and correlated with other telemetry types in a shared backend and user interface. Second, usage analytics quantify which logs are actually referenced in dashboards, alerts and investigations. Third, governance recommendations guide teams toward sampling rules, log-to-metric conversion or archival strategies based on real usage patterns.
Together, these capabilities aim to replace guesswork with measurement. Instead of assuming which telemetry might be useful someday, teams can see what drives action today — and adjust retention and cost accordingly. Chronosphere reports that customers using these controls have reduced logging costs by more than 50% while maintaining operational visibility.
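The usage-measurement approach described above can be illustrated with a brief sketch. This is a hypothetical model of how a team might score log streams by observed usage and derive retention recommendations; it is not Chronosphere's actual API, and all names and thresholds here are illustrative assumptions.

```python
# Hypothetical sketch of usage-based log governance: classify each log
# stream by how often it is actually referenced relative to its ingest
# volume, then recommend retain/sample/archive. Names and thresholds
# are illustrative, not drawn from any vendor's implementation.
from dataclasses import dataclass

@dataclass
class StreamStats:
    name: str
    gb_per_day: float   # ingest volume for the stream
    query_refs: int     # times referenced in dashboards, alerts or queries

def recommend(stats: StreamStats) -> str:
    """Suggest a retention policy from observed usage, not guesswork."""
    if stats.query_refs == 0:
        return "archive"        # stored but never queried: move to cold storage
    refs_per_gb = stats.query_refs / stats.gb_per_day
    if refs_per_gb < 1.0:
        return "sample"         # low signal density: sample or convert to metrics
    return "retain"             # high-value stream: keep in hot storage

streams = [
    StreamStats("payments-api", gb_per_day=2.0, query_refs=40),
    StreamStats("debug-verbose", gb_per_day=50.0, query_refs=0),
    StreamStats("batch-jobs", gb_per_day=10.0, query_refs=3),
]
for s in streams:
    print(s.name, recommend(s))
```

In this toy model, the high-volume but never-queried "debug-verbose" stream is flagged for archival, which mirrors the pattern the article describes: roughly 70% of observability spend going to logs that are never queried.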
“It’s about getting that value measurement out there and showing that it’s an important component, versus just collecting everything and figuring out how to deal with it later when the bill comes,” Bill Hineline, field chief technology officer at Chronosphere, told theCUBE.
Since OSSNA, the observability market has continued to consolidate around fewer platforms, tighter cost controls and deeper integration with AI-driven operations. Chronosphere has been recognized in the 2025 Gartner Magic Quadrant for Observability Platforms, reflecting growing enterprise adoption. Gartner's market projections show the observability platform market expanding rapidly, with forecasts that it will surpass $14 billion by 2028, underscoring the increasing role of automation, analytics and intelligent telemetry across complex environments.
At the ecosystem level, Cloud Native Computing Foundation research and project activity increasingly point to a shift toward automation, intelligence and cost-aware observability, as end users look to rein in telemetry sprawl while maintaining visibility across complex Kubernetes and cloud-native environments.
Across vendor roadmaps, CNCF project momentum and enterprise buying patterns, the signals clearly point toward observability becoming more intelligent, automated and cost-disciplined, according to Sam Weston, head of research operations and industry analyst for theCUBE Research.
“Recent survey data shows that more than half of enterprises now rely on 11 to 20 observability tools, yet nearly a quarter still report that less than half of their alerts represent true incidents, which shows a widening gap between data volume and actionable insight,” she said.
The most consequential development, however, came in early 2026, when Palo Alto Networks completed its acquisition of Chronosphere, Nashawaty added.
“Palo Alto Networks’ acquisition of Chronosphere is less about portfolio expansion and more about market validation since it confirms that observability has become foundational infrastructure for AI-driven operations, security and platform engineering,” he said. “The era of unconstrained telemetry growth is ending, and the next phase will be defined by platforms that can prove value, control cost and operate at enterprise scale.”
Palo Alto Networks’ acquisition of Chronosphere marks a turning point in how observability is positioned within enterprise infrastructure stacks. Rather than remaining a standalone operational function, observability is being pulled into a broader platform strategy that spans security, AI and real-time decision-making.
According to Palo Alto Networks, the deal is intended to unify observability and security for the AI era, enabling telemetry data to feed directly into autonomous detection, response and optimization workflows. In practical terms, that reflects a growing recognition that logs and metrics are no longer just diagnostic artifacts — they are a shared data layer for resilience, risk management and AI-driven operations, according to Nashawaty.
“Palo Alto Networks is acknowledging that observability has crossed a threshold from tooling to platform infrastructure,” he said. “Survey data shows that more than 93% of organizations now track SLOs and increasingly measure success using business impact and customer experience metrics, not just system health, which shows that telemetry is now central to security, AI operations and real-time decision-making. For buyers, this raises the bar since observability platforms must now integrate into broader security and AI strategies instead of just monitoring systems in isolation.”
The acquisition also reshapes competitive dynamics. Established observability vendors, such as Datadog, Splunk and New Relic, now face pressure not only from next-generation specialists, but from large platform vendors embedding observability alongside security and AI operations. As observability spend increasingly overlaps with security and infrastructure budgets, ownership and buying decisions are shifting as well.
As observability platforms mature, their role is extending beyond troubleshooting and root-cause analysis. Emerging use cases include capacity forecasting and cost planning, business-IT correlation and AI-driven recommendations that suggest architectural changes based on cross-telemetry patterns, both Nashawaty and Weston explained.
Open standards such as OpenTelemetry play a critical role in this evolution, keeping data structured and interoperable rather than locked into proprietary formats. That openness enables observability data to move more freely into analytics pipelines, security workflows and AI systems — reinforcing its position as a foundational intelligence layer.
The next phase of observability will be shaped by AI-native workloads, agent-based systems and continued telemetry growth. In that environment, success will depend less on how much data platforms can ingest and more on how effectively they help teams identify, prioritize and act on the signals that matter.
Chronosphere’s trajectory — and its acquisition by Palo Alto Networks — illustrates how quickly observability is being elevated from an operational tool to a strategic asset. In a market flooded with data, the advantage no longer lies in collecting more, but in knowing what to keep, what to discard and how to turn telemetry into action, according to Weston.
“Observability is being redefined by AI-native workloads and automation, and that redefinition favors precision over volume,” she said. “The platforms that succeed will be those that help teams understand which signals drive outcomes and which simply drive cost. In that sense, observability is moving from instrumentation to intelligence.”
(* Disclosure: TheCUBE is a paid media partner for the Open Source Summit. Sponsors of theCUBE’s event coverage do not have editorial control over content on theCUBE or SiliconANGLE.)