SECURITY
Application observability startup groundcover Ltd. today announced a major expansion of its AI Observability capability, adding native support for agentic AI systems fully compatible with Google Vertex AI.
The update allows users to trace every large language model interaction and gives engineering and platform teams the ability to add observability to production environments as quickly as LLM services are incorporated into modern applications.
The release addresses a growing visibility gap: as organizations rapidly integrate LLMs into production systems, they’re encountering blind spots that existing tooling can’t cover. Groundcover argues that traditional observability tools were designed for deterministic software, not systems where dynamic prompts drive outputs. As a result, teams often struggle to understand how AI-powered features behave in real-world environments, including what inputs are driving outcomes, how responses vary and how usage impacts cost.
The expansion of groundcover’s AI Observability capability addresses this challenge by taking a different approach to observability. It captures the full context of LLM interactions and traces how outputs are generated across increasingly complex, multistep systems.
“Our customers made it clear that their LLM calls have been invisible to the teams that manage the observability of their production systems [and that] they’ve been searching for a way to systematically understand their LLM calls by prompts, responses and cost,” said Vice President of Product Orr Benjamin. “They deployed groundcover for its traditional observability features, and we built AI Observability as a direct response to their demands for scale and mission-critical workload monitoring.”
The expansion introduces agent trace visibility: groundcover can now surface complete agent execution traces, covering every model call, every tool invocation with its arguments and results, and the reasoning path connecting them.
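To make the idea concrete, here is a minimal sketch of what such an execution trace might look like as a data structure: model calls and tool invocations (with their arguments and results) nested under parent spans, forming a reasoning path. The `Span` type and field names are hypothetical illustrations, not groundcover's actual API.

```python
# Illustrative sketch of an agent execution trace: nested spans for model
# calls and tool invocations, flattened into a reasoning path.
from dataclasses import dataclass, field

@dataclass
class Span:
    name: str                       # e.g. "llm.call" or "tool.search"
    kind: str                       # "model" or "tool"
    attributes: dict                # arguments, results, token counts, etc.
    children: list = field(default_factory=list)

def reasoning_path(span: Span) -> list:
    """Flatten a trace into the ordered list of steps that produced the output."""
    path = [f"{span.kind}:{span.name}"]
    for child in span.children:
        path.extend(reasoning_path(child))
    return path

# A hypothetical two-step agent run: a tool call followed by a model call.
trace = Span("agent.run", "model", {"model": "example-model"}, [
    Span("tool.search", "tool", {"args": {"q": "order status"}, "result": "shipped"}),
    Span("llm.call", "model", {"prompt_tokens": 512, "output_tokens": 64}),
])
print(reasoning_path(trace))
# prints ['model:agent.run', 'tool:tool.search', 'model:llm.call']
```

In a real system each span would also carry timestamps and parent identifiers, as in OpenTelemetry-style tracing; the sketch only shows the call/tool/reasoning-path shape the article describes.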
A new cost attribution feature, which accounts for prompt caching, allows token costs to be tracked at the span level. It handles most edge cases in the pricing complexity of modern LLM application programming interfaces, correctly distinguishing between regular input tokens, cache creation tokens and cache read tokens. Using the feature, teams can see what individual agent runs and sessions actually cost.
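The distinction matters because the three token classes are typically priced very differently. The following sketch shows span-level cost attribution under a hypothetical price table; the rates are invented for illustration and are not groundcover's or any model vendor's actual pricing.

```python
# Illustrative span-level LLM cost attribution distinguishing regular input
# tokens, cache creation tokens and cache read tokens.
from dataclasses import dataclass

@dataclass
class SpanUsage:
    input_tokens: int           # regular (uncached) prompt tokens
    cache_creation_tokens: int  # tokens written to the prompt cache
    cache_read_tokens: int      # tokens served from the prompt cache
    output_tokens: int

# Hypothetical price table, USD per million tokens.
PRICES = {
    "input": 3.00,
    "cache_creation": 3.75,  # cache writes often cost more than plain input
    "cache_read": 0.30,      # cache reads are typically heavily discounted
    "output": 15.00,
}

def span_cost(u: SpanUsage) -> float:
    """Cost of a single span; summing over spans gives run- or session-level cost."""
    return (
        u.input_tokens * PRICES["input"]
        + u.cache_creation_tokens * PRICES["cache_creation"]
        + u.cache_read_tokens * PRICES["cache_read"]
        + u.output_tokens * PRICES["output"]
    ) / 1_000_000

spans = [
    SpanUsage(1200, 8000, 0, 400),  # first call: a large prompt prefix is cached
    SpanUsage(150, 0, 8000, 300),   # later call: the cached prefix is read back
]
session_cost = sum(span_cost(s) for s in spans)
print(f"${session_cost:.5f}")  # prints $0.04695
```

Note how the second span's 8,000 cache-read tokens cost a fraction of what they would as regular input; treating all prompt tokens uniformly would badly overstate the session cost.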
The final new feature, Google Vertex AI support, extends groundcover’s automatic capture to teams building on Google Cloud’s managed AI infrastructure, with zero instrumentation and all observability data remaining inside the customer’s own environment.
AI Observability is now generally available and automatically deployed to all groundcover customers. The new release is also being demonstrated at Google Cloud Next, April 22-24.