BIG DATA
Google Cloud is turning the traditional enterprise data platform on its head, unveiling the Agentic Data Cloud infrastructure platform that aims to act as a kind of central nerve center for the era of artificial intelligence agents.
In a blog post, Andi Gutmans, Google’s vice president and general manager of Data Cloud, explains that existing data infrastructures were designed to act as “static repositories,” where information just sits until it’s asked a question by a human. But in the era of AI, this kind of “human-scale” infrastructure is no longer fit for purpose. To that end, Google has designed the Agentic Data Cloud to work as a “system of action” that evolves data infrastructure into a dynamic reasoning engine that enables autonomous agents to get to work, rather than just think about the problems they’re trying to solve.
Announced at Google Cloud Next 2026 this week in Las Vegas, the Agentic Data Cloud will provide the connective tissue AI agents need to work across the enterprise without hindrance, and it’s built on three main pillars: a universal context engine that aims to prevent agents from “hallucinating,” a suite of agentic-first developer tools, and the cross-cloud lakehouse platform that unifies data from across any cloud environment.
According to Gutmans, one of the biggest hurdles with deploying AI agents today is the so-called “context gap.” If an agent doesn’t understand a company’s specific definition of what something like “gross margin” actually means, it’s probably going to end up making expensive mistakes.
To fix this, Google has evolved its Dataplex Universal Catalog into the Knowledge Catalog, which is a kind of map of business meaning that’s meant to inform AI agents of the peculiarities of the organization they serve. The catalog scans all of a company’s documents, including its accounts, PDFs, PowerPoint presentations and images, extracting entities and studying the relationships within them to build a navigable schema that agents can use.
Also helping with this is BigQuery Measures and a new LookML Agent that will help to bake business logic into the entire Agentic Data Cloud stack. By aggregating all of these metrics into a single, governed data foundation, Google says that when an AI agent queries company data, it will use the same “source of truth” each time.
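Conceptually, that "single source of truth" works like a governed metric registry: every agent resolves one shared definition instead of re-deriving the formula itself. A minimal sketch of the idea, with an illustrative metric name and formula (not Google's actual API):

```python
# Hypothetical governed metric registry. Agents look definitions up here
# rather than hard-coding their own version of "gross margin".
METRICS = {
    "gross_margin": lambda revenue, cogs: (revenue - cogs) / revenue,
}

def resolve_metric(name: str, **inputs) -> float:
    """Return the metric value using the single governed definition."""
    return METRICS[name](**inputs)

# Two different agents asking the same question get the same answer.
print(resolve_metric("gross_margin", revenue=500_000, cogs=350_000))  # 0.3
```

The point of the pattern is that changing the definition in one place changes it for every agent, which is what a governed data foundation buys over per-agent prompt engineering.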
This new context engine is already powering Google’s new Deep Research Agent, enabling it to perform multistep reasoning across web assets and internal documents to create complex research reports that would take human analysts weeks.
The lives of developers are being made easier, too. The company has announced a new Google Cloud Data Agents Kit that brings “agentic skills” directly into the tools developers already use, including platforms such as Claude Code and VS Code. With the Data Agents Kit, developer environments can autonomously orchestrate outcomes, including selecting frameworks such as Apache Spark or dbt, while generating production-ready code based on Google’s best practices.
Three new, highly specialized AI agents were also announced to make life easier for developers. They include a new Data Engineering agent for building and governing complex data transformations, a Data Science agent for automating AI model lifecycles across BigQuery and Spark, and a Database Observability agent that acts like a “guardian,” tasked with diagnosing and repairing data infrastructure issues.
Gutmans said Google has embraced the Model Context Protocol to ensure these agents play nicely with one another. “[It] provides a secure, universal interface that allows any agent to safely discover and use your data assets across our core engines, including: BigQuery, Spanner (Preview), AlloyDB, Cloud SQL (GA) and Looker MCP (Preview),” he said. “MCP for Google Cloud uses our security stack, governing agent interactions based on your existing IAM policies, VPC Service Controls, and data residency requirements.”
Finally, Google is trying to address the problem of AI agent “gravity.” This refers to how agents lose their autonomy when they’re slowed down by cross-cloud latency or prevented from accessing data trapped in other cloud platforms.
Gutmans introduced the new “cross-cloud Lakehouse,” which aims to provide a borderless data environment for AI agents. It integrates Google’s Cross-Cloud Interconnect service directly into the data plane and employs the Apache Iceberg REST catalog to connect to the Amazon Web Services and Microsoft Azure clouds. What this means is that AI agents can treat data stored in Azure Data Lake or in an S3 bucket as if it were sitting locally in Google Cloud, without the usual headaches associated with data migration and egress fees.
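In Iceberg terms, "treating remote data as local" means pointing a client at a REST catalog endpoint for table metadata while the data files stay in the other cloud's object store. A hedged sketch of the kind of client configuration involved; the endpoint and bucket names are placeholders, not real Google services:

```python
# Hypothetical Iceberg REST catalog configuration. The catalog serves
# table metadata over HTTP; the underlying data files never move out
# of the source cloud's object store.
def rest_catalog_props(endpoint: str, warehouse: str) -> dict:
    return {
        "type": "rest",         # Iceberg REST catalog protocol
        "uri": endpoint,        # metadata service the engine queries
        "warehouse": warehouse, # where the files actually live
    }

# Placeholder values: the table data stays in an AWS S3 bucket while a
# query engine elsewhere reads it through the catalog.
props = rest_catalog_props(
    "https://catalog.example.invalid/iceberg",
    "s3://example-bucket/warehouse",
)
```

Because the REST catalog is an open specification, the same configuration shape works whether the engine doing the reading sits in Google Cloud, AWS or Azure, which is the interoperability the cross-cloud Lakehouse is leaning on.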
To aid data mobility further, Google also introduced bi-directional federation capabilities for Databricks Unity Catalog, Snowflake Polaris and AWS Glue to break down proprietary data silos. It’s also unchaining its Spanner Omni database, allowing it to run on-premises or in rival clouds.