

Databricks Inc. and Anthropic PBC said today that they have entered a five-year partnership to make Anthropic’s Claude large language models and services available on the Databricks Data Intelligence Platform.
The arrangement gives Databricks’ customers direct access to Claude 3.7 Sonnet, a new hybrid reasoning model, from within the Databricks ecosystem on the Amazon Web Services Inc., Microsoft Corp. Azure and Google LLC clouds.
Claude 3.7 Sonnet is notable for its ability to “think” about questions for as long as users allow, a tactic that often produces a better answer. It can also return different responses to the same question depending on how much time is allowed for thought.
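For illustration only, this is roughly how a thinking budget is set when calling Claude 3.7 Sonnet through Anthropic’s own Python SDK; the budget value and prompt are examples, and customers using the new integration would reach the model through Databricks’ serving endpoints rather than this client.

```python
# Minimal sketch (not Databricks-specific): requesting extended thinking from
# Claude 3.7 Sonnet via Anthropic's Python SDK. The budget_tokens value caps
# how many tokens the model may spend "thinking" before it answers.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",
    max_tokens=4096,  # must be larger than the thinking budget
    thinking={"type": "enabled", "budget_tokens": 2048},  # illustrative budget
    messages=[{"role": "user", "content": "Outline a patient onboarding workflow for a clinical trial."}],
)

# The reply interleaves "thinking" blocks with the final "text" answer.
print("".join(block.text for block in response.content if block.type == "text"))
```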
The companies said the collaboration is intended to help enterprises more securely build, deploy and govern artificial intelligence agents — systems that can perform tasks autonomously with little or no human supervision — that draw on proprietary data. The move comes amid increasing demand for AI tools that can handle enterprise-scale data governance, privacy and compliance.
Partnerships between large data platforms and AI research firms are becoming more common as organizations seek ways to integrate advanced language models into their workflows without compromising data controls. Databricks rival Snowflake Inc. recently announced partnerships with Anthropic and Microsoft to make Claude 3.5 Sonnet and the Azure OpenAI Service available from its cloud data platform.
Anthropic’s Claude models, known for their focus on AI safety, will operate with Databricks Mosaic AI. That will enable customers to create customized models for use in vertical industries such as healthcare, energy management and financial services. Databricks Mosaic AI is used to build domain-specific AI agents that deliver reliable, fully governed results based on an organization’s unique data. Gartner Inc. predicts that more than half of the generative AI models enterprises use will be specific to either an industry or business function by 2027, up from about 1% in 2023.
Anthropic models will be integrated with Databricks’ Unity Catalog to provide data lineage tracking, access controls and monitoring features aligned with enterprise standards. The company said Claude can also support advanced reasoning applications with complex multistep workflows, such as streamlining the patient onboarding process for clinical trials or dynamically adjusting loads on the electrical grid based on fluctuations in supply and demand.
Separately, Databricks said it has developed a new fine-tuning method that leverages Test-time Adaptive Optimization, or TAO, a type of reinforcement learning that makes it easier to build agents for a specific task and domain. The method removes the dependence on expensive labeled data, making model fine-tuning faster and cheaper.
Traditional LLM training often relies on large, carefully prepared datasets, which can be costly and time-consuming to produce. Instead of collecting large sets of human-labeled examples, TAO taps into patterns within the model itself and the data it sees during testing.
This approach leverages a model’s existing knowledge to refine it further, making training more efficient. It also helps handle tasks that do not have well-defined or abundant labeled data, a bonus for scenarios in which labels are limited or expensive to obtain. By focusing on the interactions and signals available at test time — such as how the model responds to new prompts — TAO can adjust the model’s parameters to improve accuracy without standard human-labeled training sets.
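Databricks has not published TAO’s implementation as part of this announcement, so the following is only an illustrative sketch of the general idea: sample several candidate answers to unlabeled prompts, score them with a reward model, and use the best-scoring ones as self-generated training data. The model name and the scoring function are placeholders, not Databricks’ code.

```python
# Illustrative sketch of tuning without labeled data: generate candidate
# answers per unlabeled prompt, score them with a reward model, and keep the
# best ones as self-generated fine-tuning data. All names are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "Qwen/Qwen2.5-0.5B-Instruct"  # placeholder open-source model
tokenizer = AutoTokenizer.from_pretrained(base_name)
model = AutoModelForCausalLM.from_pretrained(base_name)

def score(prompt: str, answer: str) -> float:
    """Stand-in for a learned reward model; here just a trivial length heuristic."""
    return float(len(answer.split()))

def best_of_n(prompt: str, n: int = 8) -> str:
    """Sample n candidate answers and keep the one the scorer ranks highest."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs, do_sample=True, temperature=0.8, top_p=0.95,
        num_return_sequences=n, max_new_tokens=256,
    )
    candidates = tokenizer.batch_decode(
        outputs[:, inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    return max(candidates, key=lambda c: score(prompt, c))

# The selected (prompt, best answer) pairs then feed a standard fine-tuning or
# reinforcement-learning update -- no human-labeled examples required.
```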
Databricks said even without labeled data, TAO can achieve better model quality than traditional fine-tuning, bringing inexpensive open-source models such as Llama to parity with costly proprietary models like OpenAI LLC’s GPT-4o and o3-mini. Other benefits include reduced reliance on massive curated datasets and the ability to continually learn in changing environments.