UPDATED 18:45 EST / FEBRUARY 23 2026


Anthropic slams Chinese AI firms for harvesting data from its Claude chatbot

Anthropic PBC is claiming that three Chinese artificial intelligence companies are illegally harvesting massive amounts of data from its chatbot Claude in an effort to accelerate the development of their own platforms.

In a blog post today, Anthropic called out DeepSeek Ltd., Moonshot and MiniMax, three of China’s most prominent AI firms, accusing them of creating thousands of fraudulent Claude accounts to generate millions of conversations and use that data to train their own chatbots.

Anthropic said DeepSeek engaged in 150,000 interactions with Claude, while Moonshot had more than 3.4 million and MiniMax had 13 million.

The process of using data from one AI system to train another is known as “distillation,” and it’s a fairly common technique for developers. But Anthropic’s terms of service prohibit anyone from harvesting Claude’s responses in this way. In addition, they’re meant to prevent its chatbot from being used by anyone in China.
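In its simplest form, distillation starts with harvesting: prompts are sent to a "teacher" model and its responses are stored as supervised fine-tuning pairs for a "student" model. The following is a minimal illustrative sketch, not any company's actual pipeline; `teacher_model` is a hypothetical stand-in for a real chatbot API call.

```python
import json

def teacher_model(prompt: str) -> str:
    """Hypothetical stand-in for a call to a frontier model's chat API."""
    return f"Answer to: {prompt}"

def harvest_pairs(prompts):
    """Collect (prompt, completion) pairs -- the raw material of distillation."""
    return [{"prompt": p, "completion": teacher_model(p)} for p in prompts]

def to_jsonl(pairs):
    """Serialize the pairs in the JSONL format commonly used for fine-tuning."""
    return "\n".join(json.dumps(record) for record in pairs)

if __name__ == "__main__":
    dataset = harvest_pairs(["Explain TCP slow start.",
                             "Summarize the Treaty of Westphalia."])
    print(to_jsonl(dataset))
```

Run at the scale Anthropic alleges, this same loop becomes millions of teacher conversations, which is why providers' terms of service prohibit using outputs to train competing models.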

Anthropic’s accusations come after similar claims by its rival OpenAI Group PBC that Chinese firms are also harvesting data from ChatGPT for similar reasons. OpenAI has taken its complaint further. Last week it sent a memorandum to the U.S. House Select Committee on China, saying DeepSeek and other Chinese AI firms were using “new and obfuscated” distillation techniques as part of an ongoing effort to “free-ride” on U.S. technologies.

Anthropic said in its blog post that the Chinese firms’ activities could pose a significant national security risk, as distillation could allow them to create AI technologies that power more advanced military weapons or build tools for mass surveillance of U.S. citizens. Though the company says it has built extensive guardrails to prevent its technology from being used in this way, those guardrails can be stripped out during the distillation process.

APIs present a new attack surface

DeepTempo’s founding AI engineer Mayank Kumar told SiliconANGLE that DeepSeek has been accused of using distillation before, but what’s different this time is that it, along with its competitors, is engaging in an “industrialized” distillation campaign. “Frontier AI systems are emerging as a new class of attack surface,” he said. “These models compress vast amounts of knowledge and reasoning into deployable inference endpoints. But as their capabilities scale, so does their strategic value, not only to customers, but also to competitors and nation-state-aligned actors.”

According to Kumar, when frontier models are accessed via application programming interfaces, they provide access to such vast amounts of information that they make extremely tempting targets. The API functions as an extraction channel for high-volume, capability-targeted prompting that aims to replicate models’ reasoning patterns. “Unlike traditional intellectual property and confidential information theft, this does not require source code access or insider compromise,” he added. “The interface itself is the surface. When outputs can be systematically harvested and operationalized downstream, the line between legitimate usage and capability exfiltration becomes a security control challenge rather than a product design consideration.”

Anthropic called on the U.S. government to take action to prevent the Chinese firms from doing this. “These campaigns are growing in intensity and sophistication,” the company said. “The window to act is narrow, and the threat extends beyond any single company or region. Addressing it will require rapid, coordinated action among industry players, policymakers and the global AI community.”

Not much sympathy

But it’s not clear if Anthropic will get much sympathy from the U.S. government considering its reported standoff with the Department of War, which has clashed with it over its own use of Claude. The Pentagon has approved a special version of Claude for use with its classified systems and has been using it extensively since last year. However, last week it threatened to sever ties with the chatbot maker and designate it as a “supply chain risk” over its refusal to let its technology be used in the development of autonomous weapons or surveillance tools.

Anthropic certainly hasn’t found much sympathy online. In response to its blog post, numerous critics accused it of hypocrisy, pointing out that its practice of scraping the public internet isn’t really much different from what the Chinese AI firms have been doing.

Anthropic appeared to address these critics with its own post on X, saying distillation is legitimate when it’s done in line with the rules around licensed, open-source technology.

But its critics were far from convinced.

Anthropic, which is valued at $380 billion, is currently facing numerous lawsuits over allegations that it illegally used copyrighted data to train its AI systems. In September, the company agreed to pay $1.5 billion to a group of authors and publishers in a landmark settlement after a judge ruled that it had illegally downloaded and stored millions of books. That was the largest payout in the history of copyright lawsuits.

OpenAI is also being sued by publishers such as The New York Times over allegations that it scraped millions of news articles to train its GPT models, which have since emerged as competitors for online traffic.

Image: SiliconANGLE/Microsoft Designer
