UPDATED 18:37 EST / FEBRUARY 16 2026

POLICY

Pentagon officials threaten to blacklist Anthropic over its military chatbot policies

The U.S. Department of War is reportedly considering cutting all business ties with the artificial intelligence startup Anthropic PBC and designating it a “supply chain risk” amid a disagreement over how the military may use the company’s Claude chatbot.

If the War Department went ahead with the move, it would be a severe blow to Anthropic, as it would require all U.S. military contractors to stop using the company’s technology, or risk losing their Pentagon contracts.

Defense Secretary Pete Hegseth and senior Pentagon officials are said to be close to making the decision after months of trying to negotiate with Anthropic, Axios reported today. One unnamed source told the outlet: “It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”

The “supply chain risk” designation is normally used to label foreign adversaries and other hostile actors, rather than American companies, which makes the threat unusually severe for a firm considered among the country’s leading technology companies.

A spokesperson for the Pentagon told Axios in a statement that the War Department’s AI partnerships are currently under review, and stressed that “our nation requires our partners to be willing to help our warfighters win in any fight.”

Anthropic currently enjoys a privileged status as the only AI model maker with a contract with the U.S. military. Its Claude Gov chatbot was built specifically for the U.S. national security apparatus and is widely used, and widely praised, by Pentagon officials. Notably, it was used extensively in last month’s operation in which U.S. special forces snatched Venezuelan President Nicolas Maduro from his residence in Caracas, according to the Wall Street Journal.

But as the company’s contract comes up for renewal, there is disagreement over Anthropic’s reluctance to let the Pentagon use Claude “for all lawful purposes.” The company is reportedly worried that officials might use the chatbot to conduct mass surveillance of Americans or to build and operate fully autonomous weapons systems.

According to Anthropic, its existing restrictions are necessary to protect the privacy of U.S. citizens and prevent unchecked AI systems from targeting or harming them. But the Pentagon insists that the current limits are too restrictive and could hamper its effectiveness on the battlefield.

Should the Pentagon follow through with its threat of designating Anthropic as a supply chain risk, any company that does business with the War Department would be required to certify that it does not use Claude in its workflows. That would likely cause a lot of headaches, given that Anthropic reportedly has a much stronger presence in the private sector than rival AI firms such as Google LLC and OpenAI Group PBC. According to Axios, eight of the 10 largest U.S. companies currently use Claude.

Anthropic needs to take the threat seriously, because if the Pentagon followed through it could potentially put the company’s survival at risk, said Rob Enderle of the Enderle Group. “Any ban would likely affect all contractors it works with and also spill over into the private sector,” he said. “Equally, it would have a chilling impact on the ethics of all of the AI labs.”

The analyst said it won’t be easy for the Pentagon to rip out and replace Anthropic’s technology, and any attempt to do so could put the systems it integrates with at risk, especially if a replacement is integrated hurriedly. At the same time, the Pentagon may feel the move is necessary, because the risk of Anthropic’s tools suddenly ceasing to work should the company deem the military’s activities unlawful is equally troublesome.

“If the Pentagon gets its way, it will own a much more reliable weapon that performs even if it decides to do something illegal,” Enderle explained. “But if Anthropic gets its way, it would cement its status as the strongest ‘safety-first’ AI provider. By refusing to do things like mass surveillance of U.S. citizens, it would enhance the privacy of everyone and reassert what every AI company says, namely that AI is primarily a tool to be used for social good.”

The Pentagon has made it clear it has alternatives to Anthropic if it does decide to apply the supply chain risk label. Axios said the department is currently holding talks with Google, OpenAI and Elon Musk’s xAI Corp. over the possibility of the military using their chatbots instead. All three companies have reportedly agreed to remove guardrails preventing military use on unclassified systems, and they’re also negotiating for access to classified military networks, Axios’ sources said. Pentagon officials are confident that all three would comply with its insistence on being able to use their technology for “all lawful purposes.”

Anthropic has argued that U.S. law currently forbids domestic mass surveillance, but it worries that the rapidly advancing capabilities of AI could outpace the evolution of existing statutes. The company declined to comment on the reported threat, but told Axios that its negotiations with the War Department are ongoing and being conducted in “good faith” to resolve complex policy issues.

Constellation Research analyst Holger Mueller said the entire AI industry will be watching and waiting to see the outcome of this standoff. “This will decide if AI companies have the power to limit how their tools and products are used, or if buyers can force the issue and use them in any way they want, so it’s an important issue that every enterprise needs to watch,” he said.

Anthropic’s contract with the military is relatively small, worth around $200 million over two years, representing only a fraction of its reported $14 billion in annual revenue. Yet many more of its enterprise deals could be at risk if the Pentagon makes good on its threat.

Image: SiliconANGLE/Microsoft Designer
