OpenAI Group PBC is revising an artificial intelligence deal that it inked with the U.S. Defense Department last year.
The ChatGPT developer published some of the updated legal language late Monday. The change is designed to ensure that the Pentagon won’t use OpenAI models for domestic surveillance. This morning, Axios cited sources as saying that the revised agreement has not yet been formally signed.
Last June, OpenAI won a one-year contract to provide the Pentagon with access to its AI models. The company stated at the time that the agreement was worth up to $200 million. According to OpenAI, officials planned to apply its AI to use cases such as data analysis and cybersecurity.
Anthropic PBC inked a similar $200 million deal with the Pentagon around the same time. This January, reports emerged that the OpenAI rival had raised concerns about the agreement. Anthropic sought to ensure that its technology wouldn’t be used to conduct mass surveillance or build autonomous weapons. It equipped its models with guardrails designed to block such uses.
The Pentagon took issue with the company’s policy. Last week, U.S. President Donald Trump ordered federal agencies to stop using Anthropic’s software. In a related move, U.S. Defense Secretary Pete Hegseth announced plans to designate Anthropic as a supply chain risk. The designation prohibits U.S. military contractors and suppliers from doing business with the AI provider.
On Friday, OpenAI announced plans to provide the Pentagon with access to its AI models under a revised agreement. In a late Monday post on X, Chief Executive Sam Altman elaborated that the contract includes protections against domestic surveillance.
“The AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals,” reads one of the contract’s clauses. “For the avoidance of doubt, the Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable information.”
One of the most important changes is that the clause covers “commercially acquired personal or identifiable information.” According to Axios, OpenAI’s original contract with the Pentagon only mentioned “private information.” That language didn’t prevent the use of personal data purchased from data brokers.
Altman added that the agreement doesn’t permit OpenAI’s models to be used by intelligence agencies. He wrote that such use would require “a follow-on modification” to the contract. Additionally, the agreement states that OpenAI’s models may only be deployed in “cloud networks.”
OpenAI’s push to bring its AI to federal agencies kicked off last year when it launched an offering called ChatGPT Gov. It’s a customized version of ChatGPT that can run in Microsoft Azure and Azure Government environments. The offering ships with administrative controls designed to help agencies comply with the public sector’s cybersecurity regulations.