

OpenAI has submitted a lengthy proposal to the U.S. government, aiming to influence its upcoming AI Action Plan, a strategy report that many believe will guide President Donald Trump’s policy on artificial intelligence technology.
The proposal from America’s most recognizable AI company is predictably controversial, calling for the U.S. government to emphasize speed of development over regulatory scrutiny, while also warning of the dangers posed by Chinese AI firms to the country.
Shortly after beginning his second term in the White House in January, Trump called for the AI Action Plan to be drafted by the Office of Science and Technology Policy and submitted to him by July. At the same time, he rescinded an executive order on AI signed by his predecessor Joe Biden in October 2023, replacing it with his own and declaring that “it is the policy of the United States to sustain and enhance America’s global AI dominance.”
OpenAI has wasted little time in trying to influence the recommendations in that plan, and in its proposal it made clear its feelings on the current level of regulation in the AI industry. It called for AI developers to be given “the freedom to innovate in the national interest,” and advocated for a “voluntary partnership between the federal government and the private sector,” instead of “overly burdensome state laws.”
It argues that the federal government should be allowed to work with AI companies on a “purely voluntary and optional basis,” saying that this will help to promote innovation and adoption of the technology. Moreover, it called for the U.S. to create an “export control strategy” covering U.S.-made AI systems, which it says would promote the global adoption of its homegrown AI technology.
The company further recommends that the government give federal agencies greater freedom to “test and experiment” with AI technologies using “real data,” and it asked Trump to grant a temporary waiver that would negate the need for AI providers to be certified under the Federal Risk and Authorization Management Program. It called on Trump to “modernize” the process that AI companies must go through to be approved for federal government use, asking for the creation of a “faster, criteria-based path for approval of AI tools.”
OpenAI argues that its recommendations will make it possible for new AI systems to be used by federal government agencies up to 12 months faster than is currently possible. However, some industry experts have raised concerns that such speedy adoption of AI by the government could create security and privacy problems.
Pushing harder, OpenAI also told the U.S. government it should partner more closely with private sector companies in order to build AI systems for national security use. It explained that the government could benefit from having its own AI models that are trained on classified datasets, as these could be “fine-tuned to be exceptional at national security tasks.”
OpenAI has a big interest in opening up the federal government sector for AI products and services, having launched a specialized version of ChatGPT, called ChatGPT Gov, in January. It’s designed to be run by government agencies in their own secure computing environments, where they have more control over security and privacy.
Aside from promoting government use of AI, OpenAI also wants the U.S. government to make its own life easier by implementing a “copyright strategy that promotes the freedom to learn.” It asked for Trump to develop regulations that will preserve the ability of American AI models to learn from copyrighted materials.
“America has so many AI startups, attracts so much investment, and has made so many research breakthroughs largely because the fair use doctrine promotes AI development,” the company stated.
It’s a controversial request, because the company is currently battling multiple news organizations, musicians and authors over copyright infringement claims. The original ChatGPT that launched in late 2022 and the more powerful models that have since been released are all largely trained on the public internet, which is the main source of their knowledge.
However, critics of the company say it is essentially plagiarizing content from news websites, many of which are paywalled. OpenAI has been hit with lawsuits by The New York Times, the Chicago Tribune, the New York Daily News and the Center for Investigative Reporting, the nation’s oldest nonprofit newsroom. Numerous artists and authors have also taken legal action against the company.
OpenAI’s recommendations took aim at some of the company’s rivals too, notably DeepSeek Ltd., the Chinese AI lab that claims to have developed its DeepSeek R1 model at a small fraction of the cost of anything OpenAI has developed.
The company described DeepSeek as being “state-subsidized” and “state-controlled,” and asked the government to consider banning its models and those from other Chinese AI firms.
In the proposal, OpenAI claimed that DeepSeek’s R1 model is “insecure,” because it is required by Chinese law to comply with certain demands regarding user data. By banning the use of Chinese-produced models in the U.S. and other “Tier 1” countries, the U.S. would be able to minimize the “risk of IP theft” and other dangers, it said.
“While America maintains a lead on AI today, DeepSeek shows that our lead is not wide and is narrowing,” OpenAI said.