

OpenAI is reportedly tightening its internal security to protect its intellectual property from corporate espionage, amid claims that it has been targeted by Chinese artificial intelligence companies.
According to a Financial Times report today that cites several unnamed people close to OpenAI, the recent changes include stricter controls on sensitive information and enhanced vetting of staff.
The decision to ramp up security is also said to have accelerated after Chinese AI startup DeepSeek released a rival AI model in January that is alleged to have used ChatGPT outputs to train its R1 large language model, a process known as model "distillation," in which one model learns from another's responses. The move angered OpenAI, in an ironic twist, considering that the company trains its own models on vast swaths of public internet data, much of it used without direct permission.
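In broad terms, distillation means harvesting a stronger "teacher" model's responses and using them as supervised training data for a smaller "student" model. The following is a minimal Python sketch of that general idea only; query_teacher() is a hypothetical stand-in for whatever endpoint serves the teacher model, and nothing here reflects DeepSeek's actual pipeline.

```python
# Sketch of model distillation: collect a teacher model's responses to a set
# of prompts, then use them as fine-tuning data for a smaller student model.
import json

def query_teacher(prompt: str) -> str:
    """Hypothetical call to a teacher model's API; wire to a real client."""
    raise NotImplementedError("replace with an actual model endpoint")

def build_distillation_set(prompts: list[str], out_path: str) -> None:
    """Write prompt/completion pairs from the teacher to a JSONL dataset."""
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            record = {"prompt": prompt, "completion": query_teacher(prompt)}
            f.write(json.dumps(record) + "\n")

# The resulting JSONL file would then feed a standard supervised
# fine-tuning run for the student model.
```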
OpenAI has since put safeguards in place to prevent a repeat of the DeepSeek situation and has also implemented physical protections on the ground to secure its IP.
The company's internal projects are now developed under a system of "tenting," which limits access to information to team members who have been read into specific projects. Key initiatives, such as the o1 model developed last year, have been subject to these strict compartmentalization practices, effectively walling off code, data and even conversations between teams.
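The logic of tenting is essentially deny-unless-read-in. A minimal Python sketch of that access pattern follows; the project and employee names are hypothetical illustrations, not a description of OpenAI's internal systems.

```python
# Sketch of "tenting"-style compartmentalization: access to a project's
# artifacts is denied unless the requester is explicitly on its read-in list.
READ_IN: dict[str, set[str]] = {
    "project-tent-a": {"alice", "bob"},  # hypothetical read-in roster
}

def can_access(employee: str, project: str) -> bool:
    """Deny by default; allow only employees read into the project."""
    return employee in READ_IN.get(project, set())

assert can_access("alice", "project-tent-a")
assert not can_access("carol", "project-tent-a")   # not read in
assert not can_access("alice", "project-tent-b")   # unknown project: deny
```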
Other new measures include biometric authentication, such as fingerprint scans for access to sensitive labs, as well as a hardened "deny-by-default" approach to internet connectivity within internal systems, under which outbound connections are blocked unless explicitly approved. Portions of the company's infrastructure have been air-gapped, keeping critical data physically isolated from external networks.
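A deny-by-default egress policy inverts the usual posture: instead of blocking known-bad destinations, everything is refused except an explicit allowlist. The sketch below illustrates the idea in Python under assumed, hypothetical allowlist entries; it is not OpenAI's configuration.

```python
# Sketch of a "deny-by-default" egress check: outbound connections are
# refused unless the destination host appears on an explicit allowlist.
EGRESS_ALLOWLIST: set[str] = {
    "updates.internal.example",    # hypothetical approved destinations
    "telemetry.internal.example",
}

def egress_permitted(host: str) -> bool:
    """Default deny: only allowlisted destinations may be reached."""
    return host in EGRESS_ALLOWLIST

assert egress_permitted("updates.internal.example")
assert not egress_permitted("api.random-vendor.example")  # denied by default
```

In practice such a policy typically lives in firewall or proxy rules rather than application code, and an air-gapped segment goes a step further by removing the network path entirely.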
The company has also beefed up its cybersecurity and governance team, hiring former Palantir Technologies Inc. security head Dane Stuckey as chief information security officer and appointing retired U.S. Army General Paul Nakasone to its board.
While the security measures are meant to shield OpenAI's IP from prying eyes, they have reportedly introduced new friction internally. The increased compartmentalization has made cross-team collaboration more difficult and slowed development workflows. "It got very tight – you either had everything or nothing," one person told the FT, adding that over time "more people are being read in on the things they need to be, without being read in on others."
The shift comes as part of a broader industry trend: As generative AI becomes more strategically and commercially valuable, protecting the models that power it is becoming just as important as building them.