AI
OpenAI Group PBC is looking to hire a head of preparedness, a senior safety role tasked with anticipating potential harms from the company’s artificial intelligence models and guiding how those risks are mitigated as capabilities advance.
According to a job listing published on OpenAI’s careers site, the role will lead the technical strategy and execution of OpenAI’s Preparedness Framework, which the company uses to track and assess frontier capabilities that could create new or severe risks. The risks to be monitored include misuse scenarios, cybersecurity threats and other impacts that may emerge as models become more capable.
The head of preparedness will sit within OpenAI’s Safety Systems organization and work across research, policy and product teams. Responsibilities include developing threat models, running capability evaluations, setting risk thresholds and determining when additional safeguards or deployment restrictions are required. The work feeds directly into decisions around whether and how new models and features are released.
OpenAI describes the role, which offers a base salary of $550,000 plus equity, as demanding. It requires experience in large-scale technical systems, security, risk analysis or safety governance, along with the ability to translate research findings into operational controls.
The opening follows a period of change in OpenAI’s safety leadership. The company’s former head of preparedness, Aleksander Madry, was reassigned in mid-2024, with preparedness responsibilities subsequently overseen by senior executives Joaquin Quiñonero Candela and Lilian Weng. Weng later departed the company, and earlier this year Quiñonero Candela moved to lead recruiting at OpenAI, leaving the function without a dedicated permanent head.
OpenAI Chief Executive Sam Altman has pointed to preparedness as a core internal function as model capabilities expand. According to Engadget, Altman has described the head of preparedness as “a critical role at an important time,” acknowledging the challenges that come with increasingly capable models.
The hiring effort comes amid heightened attention on how advanced AI systems may be abused or cause unintended harm.
Areas of concern frequently cited in industry discussions include AI-assisted cyberattacks, the discovery and exploitation of software vulnerabilities and potential effects on users’ mental health at scale.
The company has raised the mental health aspect before, such as in October, when OpenAI revealed that more than a million people per week showed signs of severe mental distress in conversations with ChatGPT. The data did not suggest that ChatGPT necessarily caused the distress, but rather that users were discussing serious mental health issues with the chatbot.