UPDATED 20:50 EDT / DECEMBER 18 2023

New OpenAI safety team will have power to block high-risk developments

OpenAI today announced a new safety plan that gives its board of directors the power to overrule Chief Executive Sam Altman and block the release of an AI model if it considers the risks of the technology being developed to be too high.

“The study of frontier AI risks has fallen far short of what is possible and where we need to be,” the company wrote in a post. “To address this gap and systematize our safety thinking, we are adopting the initial version of our Preparedness Framework. It describes OpenAI’s processes to track, evaluate, forecast, and protect against catastrophic risks posed by increasingly powerful models.”

Three teams will report to the board: a “safety systems” team that will oversee possible abuses of and risks from current AI models such as ChatGPT, a Preparedness team that will assess frontier models, and a “superalignment” team that will watch over the development of “superintelligent” models.

Humanity seems a long way from developing AI that is more intelligent than humans, but the company says it wants to “look beyond what’s happening today to anticipate what’s ahead.”

Such new models, said OpenAI, will be pushed “to their limits,” after which each will be given a detailed scorecard across four risk categories: cybersecurity, persuasion (lies and disinformation), model autonomy (doing its own thing) and CBRN (chemical, biological, radiological and nuclear threats, that is, creating something nasty).

Each category will be given a low, medium, high or critical risk score, both before and after mitigations are applied. If the post-mitigation risk is deemed medium or below, the technology can be deployed. If it’s high, the model cannot be deployed but development can continue, and if it’s critical, all further development will stop. There will also be accountability measures: OpenAI says that if issues do emerge, independent third parties will be brought in to audit the technology and offer feedback.
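To make the gating rule concrete, here is a minimal sketch in Python of how such a scorecard could work. It assumes the overall score is simply the worst score across the four categories; the Risk enum, the category names and the gate function are illustrative inventions, not OpenAI’s actual tooling.

```python
from enum import IntEnum

class Risk(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

# The four risk categories tracked by the framework.
CATEGORIES = ("cybersecurity", "persuasion", "model_autonomy", "cbrn")

def gate(post_mitigation: dict[str, Risk]) -> str:
    """Apply the thresholds described above to a post-mitigation scorecard."""
    # Assumption: the overall score is the worst score in any single category.
    risk = max(post_mitigation[c] for c in CATEGORIES)
    if risk <= Risk.MEDIUM:
        return "may deploy"        # medium or below: deployment allowed
    if risk == Risk.HIGH:
        return "develop only"      # high: development continues, no deployment
    return "halt development"      # critical: all further development stops

# Example: a high persuasion score after mitigations blocks deployment.
print(gate({"cybersecurity": Risk.LOW, "persuasion": Risk.HIGH,
            "model_autonomy": Risk.MEDIUM, "cbrn": Risk.LOW}))  # develop only
```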

“We will collaborate closely with external parties as well as internal teams like Safety Systems to track real-world misuse,” said the company. “We will also work with Superalignment on tracking emergent misalignment risks. We are also pioneering new research in measuring how risks evolve as models scale, to help forecast risks in advance, similar to our earlier success with scaling laws.”

Image: Mariia Shalabaieva/Unsplash
