UPDATED 22:19 EDT / AUGUST 29 2024

AI

US AI Safety Institute will have access to OpenAI and Anthropic for a safer future

OpenAI and Anthropic PBC today announced they have agreed to share AI models before and after release with the U.S. government’s AI Safety Institute.

The institute, housed at the U.S. Department of Commerce’s National Institute of Standards and Technology, or NIST, was set up through an executive order by President Biden in 2023. Working with a consortium of companies and experts, the institute focuses on establishing safety guidelines and best practices while evaluating potentially dangerous AI systems.

The companies explained today that the institute will have access to their major new models both before and after public release. The institute will also provide feedback to its counterpart in the U.K.

“Safety is essential to fueling breakthrough technological innovation,” said Elizabeth Kelly, director of the U.S. AI Safety Institute. “With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety. These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

The existential threat AI could pose to humanity was a hot-button topic long before the recent surge in generative AI. The prevailing approach has mostly been to proceed with care while not getting bogged down in AI panic.

In an open letter published in June, a group of current and former researchers from OpenAI, Alphabet Inc.’s Google DeepMind research group, and Anthropic called for more transparency and oversight to protect the public from potentially harmful AI products.

The signatories warned that “strong financial incentives” could mean a lack of “effective oversight.” They added that the leading AI companies in the U.S. “have only weak obligations to share some of this information with governments, and none with civil society,” and it’s likely they won’t “share it voluntarily.” The announcement today might placate the concerned group.

OpenAI Chief Executive Sam Altman wrote on X that it’s “important that this happens at the national level,” adding that the “U.S. needs to continue to lead!”

“This strengthens our ability to identify and mitigate risks, advancing responsible AI development,” Anthropic co-founder and Head of Policy Jack Clark said in a statement to the media.

Photo: Jonathan Kemper/Unsplash
