UPDATED 22:19 EDT / AUGUST 29 2024

US AI Safety Institute will have access to OpenAI and Anthropic for a safer future

OpenAI and Anthropic PBC today announced they have agreed to share AI models before and after release with the U.S. government’s AI Safety Institute.

The institute, housed at the U.S. Department of Commerce’s National Institute of Standards and Technology, or NIST, was set up through an executive order by President Biden in 2023. Working with a consortium of companies and experts, it focuses on establishing safety guidelines and best practices while evaluating potentially dangerous AI systems.

The companies today explained that the institute will have early access to major new models before release, as well as continued access once they are on the market. The institute will also work with its counterpart in the U.K. to provide the companies with feedback on potential safety improvements.

“Safety is essential to fueling breakthrough technological innovation,” said Elizabeth Kelly, director of the U.S. AI Safety Institute. “With these agreements in place, we look forward to beginning our technical collaborations with Anthropic and OpenAI to advance the science of AI safety. These agreements are just the start, but they are an important milestone as we work to help responsibly steward the future of AI.”

The existential threat AI could pose to humanity was a hot-button topic long before the recent surge in the use of generative AI. The prevailing approach has mostly been to proceed with care while not getting bogged down in AI panic.

In an open letter in June, signed by a group of current and former researchers from OpenAI, Alphabet Inc.’s Google DeepMind research group, and Anthropic, the signatories asked that there be more transparency and oversight to protect the public from potentially harmful AI products.

The signatories warned that “strong financial incentives” could mean a lack of “effective oversight.” They added that the leading AI companies in the U.S. “have only weak obligations to share some of this information with governments, and none with civil society,” and it’s likely they won’t “share it voluntarily.” The announcement today might placate the concerned group.

OpenAI Chief Executive Sam Altman wrote on X that it’s “important that this happens at the national level,” adding that the “U.S. needs to continue to lead!”

“This strengthens our ability to identify and mitigate risks, advancing responsible AI development,” Anthropic co-founder and Head of Policy Jack Clark said in a statement to the media.

Photo: Jonathan Kemper/Unsplash
