Report co-authored by Fei-Fei Li stresses need for AI regulations to consider future risks

A new report co-authored by the artificial intelligence pioneer Fei-Fei Li urges lawmakers to anticipate risks that have not yet materialized when drawing up regulations to govern how the technology should be used.

The 41-page report by the Joint California Policy Working Group on Frontier AI Models comes after California Governor Gavin Newsom vetoed the state’s original AI safety bill, SB 1047, last year. In striking down that divisive legislation, he said lawmakers needed a more extensive assessment of AI risks before attempting to craft better rules.

Li (pictured) co-authored the report alongside Carnegie Endowment for International Peace President Mariano-Florentino Cuéllar and Jennifer Tour Chayes, dean of the University of California, Berkeley’s College of Computing, Data Science, and Society. In it, they stress the need for regulations that would require more transparency from the companies building so-called “frontier models,” such as OpenAI, Google LLC and Anthropic PBC.

They also urge lawmakers to consider requiring AI developers to publicly release information such as their data acquisition methods, security measures and safety test results. The report further stresses the need for more rigorous standards governing third-party evaluations of AI safety and corporate policies, and it recommends protections for whistleblowers at AI companies.

The report was reviewed by numerous AI industry stakeholders prior to being published, including the AI safety advocate Yoshua Bengio and Databricks Inc. co-founder Ion Stoica, who argued against the original SB 1047 bill.

One section of the report notes that there is currently an “inconclusive level of evidence” regarding the potential of AI to be used in cyberattacks and the creation of biological weapons. The authors wrote that any AI policies must therefore not only address existing risks, but also any future risks that might arise if sufficient safeguards are not put in place.

They use an analogy to stress this point, noting that no one needs to see a nuclear weapon explode to predict the extensive harm it would cause. “If those who speculate about the most extreme risks are right — and we are uncertain if they will be — then the stakes and costs for inaction on frontier AI at this current moment are extremely high,” the report states.

Given this fear of the unknown, the co-authors say the government should implement a two-pronged strategy around AI transparency, focused on the concept of “trust but verify.” As part of this, AI developers and their employees should have a legal way to report any new developments that might pose a safety risk without threat of legal action.

It’s important to note that the current report is only an interim version; the completed report won’t be published until June. The report does not endorse any specific legislation, but the safety concerns it highlights have been well-received by experts.

For instance, the AI researcher Dean Ball at George Mason University, who notably criticized SB 1047 and was happy to see it vetoed, posted on X that the report is a “promising step” for the industry. Meanwhile, California State Senator Scott Wiener, who first introduced SB 1047, noted that the report continues the “urgent conversations around AI governance” that were originally raised in his vetoed legislation.

Photo: Steve Jurvetson/Flickr
