

California Governor Gavin Newsom today signed into law a sweeping measure requiring major artificial intelligence developers to meet state-mandated safety standards and disclose potential risks associated with their products.
The Transparency in Frontier Artificial Intelligence Act, SB 53, also strengthens protections for employees and executives who blow the whistle on harms or unsafe practices. Though other states have passed laws regulating AI, SB 53 is said to be the first to focus on the safety of cutting-edge AI products.
The bill, introduced by California Senator Scott Wiener, a Democrat from San Francisco, has sparked debate, as did a similar earlier bill, SB 1047, which Newsom vetoed. Critics argue such state laws could slow innovation at a time when the U.S. is racing China in AI development. Supporters say oversight is vital, particularly in California, home to industry giants such as Meta Platforms Inc., OpenAI, Google LLC, Anthropic PBC and chipmaker Nvidia Corp.
The state is also home to many smaller AI companies: a recent Forbes list of the top 50 AI firms placed 32 of them in or near Silicon Valley. Given the reach of these companies, Newsom's signing of SB 53 will be felt all over the world. He called the law "a blueprint for well-balanced AI policies beyond our borders, especially in the absence of a comprehensive federal AI policy framework."
Wiener said the law shows that the state is “stepping up” to meet potential present and future safety issues head on, introducing what he calls “commonsense guardrails” that embrace “trust, fairness and accountability.”
“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” said Newsom. “This legislation strikes that balance.”
Some of the AI companies that might be affected have expressed support for the bill, while others have shown a preference for federal laws over state regulations. Just today, Republican Senator Josh Hawley and Democratic Senator Richard Blumenthal proposed a federal bill that would require AI firms to "evaluate advanced AI systems and collect data on the likelihood of adverse AI incidents."
An OpenAI spokesperson said the law is "a critical path toward harmonization with the federal government," calling that "the most effective approach to AI safety," while adding the caveat, "if implemented correctly."
Anthropic co-founder and head of policy Jack Clark was similarly positive, saying SB 53 establishes "meaningful transparency requirements for frontier AI companies without imposing prescriptive technical mandates."