California lawmakers have approved sweeping AI safety legislation that now awaits Governor Gavin Newsom’s signature, setting up a critical decision point for the second year in a row.
The proposal would require major artificial intelligence developers to disclose their safety testing protocols and certify compliance, a move that could reshape both the state’s relationship with its powerful tech sector and the broader national conversation around AI regulation.
The bill, authored by San Francisco Democrat Scott Wiener, seeks to build transparency into the development of advanced AI models.
Companies generating over $500 million in annual revenue would be required to provide detailed reports of their testing frameworks, while smaller developers working on so-called “frontier models” would submit more general disclosures. Lawmakers argue that this tiered approach strikes a balance between public safety and the need to foster innovation.
Newsom, however, faces a familiar dilemma. Last year, he vetoed a more expansive version of Wiener’s legislation, warning that it could undermine California’s competitive edge.
His concerns centered on the potential for overly strict rules to drive companies and their capital out of the state.
Because California is home to some of the world’s most influential AI companies, the bill could serve as a model for other states should it become law. Advocates say the framework could establish a baseline for responsible AI practices, particularly in an industry where fears about risks, misuse, and unchecked development are mounting.
Supporters of the legislation include AI lab Anthropic, which praised the bill for requiring companies to “tell the truth” about the testing they perform. OpenAI, while not explicitly backing or opposing the measure, has acknowledged that the principles behind it are constructive.
Still, not everyone is on board. The California Chamber of Commerce, TechNet, and other industry lobbying groups have voiced strong objections. In a letter to Wiener, they warned that focusing only on “large developers” overlooks smaller players whose models could also pose catastrophic risks.
Tech giants like Apple, Google, and Amazon have indirectly pushed back against the measure, citing potential regulatory overlap and inconsistencies with international frameworks.
The decision also carries heavy political weight. Newsom, widely seen as a likely Democratic contender in the 2028 presidential race, must weigh whether siding with voters concerned about AI’s dangers is worth alienating the state’s tech donors. Signing the bill would distinguish him from President Donald Trump, whose administration has pursued an aggressive pro-AI development agenda, framing it as a race against China.
For Wiener, the stakes are equally high. Fresh off filing to run for the congressional seat long held by Nancy Pelosi, he has pitched the bill as the most comprehensive state-level AI safety framework in the country. His revised, narrower proposal reflects lessons learned from last year’s defeat, as well as recommendations from an AI policy panel convened by Newsom himself.
If Newsom signs the bill, it would mark a turning point for U.S. tech governance, potentially shaping national standards in the absence of comprehensive federal regulation. If he vetoes it again, however, it could signal a prioritization of industry competitiveness over public safeguards, and highlight the deep influence Silicon Valley wields over California politics.