So the rumors were true. California governor Gavin Newsom has indeed caved under pressure from tech companies and venture capitalists and vetoed an important artificial intelligence (AI) safety bill. That's a setback for supporters of stricter AI regulation.
Senate Bill 1047 would have forced companies that spend more than $100 million on training large AI models to do thorough safety testing. If the firms didn’t, they would’ve been liable if the systems led to a “mass casualty event” or more than $500 million in damages in a single incident.
State legislators approved the bill, which was fiercely opposed by influential big tech lobbyists and investors, in late August. But signs of Governor Newsom's doubts about the legislation soon surfaced.
For instance, in mid-September, even after signing other laws regulating AI deepfakes, Newsom said: “The impact of signing wrong bills over the course of a few years could have a profound impact on the state’s competitiveness.”
The writing was essentially on the wall for SB 1047. Finally, the governor killed it. In a statement on his veto, Newsom said that the bill was wrong to single out the most powerful AI projects and ignore the question of whether they’re involved in critical decision-making.
“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology,” Newsom wrote (PDF).
“Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 – at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.”
The governor said that California “will not abandon its responsibility” and will adopt safety protocols, but stressed that the state’s approach “must be based on empirical evidence and science.”
Still, SB 1047 is now dead, even though the bill’s authors and proponents have always said that it would simply fill the vacuum left by lawmakers in Washington. There’s no federal AI safety regulation in the US yet.
Hollywood also supported the bill. Last week, over 100 stars of the entertainment industry urged Newsom to sign SB 1047 in an open letter – to no avail.
Tech firms, including Meta and Google, argued that the bill would quash innovation because testing for all the potential harms of AI technology would be difficult and extremely expensive. They also argued that it is not developers who should be penalized, but those who cause harm using AI.
Anthony Aguirre, executive director of the Future of Life Institute, a nonprofit that supported the AI safety bill, said in a statement that it was time for federal or even global regulation to oversee big tech companies and their AI technology.
“The furious lobbying against the bill can only be reasonably interpreted in one way: these companies believe they should play by their own rules and be accountable to no one,” said Aguirre.