Experts: California’s AI safety bill was flawed but not all hope is lost


California Governor Gavin Newsom has vetoed a state bill aimed at preventing AI disasters, which has some activists fuming. But others say the legislation was flawed in the first place and that a new approach is needed.

The first-of-its-kind bill, SB 1047, was the most ambitious proposal in the US aimed at curtailing the growth of AI.

It would have required safety testing of large AI models before their release, given California’s attorney general the right to sue companies over serious harm caused by their tech, and created a kill switch to immediately turn off AI systems causing major damage.

However, Governor Newsom vetoed the bill, saying it was wrong to single out the most powerful AI projects and ignore the question of whether they’re involved in critical decision-making.

“By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology,” Newsom wrote.

Supporters of the bill are upset. They think Newsom simply caved to pressure from influential Big Tech lobbyists and investors, who fiercely opposed the legislation and claimed it would stifle innovation.

But others say it’s not that simple. According to them, SB 1047 was a flawed bill and should have targeted real risks today rather than distant fears of superintelligence or disasters.

Besides, the legislation, designed to make AI developers liable for harms caused by their models, could be seen as overkill – after all, do we hold gun manufacturers liable for mass shootings?

Are risks not real today?

David Brauchler, technical director at cybersecurity consulting firm NCC Group, points out that grim future scenarios have not yet proven likely or imminent.

Of course, one could argue that the vetoed bill concerned AI models yet to be created – models that could be far larger than today's and, technologists say, capable of causing catastrophic events or massive cyberattacks.

“Models, especially language models, have not yet produced evidence of novel risks in the form of discovering new or easy means to create widescale damage (explosives or chemical weapons), empowering otherwise-benign threat actors to commit acts of harm, or the model otherwise itself going rogue and conducting impactful, malicious behavior,” explains Brauchler.

Moreover, AI safety tests are unlikely to uncover and mitigate these risks were they to arise, he says: “This bill appeared to address a problem beyond the capabilities of existing models, instead shifting liability to AI developers for damage that could be easily misinterpreted as being caused by an AI model.”

According to Brauchler, it’s very important to distinguish between the model itself and the actions of a threat actor that incorporates AI.

“The bill put the model developers at risk of liability instead of focusing on the threat actor. The critical harm exclusions were not delineated strongly enough to protect developers and, consequently, could have discouraged AI innovation,” said the expert.

Brauchler adds that while many AI regulation bills focus on the computational power, model size, or monetary cost required to train a model, these are the wrong metrics to use.

“There is not a direct correlation between model size and risk, and these bills mistakenly address the computational power required to train large language models and overlook that small, specialized, and powerful models may be far more equipped to do harm than large natural language processing [models],” he says.

Open letters didn’t help

On the other hand, academics and quite a few members of the AI crowd lament the killing of SB 1047. Three weeks ago, for example, at least 113 current and former employees of leading AI companies such as OpenAI, Google DeepMind, Anthropic, Meta, and xAI published an open letter in support of the bill.

“We believe that the most powerful AI models may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure,” said the letter.

“It is feasible and appropriate for frontier AI companies to test whether the most powerful AI models can cause severe harms, and for these companies to implement reasonable safeguards against such risks.”

In September, 50 academics, including Geoffrey Hinton, a University of Toronto professor known as the godfather of AI, also addressed Newsom in an open letter, describing SB 1047 as a “reasonable” and important deterrent to the fast deployment of unsafe models.

Geoffrey Hinton. Image by Shutterstock.

“As per their previous commitments, developers are already taking steps today to evaluate their systems and keep risks at a reasonable level. This is very good, but we do not think that it should be optional. Voluntary commitments are insufficient,” said the letter by the academics.

“AI developers face immense pressure to release models quickly, and it is the unfortunate reality that these commercial incentives can erode safety and encourage companies to cut corners on quality control. Without regulation, those companies who take responsible precautions are placed at a competitive disadvantage.”

Will develop “workable guardrails”

Needless to say, the mood among the bill's supporters was grim after the veto. Still, Melissa Ruzzi, director of artificial intelligence at AppOmni, a SaaS security company, thinks not all hope is lost.

“We all know AI is very new and there are challenges in writing laws around it. We cannot expect the first laws to be flawless and perfect – this will most likely be an iterative process, but we have to start somewhere,” said Ruzzi.

“This may feel similar to the mandated use of seatbelts – some may see this as an invasion of freedom and unnecessary guardrail that imposes limitations to free movement, but the overall security of the general population is at stake, and that is the main motivator for those decisions.”

According to Ruzzi, laws are needed to ensure that all players follow the rules. The main challenge is the time it takes for governments to implement or change them.

California might return to AI safety in the near future. After all, alongside his veto, Newsom announced that his administration would work with academics to develop “workable guardrails” for deploying generative AI.

The experts include Fei-Fei Li, co-director of Stanford’s Human-Centered AI Institute, Mariano-Florentino Cuellar, president of the Carnegie Endowment for International Peace, and Jennifer Tour Chayes, dean of the College of Computing, Data Science, and Society at the University of California, Berkeley.