Big Tech unhappy about California’s AI safety bill: why?


California is pushing legislation to force companies building large AI models to do safety testing. The industry is panicking – of course – and, quite typically, talking about painful hits to innovation. Who’s right and who’s wrong?

Imagine you’re a company building passenger planes. You don’t do enough safety testing but still release new aircraft and act shocked when a couple of them crash, killing hundreds. Of course, you will be held liable and suffer deserved consequences.

Now, imagine you’re a tech firm developing search engines. If someone decides to look for advice on how to, say, poison someone and actually finds a detailed instruction on how to do it on your product, you will probably not be held liable – thanks to the infamous Section 230.

So far, so good. But what if you build an AI assistant that suddenly begins causing mass casualty events? Is it more like a defective plane or a devious search engine?

Playing catch-up

The whole of Silicon Valley is now contemplating this question, of course. That’s because the Golden State wants to pass a new AI safety bill called Senate Bill 1047 that would force companies that spend more than $100 million on training large AI models to do thorough safety testing.

If the firms don’t, they would be liable if their systems led to a “mass casualty event” or more than $500 million in damages in a single incident.

Politicians across the US, especially on the left, have lately been criticized for not cracking down hard enough on social media companies and are now trying to be more aggressive in going after Big Tech.

“We’ve seen with other technologies that we don’t do anything until well after there’s a big problem,” said California state senator Scott Wiener who wrote SB 1047.

“Social media had contributed many good things to society but we know there have been significant downsides to social media, and we did nothing to reduce or to mitigate those harms. And now we’re playing catch-up. I prefer not to play catch-up.”

Scott Wiener. Image by Getty Images.

The speed of AI development is breathtaking, indeed – and California wants to keep up. The legislators have the general public behind them – polls have shown that a significant majority of Americans believe that AI developers should be liable for the potential harms of the tools they’re creating.

If passed, SB 1047 would require companies behind the world’s largest and most advanced AI models to take steps to guarantee their safety before releasing the models to the public. It doesn’t sound illogical, really.

However, even though the bill has endorsements from “Godfather of AI” Geoffrey Hinton and Yoshua Bengio, two of the most-cited AI researchers in the world, the industry – surprise, surprise – is baring its teeth and fiercely opposing any kind of regulation. Who is in the right?

An important battleground

Meta’s chief AI scientist Yann LeCun has probably been the loudest voice opposing SB 1047. In an X post, he wrote that “regulating basic technology will put an end to innovation.”

“The strangest aspect of all this is that all of these regulations are based on completely hypothetical science fiction scenarios that very, very few people in the field believe are plausible,” wondered LeCun.

Clement Delangue, the CEO of HuggingFace, called the bill a “huge blow” to both Californian and US innovation, and TechNet, a tech trade group, is preaching caution because, allegedly, moving too quickly could stifle – of course – innovation.

California is obviously an important battleground because its legislators tend to be progressive and believe in aggressive consumer protection. Yet it is also a state where the biggest tech and AI companies are based.

“Let’s not overregulate an industry that is located primarily in California, but doesn’t have to be, especially when we are talking about a budget deficit here,” said Dylan Hoffman, executive director for California and the Southwest for TechNet, in an interview with The New York Times.

Regulatory compliance can indeed eat up an excessive amount of companies’ resources and discourage them from doing anything complicated or simply bolder than usual.

“We should not impair our technology industry in its effort to innovate. First, because we need innovation as a society; and further, because we also know our adversaries globally will not put up comparable red tape. We must avoid a disadvantage to American players in a booming industry,” Yoann E. A. Le Bihan, a tech attorney, told Cybernews.

“It seems illogical to treat equally the potential for harm of GPT-5 as a large-language model generating text vs. a fully autonomous AI model driving cars – or, in extremis, an AI-based system for the operation of a nuclear plant.”

Bob Rogers, the CEO of Oii.ai, a supply chain AI company based in California, also thinks state legislators need to be cautious: “California needs to think carefully about how to word the legislation so that developers aren’t constantly thinking to themselves ‘If I build this, will I get sued?’”

Rich giants can afford regulation

Proponents of SB 1047 are perplexed, however, and some of them wonder whether the critics have actually read the same bill.

First, the criteria California is proposing for large models and the safety requirements are similar to what the European Union already includes in its recently passed AI Act, Andrew Gamino-Cheong, chief technology officer and co-founder of Trustible, an AI governance and compliance software company, told Cybernews.

Moreover, the regulatory burdens would only fall on companies doing $100 million training runs and building expansive “covered models” – something only the biggest tech firms, such as Google or Meta, can afford.


According to writer Zvi Mowshowitz who covers the world of AI, the threshold excludes every released AI model so far, including GPT-4, Claude Opus, and the current versions of Google Gemini.

But even if a future model does cross that threshold, it will surely be built by a large and rich tech giant – in other words, a company certainly financially capable of dealing with any regulation properly.

Besides, the simple fact of the matter is that organizations must be responsible for the consequences of their technology, Jack Berkowitz, chief data officer at Securiti.AI, a California-based cybersecurity company, told Cybernews.

“At its core, SB 1047 emphasizes accountability for the harm caused by technology, not unlike the liability a power company would face if a wildfire were to ignite due to a faulty powerline,” said Berkowitz.

The community of machine learning researchers is indeed split, and about half of them do believe that powerful AI systems could be catastrophically dangerous.

Sam Altman. Image by Shutterstock.

Sam Altman, the CEO of OpenAI, the startup behind the ChatGPT bot, himself admitted in his Congressional testimony: “If this technology goes wrong, it can go quite wrong.”

Other researchers say there’s nothing to worry about, certainly not about mass casualty scenarios. Well, if all the worries are nonsensical, why the protest about potential liability? If you’re so sure everything’s kosher, even the strictest bills shouldn’t be a concern, right?

A worried glance ahead

“Generally, the current large language models are pretty safe – if the content they generate is vetted and tested by humans. Don’t forget, the tech itself isn’t bad, it’s what humans do with it that could potentially cause harm. Bad actors will always find ways to misuse tech,” Rogers told Cybernews.

“Most users of the current AI models are trying to scale their business or improve productivity or generate marketing material or add to their tech stack – they aren’t out to take down humanity.”

Gamino-Cheong agrees: “The current generation of models don't show any signs of being dangerous from an existential standpoint, and it's not clear whether using text, images, and audio data alone are enough to create the ability for abstract reasoning that would create those risks.”

However, it’s the future we should probably worry about more, Rogers pointed out. It’s certainly possible for autonomous weaponized systems like smart drones – killer robots – to take off in the near future, for example.

Someone might also develop an AI system with the potential to influence the allocation of its own resources, such as hardware and energy. If and when that happens, it will be important to already have regulatory barriers in place.

“California AI companies are investing billions of dollars in AI, and talking about making that trillions. Policymakers have been caught repeatedly off guard by the capabilities of the models they develop,” wrote Mowshowitz.

“SB 1047 is an admirable effort to get ahead of the ball, and make sure companies that spend tens or hundreds of millions of dollars on a new model are checking if their models can commit catastrophic large-scale crimes.”

Finally, in any more or less healthy democracy, it’s always a good idea for legislators to check and respond to the public mood – and in this case, Californians seem to care about AI safety deeply.

In a poll recently commissioned by the Center for AI Safety Action Fund, 86% of participants said it was an important priority for California to develop AI safety regulations, and 77% supported the proposal to subject AI systems to safety testing.