California governor Gavin Newsom has signed some of America’s toughest laws yet regulating AI, cracking down on both dangerous deepfakes that could influence elections and the use of the technology in Hollywood.
“Home to the majority of the world’s leading AI companies, California is working to harness these transformative technologies to help address pressing challenges while studying the risks they present,” Newsom’s office said in a press release.
One of the new laws, AB 2655, requires large online platforms such as Facebook or X to remove or label AI deepfakes related to elections during specified periods and requires them to provide mechanisms to report such content.
The law also authorizes candidates, elected officials, elections officials, the Attorney General, and a district attorney or city attorney to seek injunctive relief against a large online platform for noncompliance with the act.
Another law, AB 2355, requires disclosures on AI-generated political ads. In theory, this means the Donald Trump presidential campaign could no longer get away with posting AI deepfakes of Taylor Swift appearing to endorse him online.
The last two AI laws signed on Tuesday, which the nation’s largest film and broadcast actors union, SAG-AFTRA, pushed for, create new standards for California’s media industry.
One requires studios to obtain permission from an actor before creating an AI-generated replica of their likeness or voice. Another bans studios from creating digital replicas of deceased performers without consent from their estates.
“Safeguarding the integrity of elections is essential to democracy, and it’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation – especially in today’s fraught political climate,” said Newsom.
“These measures will help to combat the harmful use of deepfakes in political ads and other content, one of several areas in which the state is being proactive to foster transparent and trustworthy AI.”
The governor is still considering 38 AI-related bills in total, including the contentious AI safety bill, Senate Bill 1047, which would force companies that spend more than $100 million on training large AI models to conduct thorough safety testing.
If the firms don’t, they would be liable if their systems led to a “mass casualty event” or more than $500 million in damages in a single incident.
The criteria California is proposing for large models, and the accompanying safety requirements, are similar to those the European Union already includes in its AI Act. But critics argue that overly eager regulation will stifle innovation in the Golden State.