Tech billionaire Elon Musk said on Monday he thinks California lawmakers should “probably pass” a controversial AI safety bill requiring tech companies and AI developers to conduct safety testing on some of their own models.
The SB 1047 bill, also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would require any AI firm doing business in the Golden State to self-test the models in development before releasing them to the public, with the goal of establishing common safety standards.
Musk posted his views on his social media platform X on Monday, acknowledging that the decision will be “a tough call and will make some people upset.”
“But, all things considered, I think California should probably pass the SB 1047 AI safety bill,” the Tesla chief said.
“For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public,” he added.
“This is a tough call and will make some people upset, but, all things considered, I think California should probably pass the SB 1047 AI safety bill. For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk…”
— Elon Musk (@elonmusk) August 26, 2024
Critics of SB 1047 are concerned that regulation will hamper innovation, giving America's adversaries – particularly China – an edge in the race to develop AI and ultimately harming national security. Others fear that regulation would drive investment out of the country.
According to California’s legislative database, SB 1047 is one of 65 bills involving AI regulation introduced this legislative session – although many of them have already died in their chambers.
These include measures to ensure all algorithmic decisions are proven unbiased and to protect the intellectual property of deceased individuals from exploitation by AI companies.
Earlier in the day, Microsoft-backed OpenAI voiced support for California’s AB 3211, another AI bill that would require tech companies to label AI-generated content, ranging from harmless memes to deepfakes aimed at spreading misinformation about political candidates.
With a third of the world's population facing elections this year, experts are concerned about the role AI-generated content will play in disinformation campaigns – it has already proved a prominent issue in some elections, such as Indonesia's.