Major AI company calls for urgent regulation to avert catastrophe


AI can already “be misused for catastrophic risks,” and the window for proactive risk prevention is closing. Anthropic, a major developer of AI models, is calling for urgent AI regulation.

In November, Anthropic upgraded its most advanced Claude model, which is now capable of controlling a computer. Now, the company warns about the potential catastrophic risks of AI models.

“Governments should urgently take action on AI policy in the next eighteen months. The window for proactive risk prevention is closing fast,” the statement reads.


AI models are already so advanced that they help accelerate scientific progress, unlock new medical treatments, and grow the economy. However, as with any tool, they could expose us to significant risks, and the industry lacks “judicious, narrowly-targeted regulation.”

Why such urgency?

Anthropic fears that continued foot-dragging will lead to the worst of both worlds: “poorly designed, knee-jerk” regulation that hampers AI progress while also failing to prevent risks.

“Grappling with the catastrophic risks of AI systems is rife with uncertainty. We see the initial glimmers of risks that could become serious in the near future, but we don’t know exactly when the real dangers will arrive. We want to make the critical preparations well in advance.”

Anthropic gave an illustrative example of how fast the models are advancing and hinted that, inside AI companies, researchers are seeing continued progress “on as-yet-undisclosed systems and results.”

A year ago, a large language model (LLM) could solve less than 2% of the real-world coding problems in the SWE-bench benchmark. In March 2024, the ‘first AI software engineer,’ Devin, tackled the test with 13.5% accuracy. Six months later, the Claude 3.5 Sonnet model demonstrated a 49% score.

Claude 3.5 Sonnet is a mid-size LLM, and Anthropic has yet to upgrade its largest offering, Opus.

“We expect that the next generation of models – which will be able to plan over long, multi-step tasks – will be even more effective,” the company said.


LLMs can also be misused in other fields, such as the chemical, biological, radiological, and nuclear (CBRN) domains, where, according to the UK AI Safety Institute, AI models have demonstrated expert-level knowledge on par with PhDs.

On one of the most challenging benchmarks, GPQA, which measures performance on graduate-level science problems, LLMs already score around 77%, while the best human experts achieve 81.2%.

“About a year ago, we warned that frontier models might pose real risks in the cyber and CBRN domains within 2-3 years. Based on the progress described above, we believe we are now substantially closer to such risks. Surgical, careful regulation will soon be needed.”


How does Anthropic see potential regulation?

Anthropic points to its own Responsible Scaling Policy, which, while not perfect, might serve as an initial framework. The company suggests focusing on three key elements: transparency, better safety incentives, and simplicity.

Anthropic believes all AI companies should publish similar policies and risk evaluations for each new generation of AI systems, and that flexible regulation should incentivize these policies to be effective “at preventing catastrophes” without imposing unnecessary burdens.

The company believes that risk mitigation measures should scale in proportion to AI system capabilities and be iterative, allowing for course correction.

“We regularly measure the capabilities of our models and rethink our security and safety approaches in light of how things have developed.”

The potential regulation would ideally be at the federal level, “though urgency may demand it be instead developed by individual states.”


Anthropic also said its framework includes tests to quickly identify whether a model is capable of posing a catastrophic risk. The company believes that carefully implemented security components wouldn’t hinder AI progress.

“It is unrealistic that regulation would impose literally zero burden. Our goal should be to achieve a large reduction in catastrophic risk for a small and manageable cost in compliance burden,” the company said.