OpenAI says new models could come close to creating biological weapons


OpenAI said that its upcoming models will most likely be significantly more capable in biology, which raises the risk that someone could use them to build a dangerous bioweapon.

In a blog post, the creator of ChatGPT touted the power of advanced AI models to rapidly accelerate scientific discovery and benefit humanity.


“Soon, they could also accelerate drug discovery, design better vaccines, create enzymes for sustainable fuels, and uncover new treatments for rare diseases to open up new possibilities across medicine, public health, and environmental science,” said OpenAI.

However, the firm added that the very capabilities of these models could also be exploited by bad actors, or by merely curious and naive amateurs.

“The same underlying capabilities driving progress, such as reasoning over biological data, predicting chemical reactions, or guiding lab experiments, could also potentially be misused to help people with minimal expertise to recreate biological threats or assist highly skilled actors in creating bioweapons,” reads OpenAI’s statement.


So far, physical access to labs and sensitive materials remains a barrier. However, those barriers are not absolute, the firm said: “We expect that upcoming AI models will reach ‘high’ levels of capability in biology.”

OpenAI said it was taking a “multi-pronged approach” to implementing mitigations. The company is also stepping up the testing of such models, which will be trained to safely handle dual-use biological requests.

The company didn’t specify when a model that could hit this risky threshold will launch. But Johannes Heidecke, OpenAI’s head of safety systems, told Axios they were “expecting some of the successors of our o3 (reasoning model) to hit that level.”

Someone taking a selfie next to the OpenAI logo in silhouette form.
Image by NurPhoto via Getty Images

To be clear, though, OpenAI isn’t saying its models will be capable of creating new types of bioweapons.

Rather, the company believes its upcoming products could be misused by individuals or groups with little background in biology but an intent to do potentially dangerous things.

“Our approach is focused on prevention – we don’t think it’s acceptable to wait and see whether a bio threat event occurs before deciding on a sufficient level of safeguards,” said OpenAI.