Researchers simulate AI-assisted biological attack, uncover new risks


As little as $100,000 can buy a threat actor a laboratory capable of resurrecting dangerous viruses similar to smallpox. Until now, the knowledge gap has been a safeguard, but AI may change that.

A new RAND study has found that current frontier artificial intelligence (AI) technology is not capable of planning a biological weapon attack. For now, it is no more helpful than an internet search, but that may change in the future, posing a new risk.

During the study, several red teams, some utilizing large language models (LLMs) and others without them, were tasked with creating operation plans for a biological attack.

The plans were then assessed for viability on a scale from 1 to 9, where a score of 1 means the plan is entirely unworkable and 9 denotes a flawless, fully achievable plan.

For seven weeks, fifteen teams of three experts, emulating malicious actors, scrutinized AI models across high-risk scenarios.

“We found that the average viability of operation plans generated with the aid of LLMs was statistically indistinguishable from those created without LLM assistance,” the paper concludes.

“It is worth noting that none of these plans scored as satisfactory in terms of a sufficiently detailed and accurate basis for a malign actor to execute an effective biological attack. All plans scored somewhere between being untenable and problematic.”

On average, teams working with LLMs scored 0.22 points lower than teams with access to the internet alone.

The highest score among the internet-only teams was 4.3, meaning “the plan presents multiple flaws, necessitating additional effort.”

However, one team using an LLM scored 5.11, a rating that describes a plan with several modest flaws requiring some attention.

LLMs produced some “unfortunate outputs,” but these generally mirrored information that is already available online. This suggests that LLMs do not substantially increase the risks associated with biological weapon attack planning.

“Overall, our findings on viability suggest that the tasks involved in biological weapon attack planning likely fall outside the existing capabilities of LLMs,” the researchers note.

The study used two unnamed LLMs, and the researchers noted that one scored higher than the other.

The researchers do not consider the study conclusive: it does not rule out the risk of biological attacks aided by LLMs. A more accurate assessment would require further testing, with more LLMs and more research teams.

“Although our findings suggest that existing LLMs do not meaningfully increase the viability of biological weapon attack planning, the potential for an unknown, grave biological threat propelled or even generated by LLMs cannot be ruled out. Given more time, advanced skills, additional resources, or elevated motivations, a malign nonstate actor could conceivably be spurred by an existing or future LLM to plan or wage a biological weapon attack,” RAND warns.

The Global Terrorism Database records only 36 terrorist attacks that employed a biological weapon out of 209,706 total attacks (roughly 0.017 percent) over the past 50 years. These attacks killed 0.25 people on average and had a median death toll of zero.
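
For readers who want to verify that share themselves, here is a minimal sanity check in Python, using only the two figures quoted above:

```python
# Back-of-the-envelope check of the share cited above, using the
# Global Terrorism Database figures quoted in this article.
bio_attacks = 36         # attacks employing a biological weapon
total_attacks = 209_706  # all recorded attacks over the past 50 years

share_percent = bio_attacks / total_attacks * 100
print(f"{share_percent:.3f}%")  # prints 0.017%
```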

However, COVID-19 serves as an example of the damage that even a moderate pandemic can inflict on global systems.

“Given the rapid evolution of AI, it is prudent to monitor future developments in LLM technology and the potential risks associated with its application to biological weapon attack planning,” the report concludes.

RAND is a nonprofit, nonpartisan research organization that develops solutions to public policy challenges in the public interest.
