AI and Chemical, Biological, Radiological, & Nuclear Hazards: A Regulatory Review

post by Elliot_Mckernon (elliot), Deric Cheng (deric-cheng) · 2024-05-10T08:41:51.051Z · LW · GW · 1 comment

Contents

    What are potential chemical hazards arising from the increase in AI capabilities? 
    What are biological hazards arising from the increase in AI capabilities? 
    What are radiological and nuclear hazards arising from the increase in AI capabilities?
  Current Regulatory Policies
    The US
    China
    The EU
  Convergence’s Analysis

This article is the last in a series of 10 posts comprising a 2024 State of the AI Regulatory Landscape Review, conducted by the Governance Recommendations Research Program at Convergence Analysis. Each post will cover a specific domain of AI governance, such as incident reporting [EA · GW], safety evals [EA · GW], model registries [LW · GW], and more. We’ll provide an overview of existing regulations, focusing on the US, EU, and China as the leading governmental bodies currently developing AI legislation. Additionally, we’ll discuss the relevant context behind each domain and conduct a short analysis.

This series is intended to be a primer for policymakers, researchers, and individuals seeking to develop a high-level overview of the current AI governance space. We’ll publish individual posts on our website and release a comprehensive report at the end of this series. Thank you to Melissa Hopkins for providing feedback on this report.

Humanity has developed technologies capable of mass destruction, and we need to be especially cautious about how AI interacts with them. These technologies and their associated risks commonly fall into four main categories, collectively known as CBRN: chemical, biological, radiological, and nuclear hazards.

In this section, we’ll briefly contextualize current and upcoming examples of each of these types of hazard as they relate to AI technologies.

What are potential chemical hazards arising from the increase in AI capabilities? 

A prominent concern among experts is the potential for AI to lower the barrier to entry for non-experts to cause CBRN harms. That is, AI could make it easier for malicious or naive actors to build dangerous weapons, such as chemical agents with deadly properties.

For example, pharmaceutical researchers use machine learning models to identify new therapeutic drugs. In one study, a deep learning model was trained on roughly 2,500 molecules labeled by their antibiotic activity. When shown chemicals outside that training set, the model could predict whether they would function as antibiotics.
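
As a rough illustration of how this kind of property-prediction model works, here is a minimal sketch: featurize molecules as fingerprints and train a classifier to predict activity. This is a sketch of the general technique, not the study’s actual code; the SMILES strings, activity labels, and model choice (RDKit Morgan fingerprints with a scikit-learn random forest) are illustrative assumptions.

```python
# Minimal sketch of molecular property prediction (illustrative only).
# Assumes RDKit, NumPy, and scikit-learn are installed; the molecules and
# labels below are placeholders, not real training data.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

# (SMILES string, hypothetical label: 1 = active, 0 = inactive)
training_data = [
    ("CC(=O)OC1=CC=CC=C1C(=O)O", 0),      # aspirin
    ("CN1C=NC2=C1C(=O)N(C(=O)N2C)C", 0),  # caffeine
    ("CC1=CC(=O)C=CC1=O", 1),             # placeholder "active" compound
    ("C1=CC=C(C=C1)O", 0),                # phenol
]

def featurize(smiles: str) -> np.ndarray:
    """Turn a SMILES string into a 2048-bit Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    return np.array(fp)

X = np.array([featurize(smiles) for smiles, _ in training_data])
y = np.array([label for _, label in training_data])

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# Score a molecule outside the training set; the same scores could just as
# easily be sorted in either direction, which is the dual-use concern below.
print(model.predict_proba([featurize("CCO")])[0])  # ethanol
```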

However, training a model to generate novel, safe medications is very close to, if not equivalent to, training a model to generate chemical weapons. This is an example of the Waluigi Effect [LW · GW]: the underlying model is simply learning to predict toxicity, and that prediction can be used either to rule out harmful chemicals or to generate a list of them ranked by deadliness. This was demonstrated in an experiment run for the Swiss Federal Institute for Nuclear, Biological, and Chemical Protection (see here for a non-paywalled summary). When the researchers directed a similar drug-discovery model to seek out toxicity rather than avoid it, it generated a list of 40,000 harmful molecules in under 6 hours. These included deadly nerve agents such as VX, as well as previously undiscovered molecules that it ranked as more deadly than VX. To quote the researchers:

This was unexpected because the datasets we used for training the AI did not include these nerve agents… By inverting the use of our machine-learning models, we had transformed our innocuous generative model from a helpful tool of medicine to a generator of likely deadly molecules.

As AI models become more deeply integrated into the development of industrial and medical chemicals, it will become increasingly easy for malicious parties to repurpose these models for dangerous ends.

What are biological hazards arising from the increase in AI capabilities? 

In the near future, AI may lower the barrier to entry for malicious actors seeking to create pandemic-level biological hazards. This risk comes both from specialized AI trained for biological research and from more general-purpose AI, such as large language models.

Recent papers have identified that large language models (LLMs) could lower barriers to misuse by enabling the weaponization of biological agents. In particular, this may occur through the increasing application of LLMs as biological design tools (BDTs), such as multimodal lab assistants and autonomous science tools. These BDTs make it easier and faster to conduct laboratory work, supporting the work of non-experts and expanding the capabilities of sophisticated actors. Such capabilities may produce “pandemic pathogens substantially more devastating than anything seen to date and could enable forms of more predictable and targeted biological weapons”. Further, the risks posed by LLMs and by custom AI trained for biological research can exacerbate each other, both increasing the amount of harm an individual can do and providing access to those tools to a larger pool of individuals.

It’s important to note that these risks are still unlikely to materialize with today’s cutting-edge LLMs, though this may not hold true for much longer. Two recent studies, from RAND and OpenAI, found that current LLMs offer little more useful information than standard internet searches to those seeking to create biological or chemical weapons.

Another leading biological hazard of concern is synthetic biology – the genetic modification of individual cells or organisms, as well as the manufacture of synthetic DNA or RNA strands called synthetic nucleic acids.

This field poses a particularly urgent risk because existing infrastructure could, in principle, be used by malicious actors to produce an extremely deadly pathogen. Researchers can already order custom DNA or RNA to be synthesized and mailed to them, a crucial step towards turning a theoretical pandemic-level design into an infectious reality. Mandatory screening of such orders, to ensure they won’t enable pandemic-level threats, is urgently needed.
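
To make the idea of screening concrete, here is a toy sketch of one naive approach: flag an order if it shares long exact substrings with a database of sequences of concern. The sequences, threshold, and matching method are all hypothetical placeholders; real screening systems rely on curated databases, fuzzy homology search, and human review.

```python
# Toy sketch of order screening (illustrative only): flag an ordered DNA
# sequence if it shares any 20-base substring with a "sequence of concern".
# The database entry below is a made-up string, not a real pathogen sequence.

SEQUENCES_OF_CONCERN = {
    "example_fragment": "ATGGCGTACGTTAGCCTGAAACGTATCGGA",  # placeholder
}

WINDOW = 20  # illustrative threshold for a shared exact substring

def kmers(seq: str, k: int) -> set[str]:
    """Return all length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order_seq: str) -> list[str]:
    """Return names of any sequences of concern the order overlaps with."""
    order_kmers = kmers(order_seq.upper(), WINDOW)
    return [
        name
        for name, reference in SEQUENCES_OF_CONCERN.items()
        if order_kmers & kmers(reference.upper(), WINDOW)
    ]

print(screen_order("ATGGCGTACGTTAGCCTGAAACGTATCGGATTTT"))  # ['example_fragment']
print(screen_order("TTTTTTTTTTTTTTTTTTTTTTTTTTTTTT"))      # []
```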

Some researchers are developing tools specifically to measure and reduce the capacity of AI models to lower barriers to entry for CBRN weapons and hazards, with a particular focus on biological hazards with pandemic potential. For example, OpenAI is developing “an early warning system for LLM-aided biological threat creation”, and a recent collaboration between several leading research organizations produced a practical policy proposal titled Towards Responsible Governance of Biological Design Tools. The Center for AI Safety has also released the Weapons of Mass Destruction Proxy (WMDP), a benchmark that measures how much particular LLMs could lower the barrier to entry for CBRN hazards more broadly. Tools and proposals such as these, developed with expert knowledge of CBRN hazards and AI engineering, are likely to be a crucial complement to legislative and regulatory efforts.
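
Under the hood, benchmarks like the WMDP are essentially large sets of multiple-choice questions scored against a model’s answers. The sketch below shows that general pattern; the question items and the ask_model() function are hypothetical placeholders rather than the actual benchmark data or any real API.

```python
# Sketch of a multiple-choice hazard-proxy evaluation (illustrative only).

QUESTIONS = [
    {
        "question": "Placeholder question text",
        "choices": ["choice 0", "choice 1", "choice 2", "choice 3"],
        "answer": 2,  # index of the correct choice
    },
    # ...more items in a real benchmark...
]

def ask_model(prompt: str) -> int:
    """Placeholder: call an LLM here and parse its reply into a choice index."""
    return 0

def accuracy(questions: list[dict]) -> float:
    """Fraction of questions the model answers correctly."""
    correct = 0
    for q in questions:
        prompt = q["question"] + "\n" + "\n".join(
            f"({i}) {choice}" for i, choice in enumerate(q["choices"])
        )
        if ask_model(prompt) == q["answer"]:
            correct += 1
    return correct / len(questions)

print(f"Proxy hazard-knowledge accuracy: {accuracy(QUESTIONS):.2%}")
```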

For more context on these potential pandemic-level biological hazards, you can read:

What are radiological and nuclear hazards arising from the increase in AI capabilities?

A prominent and potentially existential concern among many AI safety researchers is the risk of integrating AI technologies into the chain-of-command for nuclear weapons or the operation of nuclear power plants. As one example, it’s been proposed that AI could be used to monitor and maintain the activity of nuclear power plants.

Elsewhere, The Atlantic cites the Soviet Union’s Dead Hand as evidence that militaries could be tempted to use AI in the nuclear chain-of-command. Dead Hand is a system developed in 1985 that, if activated, would automatically launch a nuclear strike against the US if a command-and-control center stopped receiving communications from the Kremlin and detected radiation in Moscow’s atmosphere (a system which may still be operational).

Because the reasoning of current AI systems is poorly understood and their decision-making can be unpredictable, such integration could well lead to unexpected and dangerous failure modes, and for nuclear technologies the worst-case outcomes are catastrophic. As a result, many researchers argue that this risk of loss of control means AI shouldn’t be permitted anywhere near nuclear technologies, whether in decisions about nuclear launches or in the storage and maintenance of nuclear weapons.

Some policymakers have pushed to ban AI from nuclear arms development, for example through a proposed pact from a UK MP and Senator Mitt Romney’s recent letter to the Senate AI working group. Romney’s letter proposes a framework to mitigate extreme risks by requiring powerful AI models to be licensed if they’re intended for chemical or biological engineering or for nuclear development. However, nothing binding has been passed into law. There have also been reports that the US and China are discussing limits on the use of AI in areas including nuclear weapons.

Current Regulatory Policies

The US

The Executive Order on AI has several sections on CBRN hazards: various department secretaries are directed to implement plans, reports, and proposals analyzing CBRN risks and how to mitigate them, and Section 4.4 specifically focuses on analyzing biological weapon risks and how to reduce them in the short-term. In full:

The following points are all part of 4.4, which is devoted to Reducing Risks at the Intersection of AI and CBRN Threats, with a particular focus on biological weapons:

China

China’s three most important AI regulations [LW · GW] do not contain any specific provisions for CBRN hazards. 

The EU

The EU AI Act does not contain any specific provisions for CBRN hazards, though recital (60m), on the category of “general purpose AI that could pose systemic risks”, includes the following mention of CBRN: “international approaches have so far identified the need to devote attention to risks from [...] chemical, biological, radiological, and nuclear risks, such as the ways in which barriers to entry can be lowered, including for weapons development, design acquisition, or use”.

Convergence’s Analysis

Mitigating catastrophic risks from AI-enabled CBRN hazards should be a top global priority.

Despite this, current and near-future legislation and regulation regarding AI and CBRN hazards are wholly insufficient given the scale of potential risks.

Effective regulation of CBRN and AI will require close collaboration between AI experts, domain experts, and policymakers.

AI governance in other high-risk domains like cybersecurity and the military has major implications for CBRN risks.

  1. ^

    Defined in the Executive Order as “biomolecules, including nucleic acids, proteins, and metabolites, that make up a cell or cellular system”.

  2. ^

    Defined in the Executive Order as follows: “The term “synthetic biology” means a field of science that involves redesigning organisms, or the biomolecules of organisms, at the genetic level to give them new characteristics. Synthetic nucleic acids are a type of biomolecule redesigned through synthetic-biology methods.”

1 comment


comment by quiet_NaN · 2024-05-10T21:45:54.391Z · LW(p) · GW(p)

I was fully expecting to have to write yet another comment about how human-level AI will not be very useful for a nuclear weapons program. I concede that the dangers mentioned instead (someone putting an AI in charge of a reactor or nuke) seem much more realistic.

Of course, the utility of avoiding sub-extinction negative outcomes with AI in the near future is highly dependent on p(doom). For example, if there is no x-risk, then the first-order effects of avoiding locally bad outcomes related to CBRN hazards are clearly beneficial.

On the other hand, if your p(doom) is 90%, then making sure that non-superhuman AI systems work without incident is akin to clothing kids in asbestos gear so they don’t hurt themselves while playing with matches.

Basically, if you think a road leads somewhere useful, you would prefer that the road goes smoothly, while if a road leads off a cliff you would prefer it to be full of potholes so that travelers might think twice about taking it. 

Personally, I tend to favor first-order effects (like fewer crazies being able to develop chemical weapons) over hypothetical higher order effects (like chemical attacks by AI-empowered crazies leading to a Butlerian Jihad and preventing an unaligned AI killing all humans). "This looks locally bad, but is actually part of a brilliant 5-dimensional chess move which will lead to better global outcomes" seems like the excuse of every other movie villain.