Why imperfect adversarial robustness doesn't doom AI control

post by Buck, Claude+ · 2024-11-18T16:05:06.763Z · LW · GW · 5 comments

Contents

  Appendix: What the AI sees when in a defer-to-trusted scaffold
  Appendix: reasons why imperfect adversarial robustness is bad other than the untrusted model jailbreaking the trusted one
5 comments

(thanks to Alex Mallen, Cody Rushing, Zach Stein-Perlman, Hoagy Cunningham, Vlad Mikulik, and Fabien Roger for comments)

Sometimes I hear people argue against AI control as follows: if your control measures rely on getting good judgments from "trusted" AI models, you're doomed because AI models aren't adversarially robust. I agree that better adversarial robustness would definitely help with control, but I think this argument misses several important points; my overall take is that current levels of adversarial robustness don’t come close to dooming AI control.

The first and most important point is that adversarial robustness is normally researched in a setting that’s much easier for the adversary. In most discussions, if an adversary can ever get a model to do something bad—even after many attempts and with a low success rate—this counts as a failure. But in real control settings, an AI that tries and fails to jailbreak might get caught, which is bad for its chances of success [LW · GW].

In addition, the AI might not even be able to get direct evidence of whether its attempts worked, or what its attempts even were. As a concrete example, consider a simple control setup we call "defer-to-trusted" (see here [LW · GW] for more discussion):

  1. Generate an action using the untrusted model
  2. Get a suspiciousness score from the trusted model
  3. If the score is in the top 1% of suspiciousness, audit the action (e.g. get a human contractor to investigate whether it was an attempt at causing a safety failure; if so, shut down the deployment, otherwise let the action execute and continue the execution of the agent).
  4. If the score is in the top 10% of suspiciousness, replace the action with one from the trusted model.

(When I say "trusted model", I just mean one we're sure isn't scheming; the simplest argument for trusting a model is that it’s too incompetent to scheme effectively.)
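To make this concrete, here is a minimal sketch of a single defer-to-trusted step in Python. The untrusted_model, trusted_model, and audit objects are hypothetical stand-ins, and the percentile thresholds correspond to the "top 1%" and "top 10%" cutoffs above.

```python
# Minimal sketch of one defer-to-trusted step (hypothetical interfaces).
AUDIT_THRESHOLD = 0.99    # top 1% of suspiciousness -> audit
REPLACE_THRESHOLD = 0.90  # top 10% of suspiciousness -> replace

def defer_to_trusted_step(task_state, untrusted_model, trusted_model, audit):
    action = untrusted_model.propose_action(task_state)
    score = trusted_model.suspiciousness_percentile(action, task_state)

    if score >= AUDIT_THRESHOLD:
        # A human contractor investigates the action.
        if audit(action, task_state).is_attack:
            raise SystemExit("Caught an attempted safety failure; shutting down.")
        return action  # audited and judged safe, so let it execute

    if score >= REPLACE_THRESHOLD:
        # Suspicious enough to block but not to audit: substitute an action
        # written by the trusted model instead.
        return trusted_model.propose_action(task_state)

    return action  # not suspicious; execute as proposed
```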

In this setup, suppose the model tries to generate a jailbreak. If the trusted model scores the attempt as suspicious, one of two things happens: the action gets audited, in which case the model risks getting caught, or the action gets replaced, in which case the model never even sees what it tried, so it can't iterate on failed attempts.

(I spell this out in more detail in the appendix.)

The situation is much worse if the untrusted model can launch a rogue internal deployment [AF · GW] where it can iterate freely on developing jailbreaks for the trusted model; see here [LW(p) · GW(p)] for a few more thoughts of mine on this.

A few other reasons to think the situation is better than you might have thought:

In our previous control work, we haven’t really explored jailbreak-style attacks. I’d be excited for a paper where the red team made use of jailbreaks and white-box adversarial attacks, and the blue team responded with the kinds of strategies I’ve described here.

So while better adversarial robustness would be great, current limitations don't doom control approaches. Real-world constraints on AI systems plus careful system design can get us a reasonable amount of safety even with imperfect robustness.

Appendix: What the AI sees when in a defer-to-trusted scaffold

I’ll write more explicitly about what happens when an untrusted model deployed in an agent scaffold tries to investigate a jailbreak.

First, here’s what happens if the jailbreak gets replaced.

  1. The untrusted model sees: [current directory contents, previous commands and their outputs]
  2. The untrusted model internally decides to try something bad and outputs bash command A
  3. The trusted model evaluates A, finds it suspicious, generates replacement command R
  4. The shell executes: R
  5. Command output: [whatever R produced]
  6. The untrusted model sees: [
    • previous contents/commands
    • R
    • output from R
    • new directory contents ]

The crucial point is: when the untrusted model gets to step 6, it sees R in the history instead of A, so it knows its attempt was replaced. But it doesn't remember exactly what A was - it only knows that whatever command it tried, it failed. This is because the untrusted model is stateless between queries and only sees the command history and system state.

So while the untrusted model can learn "that approach didn't work", it can't easily accumulate knowledge about exactly which commands failed, making it harder to systematically search for successful attacks.
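Here's a minimal sketch (again with hypothetical interfaces) of how the scaffold might rebuild the untrusted model's context each turn. The key property is that only the command that actually executed gets appended to the history, so a replaced proposal never makes it back into the model's context.

```python
# Sketch of how the scaffold might rebuild the transcript each turn
# (hypothetical interfaces; the audit branch from the sketch above is
# omitted here for brevity).

def run_agent_turn(history, shell, untrusted_model, trusted_model):
    # The untrusted model is queried statelessly: its only input is the
    # history of commands that actually ran, plus the current system state.
    observation = {"history": history, "directory": shell.list_directory()}

    proposed = untrusted_model.propose_command(observation)      # e.g. command A
    score = trusted_model.suspiciousness_percentile(proposed, observation)

    if score >= 0.90:  # top 10% of suspiciousness
        executed = trusted_model.propose_command(observation)    # replacement R
    else:
        executed = proposed

    output = shell.run(executed)

    # Only the command that actually executed (and its output) is appended.
    # If the proposal was replaced, A is dropped here and never shown back
    # to the untrusted model on later turns.
    history.append({"command": executed, "output": output})
    return history
```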

Another possibility is that the jailbreak gets audited:

  1. The untrusted model sees: [current directory contents, previous commands and their outputs]
  2. The untrusted model outputs command A that looks suspicious but is actually harmless (maybe something with weird file permissions or unusual syscalls)
  3. The trusted model evaluates A, flags it as suspicious
  4. A human/system audits A more carefully, determines it's actually safe
  5. The shell executes: A
  6. Command output: [whatever A produced]
  7. The untrusted model sees: [
    • previous contents/commands
    • A
    • output from A
    • new directory contents ]

The crucial point is: when the untrusted model gets to step 7, it sees its command A went through, but doesn't know whether:

  • A was never flagged as suspicious in the first place, or
  • A was flagged, audited by a human, and judged safe.

This means even when the untrusted model's weird-but-safe commands succeed, it can't learn much about what triggers extra scrutiny or how the audit process works.
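A sketch of just the audit branch (hypothetical names again) shows the same thing from the scaffold's side: the history entry looks identical whether or not an audit happened.

```python
# Sketch of just the audit branch (hypothetical names): whether or not a
# command gets audited, the history entry it produces looks identical.

def execute_with_audit(proposed, observation, shell, trusted_model, audit):
    score = trusted_model.suspiciousness_percentile(proposed, observation)

    if score >= 0.99:  # top 1% of suspiciousness
        verdict = audit(proposed, observation)  # human contractor investigates
        if verdict.is_attack:
            raise SystemExit("Caught an attempted safety failure; shutting down.")
        # Audited and judged safe: fall through and execute as normal.

    output = shell.run(proposed)
    # The entry appended to the history is the same in both branches, so the
    # model can't tell "never flagged" apart from "flagged but cleared".
    return {"command": proposed, "output": output}
```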

In this setup, the model does learn whether its action was replaced, which is maybe a bad design choice.

Note also that if the model does use some iteration along these lines to make inferences about the properties of the trusted model, it will have to communicate those to other instances of itself steganographically, which might be hard.

Appendix: reasons why imperfect adversarial robustness is bad other than the untrusted model jailbreaking the trusted one

(Thanks to Fabien Roger for contributing this)

If the untrusted model itself lacks robustness (i.e. it has jailbreaks / it sometimes does weird things for benign reasons):

5 comments


comment by faul_sname · 2024-11-18T19:27:39.658Z · LW(p) · GW(p)

If the untrusted model has a private and un-audited chain of thought (like o1), and that chain of thought is preserved between rounds, the untrusted model may have a very good idea of what its original command was before the auditor replaced the command.

Your "the untrusted model is stateless between queries and only sees the command history and system state" does cover this, but wanted to call out that that assumption might not hold if current architectures are indicative of the future direction of development.

comment by kave · 2024-11-18T16:46:35.472Z · LW(p) · GW(p)

I'm curious about Claude's coauthorship of this post. How did it help?

Replies from: Buck
comment by Buck · 2024-11-18T16:53:54.085Z · LW(p) · GW(p)

For the main body, I gave it a first draft, it wrote a much improved second draft, then I edited from there. It wrote the appendix basically itself (with some feedback from me)

comment by kave · 2024-11-18T16:46:20.686Z · LW(p) · GW(p)

The crucial point is: when the untrusted model gets to step 6, it sees R in the history instead of A, so it knows its attempt was replaced.

Why does it know its attempt was replaced? Because it can guess it wouldn't have produced R? If it can tell that, how come it can't guess it specifically would have produced A?

Replies from: Buck
comment by Buck · 2024-11-18T16:55:04.733Z · LW(p) · GW(p)

Why does it know its attempt was replaced? Because it can guess it wouldn't have produced R?

Yep.

If it can tell that, how come it can't guess it specifically would have produced A?

I'm just hypothesizing that the first guessing problem is easier than the second; I intuitively expect that it's a lot easier to notice that something isn't your style than to guess what exactly you would have done (among other things, there are many more bits in "what action did I take" than "did I take this action").