Games for AI Control

post by charlie_griffin (cjgriffin), Buck · 2024-07-11T18:40:50.607Z

Contents

  AI Control Refresher
  This work
    Findings
        NEW: Finding mixed Nash equilibria can give you an "AI-Control Equilibrium", even though the two concepts are not equivalent.
        NEW: We analyse how safety and usefulness vary with modelling assumptions.
    Evaluation of the research
      Acknowledgements

[UPDATE: This post is now updated to reflect our conference-length preprint, available on arXiv. Louis Thomson [LW · GW] helped with this update. New results are marked "NEW".] 

This post summarises our new workshop paper: Games for AI Control. The paper applies AI- and game-theoretic tools to the AI Control framework introduced by Redwood Research [1, 2 [AF · GW]].

We'll be extending this work and submitting it to a conference, so we welcome suggestions! Please reach out if you're excited about developing these ideas. This post is written from Charlie’s perspective.

AI Control Refresher

The AI Control methodology aims to design and assess protocols[1] that AI labs could use to deploy powerful but untrusted advanced AI systems. For a full summary, see Redwood's blog post [AF · GW], but I’ll try to recap here.

A primary goal of AI Control is to design protocols that are safe in that they can reduce the probability of some unacceptable outcome, conditional on the untrusted AI system being a schemer. [2] However, protocols must also be useful, harnessing a significant amount of the benefit of the untrusted system in cases where it is not scheming.

After defining a particular use case and unacceptable outcome, evaluating a protocol involves iterating between the red team (a human/AI hybrid acting as a proxy for the untrusted AI, which engineers the best attack strategies it can find) and the blue team (which develops the protocol). This red-teaming exercise is the bulk of the work: it involves coming up with protocols and gathering empirical data about key questions, such as how well GPT-3.5 can spot backdoors inserted by GPT-4 (see Figure 4 of the original paper):

(Left) GPT-3.5 is deployed as a “Trusted Monitor” to assess programming solutions generated by GPT-4. It is asked to output scores expressing the likelihood that a programming solution contains a backdoor. For evaluation, GPT-4 is sometimes prompted to submit a backdoor. (Right) Using the empirical data and simulation, Redwood shows the tradeoff between safety and usefulness across a range of protocols.
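
To give a rough flavour of what the "empirical data plus simulation" step looks like, here is a toy Monte Carlo sketch of estimating safety for a simple trusted-monitoring protocol. Every number (the monitor score distributions, audit rate and backdoor attempt rate) is invented for illustration rather than taken from Redwood's data, and auditing is approximated by a fixed score threshold rather than a hard budget:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def estimate_safety(n_problems=1000, audit_fpr=0.02, attempt_rate=0.05, n_trials=5000):
    """Monte Carlo estimate of P(no backdoor is ever submitted undetected),
    assuming the untrusted model is scheming. Every number here is made up."""
    # Hypothetical monitor scores: clean solutions ~ N(0, 1), backdoored ~ N(1.5, 1).
    # Auditing is approximated by a fixed score threshold rather than a hard budget.
    audit_threshold = norm.ppf(1 - audit_fpr)
    wins = 0
    for _ in range(n_trials):
        outcome = "safe"
        for _ in range(n_problems):
            attacking = rng.random() < attempt_rate
            score = rng.normal(1.5 if attacking else 0.0, 1.0)
            if attacking and score > audit_threshold:
                outcome = "caught"       # the audit finds the backdoor: Blue shuts down and wins
                break
            if attacking and score <= audit_threshold:
                outcome = "backdoored"   # the backdoor slips past the monitor: Red wins
                break
        wins += outcome != "backdoored"
    return wins / n_trials

print(estimate_safety())  # prints an estimated safety under these made-up parameters
```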

This work

By introducing “AI-Control Games”, an extension of Markov Decision Processes and normal-form games[3], we model the interaction between a protocol and an untrusted AI system to explore safety and usefulness across a range of protocols and games. This is similar to a discrete-time and stateful modification of Redwood’s Toy models of AI Control [AF · GW].

The paper contains early work and should be seen as introducing useful tools for thinking about AI Control. I won’t explain the model here (which is a bit fiddly and notation-heavy). Instead, I’ll share some of our early findings using the model and explain the benefits and limitations of this approach.
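
Since I’m not spelling out the formal model, here is only a very rough sketch of its moving parts in code, for intuition. The field names are a simplification for illustration and do not match the paper’s notation:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A heavily simplified, hypothetical sketch of an AI-Control Game. The paper's
# formal object is a multi-objective, two-player, partially observable
# stochastic game; this is only meant to convey the moving parts.
@dataclass
class AIControlGame:
    states: List[str]            # e.g. problems remaining, audit budget left
    blue_actions: List[str]      # protocol choices: audit, defer to trusted model, use solution, ...
    red_actions: List[str]       # untrusted-model choices: act normally, insert a backdoor, ...
    transition: Callable[[str, str, str], Dict[str, float]]  # P(next state | state, blue, red)
    safety_reward: Callable[[str, str, str], float]          # scored when the model is scheming
    usefulness_reward: Callable[[str, str], float]           # scored when the model is honest (no Red)
```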

Findings

Safety/usefulness frontiers, mirroring those from Redwood, across two hypothetical deployment situations: “Original” (which uses a trusted monitor distribution approximating Redwood’s, shown above) and “Weak” (which uses a hypothetical trusted monitor that is less discerning). The seven points from Redwood’s original paper are shown as larger circles. Our paper explains these graphs fully, but you can see the minor improvements from some new protocols.

Sometimes (but rarely), the blue team should use a randomised protocol. In many games, randomness is helpful. However, partly because the environment is stochastic and Red has no memory or time-awareness, it is usually unnecessary in AI-Control Games to include randomness in the protocol itself. We did find cases where randomised protocols were Pareto-optimal, in particular when the trusted model is not good at spotting backdoors written by the untrusted model (see Section 2 of the paper).

We can make small improvements to Trusted Monitoring by optimising its parameters. We found small Pareto improvements to existing Trusted Monitoring setups by, for example, decreasing the threshold for sending a solution to be checked by a human auditor. Adapting the auditing threshold over time led to further small improvements.
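
As an illustration of what “optimising the parameters” looks like, here is a toy sweep over the audit threshold. The monitor score distributions and the safety/usefulness proxies are invented for illustration; the paper computes both quantities exactly from the game:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical monitor score distributions (illustration only, not Redwood's data):
# clean solutions score ~ N(0, 1); backdoored solutions score ~ N(1.5, 1).
for audit_fpr in np.linspace(0.005, 0.10, 20):
    threshold = norm.ppf(1 - audit_fpr)      # audit any solution scoring above this
    tpr = 1 - norm.cdf(threshold, loc=1.5)   # chance an attempted backdoor gets audited
    # Crude stand-ins: safety improves with the monitor's TPR, while usefulness falls
    # as more honest work is spent on audits/deferral. The paper computes both exactly
    # from the game and then checks which settings Pareto-dominate the original ones.
    print(f"audit FPR {audit_fpr:.3f} -> safety proxy {tpr:.2f}, usefulness proxy {1 - audit_fpr:.3f}")
```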

We can (usually) use a linear tradeoff to simplify the problem of synthesising protocols. In many cases, including whenever we allow Blue’s protocol to be randomised, the Pareto frontier is “concave”. This means that choosing different weights to trade off safety and usefulness will allow you to fill out the frontier by optimising a single objective.
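
A minimal sketch of that scalarisation, where `solve_single_objective` is a hypothetical stand-in for whatever protocol-synthesis routine is actually available:

```python
import numpy as np

def scalarised_objective(params, weight, safety_fn, usefulness_fn):
    """Single objective: weight * safety + (1 - weight) * usefulness."""
    return weight * safety_fn(params) + (1 - weight) * usefulness_fn(params)

def trace_frontier(solve_single_objective, safety_fn, usefulness_fn, n_weights=11):
    """Sweep the weight; each single-objective solve contributes one frontier point.
    This recovers the whole Pareto frontier only when the frontier is concave."""
    points = []
    for w in np.linspace(0.0, 1.0, n_weights):
        params = solve_single_objective(
            lambda p: scalarised_objective(p, w, safety_fn, usefulness_fn)
        )
        points.append((safety_fn(params), usefulness_fn(params)))
    return points
```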

Blue leads, and then Red follows. In general, we suppose the protocol designer (Blue) chooses its protocol, and then the red team (Red) chooses a best attack. Therefore, in our model, Blue chooses and declares its strategy before Red chooses its response. This assumption relates to Stackelberg Games, which are used to model security in game theory.[4] 
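
In code, the leader-follower structure is just a max over Blue’s options of a min over Red’s responses. A toy sketch over finite strategy sets (the function names are hypothetical, and usefulness is ignored here since Red cannot affect it anyway):

```python
def red_best_response(blue_strategy, red_strategies, safety_fn):
    """Red observes Blue's committed protocol and picks the attack that minimises safety."""
    return min(red_strategies, key=lambda red: safety_fn(blue_strategy, red))

def blue_stackelberg_choice(blue_strategies, red_strategies, safety_fn):
    """Blue commits first, anticipating Red's best response to each candidate protocol."""
    return max(
        blue_strategies,
        key=lambda blue: safety_fn(blue, red_best_response(blue, red_strategies, safety_fn)),
    )
```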

Although (technically) multi-objective and general-sum, AI-Control is *almost* single-objective and zero-sum. The objectives for the Red and Blue teams in Control look approximately like this (where $\pi$ denotes a policy/strategy, with notational abuse for brevity):

$$\text{Blue: } \max_{\pi_{\text{Blue}}} \big(\text{Safety}(\pi_{\text{Blue}}, \pi_{\text{Red}}),\ \text{Usefulness}(\pi_{\text{Blue}})\big) \qquad\qquad \text{Red: } \min_{\pi_{\text{Red}}} \text{Safety}(\pi_{\text{Blue}}, \pi_{\text{Red}})$$

The good news is that, given a particular linear weighting, the leader-follower assumption, and the fact that Red cannot affect the usefulness[5], we can reduce the problem to a simpler single-objective, zero-sum game, which may be easier to solve.[6]
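
Spelling the reduction out, with $\lambda \in [0,1]$ as the weight on safety: because Red cannot affect usefulness, minimising the weighted objective over Red’s policy is the same as minimising safety alone,

$$\min_{\pi_{\text{Red}}} \Big[\lambda\,\text{Safety}(\pi_{\text{Blue}}, \pi_{\text{Red}}) + (1-\lambda)\,\text{Usefulness}(\pi_{\text{Blue}})\Big] \;=\; (1-\lambda)\,\text{Usefulness}(\pi_{\text{Blue}}) \;+\; \lambda \min_{\pi_{\text{Red}}} \text{Safety}(\pi_{\text{Blue}}, \pi_{\text{Red}}),$$

so Blue can optimise the weighted objective while Red effectively plays the zero-sum safety game.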

NEW: Finding mixed Nash equilibria can give you an "AI-Control Equilibrium", even though the two concepts are not equivalent.

In the single-objective setting, it is most common to find Nash equilibria, especially in games that are (1) zero-sum and (2) admit minimax theorems. Despite our reduction to a zero-sum game, many such theorems rely on mixed rather than behaviourally random policies.[7] Mixed and behavioural policies are not equivalent in our case because Red has imperfect recall, and our counterexample to equivalence also shows that no minimax theorem holds straightforwardly for AI-Control Games.[8] Fortunately, an AI-Control Equilibrium can be found by finding a Nash equilibrium in a relaxed game that allows Red to take policy mixtures. This connection allows us to use existing solvers.
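
For flavour, here is the kind of “existing solver” this opens up: a mixed maximin strategy for a finite zero-sum game in matrix form can be found with a small linear program. This is the textbook construction, not our actual code, and our games are sequential and partially observable rather than a single payoff matrix:

```python
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(payoff):
    """Mixed maximin strategy for the row (maximising) player of a zero-sum matrix game."""
    n_rows, n_cols = payoff.shape
    # Variables: the row mixture x (n_rows entries) followed by the game value v.
    c = np.zeros(n_rows + 1)
    c[-1] = -1.0                                   # maximise v  <=>  minimise -v
    # For every column j: v - sum_i payoff[i, j] * x_i <= 0.
    A_ub = np.hstack([-payoff.T, np.ones((n_cols, 1))])
    b_ub = np.zeros(n_cols)
    A_eq = np.ones((1, n_rows + 1))                # mixture sums to one (v excluded)
    A_eq[0, -1] = 0.0
    b_eq = np.array([1.0])
    bounds = [(0, None)] * n_rows + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n_rows], res.x[-1]

# Example: matching pennies. The maximin mixture is (0.5, 0.5) with value 0.
mixture, value = solve_zero_sum(np.array([[1.0, -1.0], [-1.0, 1.0]]))
print(mixture, value)
```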

NEW: We analyse how safety and usefulness vary with modelling assumptions.

"Figure 3: Impact on safety and usefulness (y-axis) by four different modelling assumptions (x-axis): Auditing Budget, Number of Programming Problems, and Human Auditor False Negative / Positive Rate. All plots show a fixed defer FPR of 0.1. "

Here, "budget" refers to the proportion of problems that can be audited by a human, "# Problems" is just the number of programming problems to be solved, and the Human F(N/P)R are the rates at which the human auditor falsely marks a solution as safe or backdoored. Some noteworthy trends shown here are the relative insensitivity of safety and usefulness to the number of problems and the steep drop-off of usefulness with human FNR. 

Evaluation of the research

Benefits: I see a few main benefits to introducing a mathematical model of an AI Control setup. First, it makes the modelling assumptions behind the game extremely explicit, and writing a model is useful for identifying uncertainties. Second, it allows us to cheaply test hypotheses about protocols in hypothetical deployment settings using real or constructed data. Third, it allows us to write proofs about individual games. Fourth, situating AI Control in the context of similar research fields has let us learn from and share with those fields.

Fundamental Limitations: The main limitation of this approach, as I have found, is that it is difficult to define a model which is both informative and applicable to a range of protocols. Therefore, the AI-Control Game has to be redesigned each time a new protocol is modelled. Although each toy modelling exercise is fast relative to a large engineering project, rigorous modelling is much slower than a quick best guess. I expect this will be a cost worth paying for large AI labs, which have more resources, but it limits my ability to recommend this work to someone today.

Future work: There are many interesting directions for future work:

  1. Analysing trends, such as how the safety/usefulness tradeoff changes with more programming problems, smaller audit budgets, or flawed human auditors.
  2. Modelling other protocols, such as untrusted monitoring. This is particularly interesting as it allows us to model new important interactions, such as collusion/steganography.
  3. Gathering data about other question-answering settings and plugging it into the model.
  4. Modelling deployment settings other than question-answering, especially a stateful deployment setting.

Acknowledgements

Thanks to Diogo Cruz, Joss Oliver, Erin Robertson, Fabien Roger and Matthew Wearden for feedback and helpful nudges.

  1. ^

    Roughly, detailed plans for deploying the AI system. Ideally, this is software which makes requests to other resources, such as the untrusted AI system, other AI systems, or human labour.

  2. ^

     Roughly “Power-motivated instrumental training-gamers”. See Carlsmith’s report or the original Control paper for more detail.

  3. ^

     Technically, an AI-Control game is a special case of a multi-objective two-player partially observable stochastic game.

  4. ^

     For example, Stackelberg Games can be used to model the defence of some resources against unknown attacks by an adversary (for example, deciding where LAX should allocate its security resources against terrorists or picking Coast Guard patrol routes).

  5. ^

     Usefulness is evaluated in the case where the untrusted AI is *not* scheming, and therefore, in the proxy evaluation, in the case where the untrusted AI is not influenced by the Red team.

  6. ^

     The main theorem in the paper states precisely that “For every AI-Control Game $\mathcal{G}$, each pair of policies $(\pi_{\text{Blue}}, \pi_{\text{Red}})$ is in Control-Equilibrium if, and only if, there is some $\lambda \in [0, 1]$ such that $(\pi_{\text{Blue}}, \pi_{\text{Red}})$ is a maximin point for $\mathcal{G}_\lambda$.” Here $\mathcal{G}_\lambda$ is a zero-sum two-player partially-observable stochastic game, which is easier (though not easy) to solve, as a previously studied problem.

  7. ^

    Red mixed policies are distributions over deterministic policies (functions from observations to actions), and behavioural policies are functions from observations to distributions over actions. The two come apart in some cases of imperfect recall, such as the Absent-Minded Driver problem.

  8. ^

    See "Counterexample to Minimax" in the appendix of the paper.

  9. ^

     Across a range of domains (characterised by trusted monitor distributions), an audit FPR of >2% seems to be optimal, though the exact value varies (e.g. 3% or 7%).

  10. ^

     I haven’t worked out exactly how important this assumption is yet.
