AI-assisted alignment proposals require specific decomposition of capabilities

post by RobertM (T3t) · 2023-03-30T21:31:57.725Z · LW · GW · 2 comments

Contents

  AI as Research Assistant
    "Shape of Problem" Dependencies
    "How Capabilities Decompose" Dependencies
  AI as Independent Researcher
  RLAIF[1]
    "Shape of Problem" Dependencies
    "How Capabilities Decompose" Dependencies
  Scalable Oversight
    "Shape of Problem" Dependencies
    "How Capabilities Decompose" Dependencies

Some alignment strategies involve using weaker AI systems to help with alignment.  I'm interested in figuring out what dependencies each plan has - which things need to be true for the plan to succeed.  These are likely to vary across plans.  (Separate, but related, is the question of what assumptions those working on the plans appear to be operating under, and how well those line up with the actual dependencies.)

There are two frames I've been using to think about this lately.  One is to ask, "What assumptions does this approach make about the shape of the alignment problem itself?"  The other is more specifically about how capabilities might (or might not) decompose, both in theory and in practice.  I'm more interested in exploring the second frame in this post; I'm mostly using the first frame as a way to set aside other considerations related to whether any given plan is viable.

I'm also operating with a couple of broader background assumptions; absent these you might look at my arguments and be confused:

  1. Unaligned, broadly superhuman AGI ("ASI") is a coherent thing that it is possible to bring about in our universe
  2. If dropped on our heads tomorrow, unaligned ASI would with very high likelihood cause human extinction/loss of ~all value in the lightcone/etc.

 

Here are some broad classes of strategies I've seen proposed:

  1. AI as Research Assistant
  2. AI as Independent Researcher
  3. RLAIF
  4. Scalable Oversight (RLHF + RLAIF + IDA + ???)

This breakdown is kind of dumb - the first two are less "strategies" and more "desired end states".  Also, I'm sure I'm missing some things.  (I have read An overview of 11 proposals for building safe advanced AI [LW · GW], but many of the proposals feel similar to each other, or don't have an obvious path to actually being implemented.  If you think one or more of those proposals is both possible to implement right now and differs meaningfully in terms of its dependencies, please let me know.)

AI as Research Assistant

"Shape of Problem" Dependencies

I don't think I've seen anyone argue that employing these kinds of strategies will do very much to move the needle from "default outcome is the bad one" to "quite likely we succeed", so trying to reverse-engineer the claims such an argument would be making about what the alignment problem actually is doesn't feel like a good use of time.

I basically don't have any objection in principle to the idea that we could use weaker AI systems (including current ones) to speed up the efforts of existing researchers, and maybe even do so differentially with respect to capabilities in some cases.  I just don't think that gets us there.

"How Capabilities Decompose" Dependencies

I don't think this relies on any facts about how future capabilities decompose; we have existing capabilities that seem likely to be (at least marginally) helpful if a good UI/UX wrapper can be created for them.

 

AI as Independent Researcher

The closest thing to this that I can think of is some form of STEM AI [LW · GW].  I don't think I've seen anyone expand on the details of such a proposal in a way that overcomes the serious difficulties noted in that post, so I'll skip the decompositions in favor of this excerpt:

However, if one of the major use cases for your first advanced AI is helping to align your second advanced AI, STEM AI seems to perform quite poorly on that metric, as it advances our technology without also advancing our understanding of alignment. In particular, unlike every other approach on this list, STEM AI can't be used to do alignment work, as its alignment guarantees are explicitly coming from it not modeling or thinking about humans in any way, including aligning AIs with them. Thus, STEM AI could potentially create a vulnerable world situation where the powerful technology produced using the STEM AI makes it much easier to build advanced AI systems, without also making it more likely that they will be aligned.

 

RLAIF[1]

An example would be Constitutional AI [LW · GW].  I'm not sure if anyone who accepts my two background assumptions thinks this will work by itself as an end-to-end solution, but putting it here just in case.

"Shape of Problem" Dependencies

Some (not mutually exclusive) possibilities for what has to be true about the alignment problem for this to work:

  • Moral realism[2], and for some reason the agent cares.
  • Not moral realism, but the practical equivalent - orthogonality may be technically true, but the default thing that comes out of making something sufficiently smart is something human-friendly, and you'd have to have a very good mechanistic understanding of intelligence to create something unfriendly.

"How Capabilities Decompose" Dependencies

None of the shapes of the problem that RLAIF working would imply seem to require any assumptions on this dimension.

 

Scalable Oversight

Described at a high level, this is a strategy being pursued by OpenAI and Anthropic, among others.

"Shape of Problem" Dependencies

The proposals I'm familiar with don't seem to strictly require assumptions not related to the decomposition of capabilities, but the more complicated the alignment problem is, the more assumptions you need to make about how capabilities decompose for this class of approaches to be tractable.

"How Capabilities Decompose" Dependencies

The biggest requirement here is that there exists a relatively clean way to separate the kind of cognition that reasons effectively about the alignment problem from the kind of cognition that would make the system dangerous, and that we can achieve that separation in practice.  This decomposition seems more likely to be possible[3] in worlds where the alignment problem is mostly an engineering challenge, with very little in the way of anything resembling "philosophy" required.  I think this because the cognitive machinery necessary to solve novel philosophical challenges seems much broader and more likely to be dangerous than what would be necessary to solve more concretely specified engineering challenges.  The kind of thing I mean by "engineering challenge" is something like "create a system which can reliably write code which passes some set of unit tests given to it", whereas by "philosophy challenge" I mean something like "figure out what human values are", along with all the normative bits that remain after you dissolve those questions into empirical ones.

As far as I understand current scalable oversight proposals, they mostly aren't structured in ways which acknowledge this requirement.  On my model, the less "narrow" your system is, the more powerful its cognition needs to be to solve any specific problem, when compared to the narrowest possible system trained specifically for that problem.

As a toy example, let's look at the case of chess.  To the extent one can meaningfully evaluate how well GPT-4 plays chess [LW · GW], training a model specifically to play chess to a similar level (without building it on top of an existing foundation model) would give you a model with much weaker cognition.  Correspondingly, a GPT-n which is capable of giving you useful alignment ideas (or of usefully evaluating the ideas spit out by another instance of itself) must be much more broadly capable than a narrow system trained specifically for that task[4].

But current proposals generally seem to start with, "First, train a foundation model capable of reasoning across a broad range of tasks and domains".  Then some further refinements are made, but none of those look to me like something that might differentially train a specific narrow capability helpful for alignment research, rather than just more reliably eliciting existing capabilities.

Some possibilities that resolve this tension:

If you are familiar with the details of an AI-assisted alignment scheme - regardless of whether it maps to one of the four strategies described above - and have thoughts about what that scheme's success requires (or substantially depends on) in terms of capabilities decomposition, I would be very interested to hear your thoughts.

If you think this line of thinking is fundamentally mistaken with respect to evaluating proposals like this, I'd like to hear arguments for that as well.

  1. ^

    Reinforcement Learning from AI Feedback.

  2. ^

    I don't think I've actually seen anyone use this in an argument, but it does seem like it might help if it were true.

  3. ^

    Though still not obviously possible.

  4. ^

    One problem, of course, is that we don't know how to train a narrow system to do that, or even how narrow such a system could be, since that depends on the actual cognitive capabilities required by the task.  Relatedly, I think the fact that we don't know how to construct a narrower training regime for this task is evidence in favor of the problem being more difficult than a set of relatively well-understood engineering challenges.

2 comments


comment by Martín Soto (martinsq) · 2023-05-11T13:22:48.296Z · LW(p) · GW(p)

I don't think I completely grok the distinction you're trying to point at with "Shape of problem" vs "How capabilities decompose".

I guess "Shape of problem" is about systematic incentives that will be present, like inductive biases in our training procedures, while "How capabilities decompose" is about how easy/natural it is for a mind to solve the task without solving other tasks. The latter is about "minds in general" and the former about "minds trained by us"?

But then I don't understand some of your classifications. For example, how is "it stumbles into human-friendliness before x-risk capability" a claim about shape of the problem (instead of also depending on how hard the tasks of making humans extinct, understanding/imitating humans, etc. are), while things like "IDA does/doesn't converge to deception (because of obfuscated arguments etc.)" (which would be a part of Scalable Oversight) are not shape of the problem, but capabilities decomposition?

I feel like this is a pretty blurry line to classify evidence (and thus maybe not the most useful, but I'm not sure).

comment by Martín Soto (martinsq) · 2023-05-11T12:37:27.436Z · LW(p) · GW(p)
  • Moral realism[2], and for some reason the agent cares.
  • Not moral realism, but the practical equivalent - orthogonality may be technically true, but the default thing that comes out of making something sufficiently smart is something human-friendly, and you'd have to have a very good mechanistic understanding of intelligence to create something unfriendly.

How would the first be different from the second? What are you understanding by "moral realism makes the AI human-friendly" that is not just "practical convergence"?

Are you picturing something like "the AI reasons enough / reads this text (which would also convince humans if presented correctly) and becomes completely convinced that a certain moral theory is true, and that it must follow it (a la Descartes with God)"? Because that's just a particular way of getting "practical convergence". You probably agree with that (as you mentioned, they are not mutually exclusive), but I'm interested in whether you understood anything else by moral realism here.