Deliberation as a method to find the "actual preferences" of humans

post by riceissa · 2019-10-22T09:23:30.700Z · LW · GW · 5 comments

Contents

  Approaches to deliberation that have been suggested so far
  Properties of deliberation
  Comparison table
  Takeaways
  Acknowledgments
5 comments

Some recent discussion [LW(p) · GW(p)] about what Paul Christiano means by "short-term preferences" got me thinking more generally about deliberation as a method of figuring out the human user's or users' "actual preferences". (I can't give a definition of "actual preferences" because we have such a poor understanding [LW · GW] of meta-ethics that we don't even know what the term should mean or if they even exist.)

To set the framing of this post: We want good outcomes from AI. To get this, we probably want to figure out the human user's or users' "actual preferences" at some point. There are several options for this:

The third option is the focus of this post. The first two are also very worthy of consideration; they just aren't the focus here. Also, the list isn't meant to be comprehensive; I would be interested to hear about any other approaches.

In terms of Paul's recent AI alignment landscape tree [LW · GW], I think this discussion fits under the "Learn from teacher" node, but I'm not sure.

Terminological note: In this post, I use "deliberation" and "reflection" interchangeably. I think this is standard, but I'm not sure. If anyone uses these terms differently, I would like to know how they distinguish between them.

Approaches to deliberation that have been suggested so far

In this section, I list some concrete-ish approaches to deliberation that have been considered so far. I say "concrete-ish" rather than "concrete" because each of these approaches seems underdetermined in many ways; e.g. for "humans sitting down", it's not clear whether we split the humans up in some way, which humans we use, how much time we allow, what kind of "voting"/parliamentary system we use, and so on. Later on in this post I will talk about properties of deliberation, so the "concrete-ish" approaches here are concrete in two senses: (a) they have some of these properties filled in (e.g. "humans sitting down" says the computation happens primarily inside human brains); and (b) within a single property, they might specify a specific mechanism (e.g. saying "use counterfactual oracles somehow" is more concrete than saying "use an approach where the computation doesn't happen inside human brains").

Properties of deliberation

With the examples above in hand, I want to step back and abstract out some properties/axes/dimensions they have.

I'm not sure how cleanly these dimensions separate, or how important they are. There are also probably many other dimensions that I'm missing.

Since I had trouble distinguishing between some of the above properties, I made the following table:

| | Output | Process |
|---|---|---|
| Implicit vs explicit | Implicit vs explicit output | Human-like vs non-human-like cognition |
| Understandable vs not understandable | Human understandability of output (human intermediate integration also implies understandability of intermediate results and thus also of the output) | Human-like vs non-human-like cognition (there might also be non-human-like approaches that are understandable) |
| Inside vs outside human brain | (Reduces to understandable vs not understandable) | Human vs non-human computation |

Comparison table

The following table summarizes my understanding of where each of the concrete-ish approaches stands on a subset of the above properties. I've restricted the comparison to a subset of the properties both because many approaches leave certain questions unanswered and because adding too many columns would make the table difficult to read.

In addition to the approaches listed above, I've included HCH since I think it's an interesting theoretical case to look at.

| | Inside human brain? | Human-like cognition? | Implicit vs explicit output | Intermediate integration | Understandable output? |
|---|---|---|---|---|---|
| Humans sitting down | yes | yes | explicit (hopefully) | yes | yes |
| Uploads sitting down | no | yes | explicit | maybe | yes |
| Counterfactual oracle | no | no | explicit | yes | yes |
| Imitation-based IDA | no | no | implicit/depends on question* | no | no |
| RL-based IDA | no | no† | explicit† | no | no† |
| HCH | yes | no | implicit/depends on question* | no | n.a. |
| Debate | no | no | explicit | yes | yes |
| CEV | no | ?‡ | explicit | no | yes |
| Ambitious value learning | no | no | explicit | no | maybe |

* We could imagine asking a question like "What are my actual preferences?" to get an explicit answer, or just asking AI assistants to do something (in which case the output of deliberation is not explicit).

† Paul says "Rather than learning a reward function from human data, we also train it by amplification (acting on the same representations used by the generative model). Again, we can distill the reward function into a neural network that acts on sequences of observations, but now instead of learning to predict human judgments it’s predicting a very large implicit deliberation." The "implicit" in this quote seems to refer to the process (rather than output) of deliberation. See also the paragraph starting with "To summarize my own understanding" in this comment [LW(p) · GW(p)] (which I think is talking about RL-based IDA), which suggests that maybe we should distinguish between "understandable in theory if we had the time" vs "understandable within the time constraints we have" (in the table I went with the latter). There is also the question of whether a reward function is "explicit enough" as a representation of values.

‡ Q5 (p. 32) in the CEV paper clarifies that the computation to find CEV wouldn't be sentient, but I'm not sure if the paper says whether the cognition will resemble human thought.

Takeaways

Acknowledgments

Thanks to Wei Dai for suggesting the point about solving meta-ethics. (However, I may have misrepresented his point, and this acknowledgment should not be seen as an endorsement by him.)


  1. From the CEV paper: "Do we want our coherent extrapolated volition to satisfice, or maximize? My guess is that we want our coherent extrapolated volition to satisfice […]. If so, rather than trying to guess the optimal decision of a specific individual, the CEV would pick a solution that satisficed the spread of possibilities for the extrapolated statistical aggregate of humankind." (p. 36)

    And: "This is another reason not to stand in awe of the judgments of a CEV—a solution that satisfices an extrapolated spread of possibilities for the statistical aggregate of humankind may not correspond to the best decision of any individual, or even the best vote of any real, actual adult humankind." (p. 37) ↩︎

  2. Paul says [? · GW] "So an excellent agent with a minimal understanding of human values seems OK. Such an agent could avoid getting left behind by its competitors, and remain under human control. Eventually, once it got enough information to understand human values (say, by interacting with humans), it could help us implement our values." ↩︎

5 comments


comment by Wei Dai (Wei_Dai) · 2019-10-30T01:54:42.408Z · LW(p) · GW(p)

I think my main confusion is that Paul talks about many different ways deliberation could work (e.g. RL-based IDA and human-in-the-counterfactual-loop seem pretty different), and it’s not clear what approach he thinks is most plausible.

I have similar questions, and I'm not sure how much of it is that Paul is uncertain himself, and how much is Paul not having communicated his thinking yet. Also one thing to keep in mind is that different forms of deliberation could be used at different levels of the system, so for example one method can be used to model/emulate/extrapolate the overseer's deliberation and another one for the end-user.

On a more general note, I'm really worried that we don't have much understanding of how or why human deliberation can lead to good outcomes in the long run. It seems clear that an individual human deliberating in isolation is highly likely to get stuck or go off the rails, and groups of humans often do so as well. To the extent that we as a global civilization seemingly are able to make progress in the very long run, it seems at best a fragile process, which we don't know how to reliably preserve, or reproduce in an artificial setting.

comment by John_Maxwell (John_Maxwell_IV) · 2019-10-23T06:30:52.721Z · LW(p) · GW(p)

Speed. In AI takeoff scenarios where a bunch of different AIs are competing with each other, the deliberation process must produce some answer quickly or produce successive answers as time goes on (in order to figure out which resources are worth acquiring). On the other hand, in takeoff scenarios where the first successful project achieves a decisive strategic advantage, the deliberation can take its time.

I suspect a better way to think about this is the quality of the deliberation process as a function of the time available for deliberation, though the time available might itself vary over time (pre- vs post-acquisition of a decisive strategic advantage).

Replies from: riceissa
comment by riceissa · 2019-10-26T01:54:29.363Z · LW(p) · GW(p)

Thanks, I think I agree (but want to think about this more). I might edit the post in the future to incorporate this change.

comment by Charlie Steiner · 2019-10-23T07:43:42.827Z · LW(p) · GW(p)

Thanks for the post!

You definitely highlight that there's a continuum here, from "most deliberation-like" being actual humans sitting around thinking, to "least deliberation-like" being highly-abstracted machine learning schemes that don't look much at all like a human sitting around thinking, and in fact extend this continuum past "doing meta-philosophy" and towards the realm of "doing meta-ethics."

However, I'd argue against the notion (just arguing in general, not sure if this was your implication) that this continuum is about a tradeoff between quality and practicality - that "more like the humans sitting around is better if we can do it." I think deliberation has some bad parts - dumb things humans will predictably do, subagent-alignment problems humans will inevitably cause, novel stimuli humans will predictably be bad at evaluating. FAI schemes that move away from pure deliberation might not just do so for reasons of practicality, they might be doing it because they think it actually serves their purpose best.

Replies from: riceissa
comment by riceissa · 2019-10-26T01:51:15.590Z · LW(p) · GW(p)

I agree with this, and didn't mean to imply anything against it in the post.