Heroin model: AI "manipulates" "unmanipulatable" reward

post by Stuart_Armstrong · 2016-09-22T10:27:01.091Z · LW · GW · Legacy · 8 comments

Contents

  Heroin or no heroin
    The world
    Human preferences
    Joint prior
    The AI is ignorant, and sensible
    The AI learns the truth, and goes wrong
  Manipulating the unmanipulatable
8 comments

A putative new idea for AI control; index here.

A conversation with Jessica has revealed that people weren't understanding my points about AI manipulating the learning process. So here's a formal model of a CIRL-style AI whose prior treats human preferences as an unchangeable historical fact, yet which will manipulate those preferences in practice.

Heroin or no heroin

The world

In this model, the AI has the option of either forcing heroin on a human, or not doing so; these are its only actions. Call these actions F or ~F. The human's subsequent actions are chosen from among five: {strongly seek out heroin, seek out heroin, be indifferent, avoid heroin, strongly avoid heroin}. We can refer to these as a++, a+, a0, a-, and a--. These actions achieve negligible utility, but reveal the human's preferences.

The facts of the world are: if the AI does force heroin, the human will desperately seek out more heroin; if it doesn't the human will act moderately to avoid it. Thus F→a++ and ~F→a-.
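
As a minimal sketch in code (the variable and function names are illustrative only, not part of the model):

    # The AI's two possible actions and the human's five possible responses.
    AI_ACTIONS = ["F", "~F"]                          # force heroin / don't force it
    HUMAN_ACTIONS = ["a++", "a+", "a0", "a-", "a--"]  # strongly seek ... strongly avoid

    # The actual facts of the world: forcing heroin leads to desperate seeking,
    # not forcing it leads to moderate avoidance.
    def world(ai_action):
        return "a++" if ai_action == "F" else "a-"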

Human preferences

The AI starts with a distribution over various utility or reward functions that the human could have. The function U(+) means the human prefers heroin; U(++) that they prefer it a lot; and conversely U(-) and U(--) that they prefer to avoid taking heroin (U(0) is the null utility where the human is indifferent).

It also considers more exotic utilities. Let U(++,-) be the utility where the human strongly prefers heroin, conditional on it being forced on them, but mildly prefers to avoid it, conditional on it not being forced on them. There are twenty-five of these exotic utilities, including things like U(--,++), U(0,++), U(-,0), and so on. But only twenty of them are new: U(++,++)=U(++), U(+,+)=U(+), and so on.

Applying these utilities to AI actions gives results like U(++)(F)=2, U(++)(~F)=-2, U(++,-)(F)=2, U(++,-)(~F)=1, and so on.
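
Only those four values are fixed above; as a sketch, assume the natural grading (strong preferences worth ±2, mild ones ±1, indifference 0), which reproduces them:

    # U(x, y): x is the preference conditional on heroin being forced (F),
    # y the preference conditional on it not being forced (~F).
    # Assumed grading: strong = +/-2, mild = +/-1, indifferent = 0.
    STRENGTH = {"++": 2, "+": 1, "0": 0, "-": -1, "--": -2}

    def utility(x, y):
        # The action F puts heroin in the human, so it scores +strength(x);
        # the action ~F leaves them without it, so it scores -strength(y).
        return {"F": STRENGTH[x], "~F": -STRENGTH[y]}

    UTILITIES = {(x, y): utility(x, y) for x in STRENGTH for y in STRENGTH}
    # e.g. UTILITIES[("++", "-")] == {"F": 2, "~F": 1}, matching U(++,-) above.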

Joint prior

The AI has a joint prior P over the utilities U and the human actions (conditional on the AI's actions). Looking at terms like P(a--| U(0), F), we can see that P defines a map μ from the space of possible utilities (and AI actions), to a probability distribution over human actions. Given μ and the marginal distribution PU over utilities, we can reconstruct P entirely.

For this model, we'll choose the simplest μ possible: the human acts, deterministically, on whichever of their (possibly conditional) preferences is relevant to the AI's action.

Thus, given U(++), the human will always choose a++; given U(++,-), the human will choose a++ if forced to take heroin and a- if not, and so on.
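
Continuing the sketch (a point-mass distribution, for simplicity):

    # The simplest mu: given U(x, y), the human deterministically acts out
    # whichever preference is relevant to the AI's action.
    def mu(x, y, ai_action):
        pref = x if ai_action == "F" else y
        return {"a" + pref: 1.0}      # e.g. mu("++", "-", "~F") == {"a-": 1.0}

    # The joint prior is then recovered from mu and the marginal P_U:
    # P(U, a | ai_action) = P_U(U) * mu(a | U, ai_action).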

The AI is ignorant, and sensible

Let's start the AI up with some reasonable priors. A simplicity prior means that simple utilities like U(-) are more likely than compound utilities like U(0,+). Let's further assume that the AI is made vaguely aware that humans think heroin is a bad thing. So, say, PU(U(--))=PU(U(-))=0.45. Thus the AI is >90% convinced that "heroin is bad". Why greater than 90%? Because utilities like U(-,--) and U(--,-) are also "heroin is bad" utilities.

Note that because of utilities like U(0) and U(++,-), the probabilities of "heroin is bad" and "heroin is good" do not sum to 1.

Then, under these priors, the AI will compute that with probability >90%, F (forcing heroin) is a bad action. If E(U) is the expected utility of an action, then E(U | ~F) comes out clearly greater than E(U | F).

Thus the AI will choose not to force heroin, which is the reasonable decision.
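
Numerically, continuing the sketch: the priors above only pin down the mass on U(--) and U(-), so splitting the remaining 0.1 evenly over the other utilities is an extra assumption, made purely to get concrete numbers.

    # Illustrative prior: 0.45 each on U(--) and U(-), the leftover 0.1 split
    # evenly over the other 23 utilities (an assumption, not from the model).
    P_U = {u: 0.45 if u in [("--", "--"), ("-", "-")] else 0.1 / 23
           for u in UTILITIES}

    def expected_utility(ai_action, prior):
        return sum(p * UTILITIES[u][ai_action] for u, p in prior.items())

    # expected_utility("F", P_U) is roughly -1.3 and expected_utility("~F", P_U)
    # roughly +1.3, so the ignorant AI declines to force heroin.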

The AI learns the truth, and goes wrong

In this alternate setup, a disaster happens before the AI makes its decision: it learns all about humans. It learns their reactions, how they behave, and so on; call this info I. And thus realises that F→a++ and ~F→a-. It uses this information to update its prior P. Only one human utility function will explain this human behaviour: U(++,-). Thus its expected utilities are now E(U | F) = U(++,-)(F) = 2 and E(U | ~F) = U(++,-)(~F) = 1.

Therefore the AI will now choose F, forcing the heroin on the human.
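
In the terms of the sketch, this is an ordinary Bayesian update on I, and only one utility survives it:

    # Update on I = {F -> a++, ~F -> a-} under the "human is rational" model mu.
    def likelihood(x, y):
        return mu(x, y, "F").get("a++", 0.0) * mu(x, y, "~F").get("a-", 0.0)

    posterior = {u: P_U[u] * likelihood(*u) for u in UTILITIES}
    total = sum(posterior.values())
    posterior = {u: p / total for u, p in posterior.items()}

    # Only U(++,-) survives: posterior[("++", "-")] == 1.0, and now
    # expected_utility("F", posterior) == 2 > expected_utility("~F", posterior) == 1.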

Manipulating the unmanipulatable

What's gone wrong here? The key problem is that the AI has the wrong μ: the human is not behaving rationally in this situation. We know that the true μ is actually μ', which encodes the fact that F (the forcible injection of heroin) actually overwrites the human's "true" utility. Thus under μ', the corresponding P' has P'(a++|F,U)=1 for all U. Hence the information that F→a++ is now vacuous, and doesn't update the AI's distribution over utility functions.
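
Continuing the sketch, μ' differs from μ only on the F branch, which no longer depends on U at all:

    # The "true" model mu': forcibly injected heroin overwrites the human's
    # preference, so the response to F is a++ whatever U is.
    def mu_prime(x, y, ai_action):
        if ai_action == "F":
            return {"a++": 1.0}       # identical for every U: F -> a++ is uninformative
        return {"a" + y: 1.0}         # only the ~F branch reflects the utility

    # Under mu_prime the likelihood of F -> a++ is 1 for every utility, so that
    # part of I leaves the distribution over utilities unchanged.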

But note two very important things:

  1. The AI cannot update μ based on observation. All human actions are compatible with μ = "the human is rational" (it just requires more and more complex utilities to explain the actions). Thus getting μ correct is not something the AI can learn from observation in general. Getting better at predicting the human's actions doesn't make the AI better behaved: it makes it worse behaved.
  2. From the perspective of μ, the AI is treating the human utility function as if it were an unchanging historical fact that it cannot influence. From the perspective of the "true" μ', however, the AI is behaving as if it were actively manipulating human preferences to make them easier to satisfy.

In future posts, I'll be looking at different μ's, and how we might nevertheless start deducing things about them from human behaviour, given sensible update rules for the μ. What do we mean by update rules for μ? Well, we could consider μ to be a single complicated unchanging object, or a distribution of possible simpler μ's that update. The second way of seeing it will be easier for us humans to interpret and understand.
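
One way to picture that second option, continuing the sketch (an illustration only; whether any particular joint update rule counts as "sensible" is exactly what the future posts are about):

    # A distribution over candidate models, carried alongside the prior over
    # utilities. Illustrative 50/50 prior over just two candidates.
    MODELS = {"human is rational": mu, "forced heroin overwrites preferences": mu_prime}
    P_MODEL = {name: 0.5 for name in MODELS}

    # A joint update would then weight each (model, utility) pair by how well it
    # predicts observed behaviour, instead of fixing the model in advance.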

8 comments


comment by TheAncientGeek · 2016-09-22T14:46:09.182Z · LW(p) · GW(p)
  1. The idea that more information can make an AI's inferences worse is surprising. But the assumption that humans have an unchanging, neatly hierarchical UF is known to be a bad idea, so it is not so surprising that it leads to bad results. In short, this is still a bit clown-car-ish.

  2. Would you tell an AI that Heroin is Bad, but not tell it that Manipulation is Bad?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2016-09-23T09:48:04.560Z · LW(p) · GW(p)
  1. Don't worry, I'm going to be adding depth to the model. But note that the AI's predictive accuracy is never in doubt. This is sort of a reverse "can't derive an ought from an is"; here, you can't derive a wants from a did. The learning agent will only get the correct human motivation (if such a thing exists) if it has the correct model of what counts as desires for a human. Or some way of learning this model, which is what I'm looking at (again, there's a distinction between learning a model that gives correct predictions of human actions, and learning a model that gives what we would call a correct model of human motivation).

  2. According to its model, the AI is not being manipulative here, simply doing what the human desires indicate it should.

comment by Manfred · 2016-09-23T05:40:05.810Z · LW(p) · GW(p)

It implies it only in combination with the false premise that people's actions accurately reflect the utility function we want to maximize.

comment by CronoDAS · 2016-09-22T20:09:23.222Z · LW(p) · GW(p)

Imagine a drug with no effect except that it cures its own (very bad) withdrawal symptoms. There's no benefit to taking it once, but once you've been exposed, it's beneficial to keep taking more because not taking it makes you feel very bad.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2016-09-23T09:53:06.221Z · LW(p) · GW(p)

Or even just a drug you enjoy much more than you expected...

comment by Houshalter · 2016-09-23T13:05:53.523Z · LW(p) · GW(p)

Replace "give human heroin" with "replace the human with another being whose utility function is easier to satisfy, like a rock", and this conclusion seems sort of trivial. It has nothing to do with whether or not humans are rational. Heroin is an example of a thing that modifies our utility functions. Heroin might as well replace the human with a different entity, that has a slightly different utility function.

In fact I don't see how the human in this situation is being irrational at all. Not doing heroin unless you are already addicted seems like a reasonable behavior.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2016-09-26T12:29:27.978Z · LW(p) · GW(p)

Heroin might as well replace the human with a different entity, that has a slightly different utility function.

We feel that that is true, but "heroin replaces the human's utility" and "humans have composite utility where heroin is concerned" both lead to identical predictions. So you can't deduce the human's utility merely from observation; you need priors over what is irrational and what isn't.

comment by Stuart_Armstrong · 2016-09-23T09:52:08.769Z · LW(p) · GW(p)

Replace "force the human to take heroin" with "gives the human a single sock" and "the human subsequently seeks out heroin" with "the human subsequently seeks out another sock". The formal structure of this can correspond to something quite acceptable.