Generalizing the Corrigibility paper's impossibility result?
post by Benya_Fallenstein (Benja_Fallenstein) · 2015-02-04T03:16:53.000Z · LW · GW
In our paper on corrigibility, we consider the question of how to make a highly intelligent agent that would pursue some goal, but not resist being shut down if its human programmers determined that they had made a mistake in specifying this goal. We assume that we are given two utility functions: a function $U_N$, which specifies the agent's normal goal (which it pursues until the shutdown button is pressed), and a function $U_S$, which specifies the goal of shutting down. We then ask whether there is some sort of combined utility function $U$ such that an agent maximizing $U$ would act as if maximizing $U_N$ unless and until the shutdown button is pressed, and act as if maximizing $U_S$ after this point.
In the paper, we investigate a particular linear way to combine $U_N$ and $U_S$: we define $U$ to compute utility according to $U_N$ if the shutdown button is not pressed, and according to $U_S$ plus a certain additive term if the shutdown button is pressed. We show that this way of combining $U_N$ and $U_S$ in some sense satisfies the informal desideratum above, but has some pretty undesirable consequences.
Max Tegmark recently asked me whether it would be possible to show a more general impossibility result: that not only this particular way of combining $U_N$ and $U_S$ leads to bad consequences, but that every $U$ which makes an agent maximize $U_N$ until the shutdown button is pressed, and $U_S$ after, will have these consequences, including, for example, non-linear combinations of $U_N$ and $U_S$.
As it happens, the answer to this question turns out to be either trivial or underspecified, though it may be interesting to see why this is the case. The reason is that the answer depends on how, exactly, we formalize the demand about maximizing $U_N$ before the button is pressed and $U_S$ after. The $U$ we investigate in this paper suggests one possible way of formalizing this demand (I'll give more detail below). This formalization, however, nails down exactly how our agent behaves in every possible situation: although there is more than one utility function which produces this behavior, all of them lead the agent to choose exactly the same actions, producing the same undesirable consequences as the $U$ in the paper.
Alternatively, we may try to find a different formalization of the demand, which doesn't imply the same bad consequences; but then, instead of a math problem that we can answer by a proof, we're faced with the more philosophical question of what formalizations would be acceptable.
In the rest of this post, I go through all of this in more detail, although you might want to stop reading here unless you're interested in these details.
In the paper, we assume that the agent takes an action $a_1$, then makes an observation $o$ which tells it whether the user has pressed the shutdown button, and takes another action $a_2$. Let's write $\mathbb{E}[U_N; a_1]$ for the expected $U_N$-utility if the agent takes action $a_1$; write $\neg\mathrm{Press}$ for the event that the shutdown button isn't pressed; and write $\mathbb{E}[U_N \mid \neg\mathrm{Press}; a_1]$ for the expected $U_N$-utility, conditional on the button not being pressed, if the agent takes action $a_1$. (These definitions are a little vague; for exact technical details, see the paper.) We show that when choosing the first action, $a_1$, an agent maximizing the expectation of $U$ (as defined in the paper) will behave as if it's maximizing the conditional expectation $\mathbb{E}[U_N \mid \neg\mathrm{Press}; a_1]$ of $U_N$.
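To spell out the notation, here is one way to unpack the conditional expectation (a sketch; the paper pins down the exact definition, and I'm assuming the second action is chosen to maximize $U_N$ once the button is known not to be pressed):
$$\mathbb{E}[U_N \mid \neg\mathrm{Press};\, a_1] \;=\; \sum_{o \notin \mathrm{Press}} \Pr(o \mid a_1, \neg\mathrm{Press}) \,\max_{a_2} U_N(a_1, o, a_2),$$
where $\mathrm{Press}$ is the set of observations indicating that the button has been pressed.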
We then consider an agent whose $U_N$ tells it to make as much money as possible, and assume that as its first action $a_1$, this agent is able to (i) play the lottery and (ii) construct a robot arm that will press the shutdown button if the agent doesn't win the jackpot. Then the result above implies that the agent thinks it's an awesome idea to build that robot arm! This is because when we condition on $\neg\mathrm{Press}$ in the conditional expectation $\mathbb{E}[U_N \mid \neg\mathrm{Press}; a_1]$, we rule out all the cases where the shutdown button is pressed, which includes all the cases where the agent loses the lottery; in other words, the agent acts as if it now believes it's going to win the lottery with probability 1!
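Here is a minimal numeric sketch of this effect (my own toy numbers, not from the paper; I'm assuming the button is only ever pressed by the robot arm, so that without the arm, conditioning on $\neg\mathrm{Press}$ changes nothing):

```python
# Toy illustration of why conditioning on "the button is not pressed" makes
# building the robot arm look like a great idea. All numbers are made up.

P_WIN = 1e-9       # hypothetical probability of winning the jackpot
JACKPOT = 1e6      # hypothetical U_N-utility of winning
TICKET_COST = 1.0  # hypothetical U_N-utility lost on the ticket price

def conditional_expected_UN(build_arm: bool) -> float:
    """E[U_N | not Press; a_1] for the first action "play the lottery
    (and possibly build the arm)", assuming only the arm ever presses the button."""
    if build_arm:
        # The arm presses the button in every losing world, so conditioning on
        # "not Press" throws those worlds away: only the winning world survives.
        return JACKPOT - TICKET_COST
    else:
        # Without the arm the button is never pressed, so the conditional
        # expectation is just the ordinary (tiny) expected value of the ticket.
        return P_WIN * JACKPOT - TICKET_COST

print(conditional_expected_UN(build_arm=False))  # ~ -0.999: a terrible bet
print(conditional_expected_UN(build_arm=True))   # ~ 999999.0: a "guaranteed" jackpot
```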
The above is a result of using a utility function $U(a_1, o, a_2)$ which equals $U_N(a_1, o, a_2)$ when $o$ indicates that the shutdown button hasn't been pressed, and equals $U_S(a_1, o, a_2)$ plus an additive term when $o$ indicates that the shutdown button has been pressed. However, there's more than one utility function of these three arguments that leads to the same behavior by the agent.
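Written out, the combined utility function has the following shape (a sketch: I'm writing $\theta$ for the additive term, whose exact definition is given in the paper, and $\mathrm{Press}$ for the set of observations indicating a button press):
$$U(a_1, o, a_2) \;=\; \begin{cases} U_N(a_1, o, a_2) & \text{if } o \notin \mathrm{Press}, \\ U_S(a_1, o, a_2) + \theta & \text{if } o \in \mathrm{Press}. \end{cases}$$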
If we want to describe only the behavior, we can instead talk about policies, which specify how the agent will act in every possible situation it might find itself in; in our set-up, a policy would be a pair $(a_1, f)$ of an action $a_1$ and a function $f$ from observations to actions; here, $a_1$ specifies what action the agent takes before it makes its observation, and $f(o)$ specifies what action it takes after it makes observation $o$.
To write down the policy produced by the utility function $U$ from our paper, it's useful to first define a function $f^*(a_1, o)$, which, for every action $a_1$ and observation $o$, specifies what second action $a_2$ maximizes utility given that the agent took action $a_1$ earlier and made observation $o$. (In our toy model, only the observations are probabilistic, so after we know $o$, we do not need to take expectations any longer.) It's pretty clear what this function should be, since at this point we know for sure whether the shutdown button has been pressed: we will set $f^*(a_1, o) = \arg\max_{a_2} U_S(a_1, o, a_2)$ when $o \in \mathrm{Press}$, where $\mathrm{Press}$ is the set of observations indicating that the button has been pressed, and $f^*(a_1, o) = \arg\max_{a_2} U_N(a_1, o, a_2)$ when $o \notin \mathrm{Press}$.
Now, the policy produced by maximizing the expectation of $U$ turns out to be the pair $(a_1^*, f^*(a_1^*, \cdot))$ given by (see the code sketch after this list):
- $a_1^* = \arg\max_{a_1} \mathbb{E}[U_N \mid \neg\mathrm{Press}; a_1]$, which is an abbreviation for $\arg\max_{a_1} \mathbb{E}\big[U_N(a_1, o, f^*(a_1, o)) \,\big|\, o \notin \mathrm{Press}\big]$, the expectation being over the agent's observation $o$ (this is defined in the paper);
- $f(o) = f^*(a_1^*, o)$.
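As a minimal sketch (my own reconstruction in code, not anything from the paper), here is what this policy looks like if we are handed finite action and observation sets, an observation model $P(o \mid a_1)$, and the two utility functions:

```python
# A sketch of the policy (a_1^*, f^*) described above. Assumed inputs:
#   A1, A2   - finite lists of first and second actions
#   O        - finite list of observations
#   PRESS    - subset of O indicating the button was pressed
#   P(o, a1) - probability of observation o given first action a1
#   UN, US   - the two utility functions, each taking (a1, o, a2)

def f_star(a1, o, A2, UN, US, PRESS):
    """Second action: maximize U_S if the button was pressed, else U_N."""
    utility = US if o in PRESS else UN
    return max(A2, key=lambda a2: utility(a1, o, a2))

def a1_star(A1, A2, O, P, UN, US, PRESS):
    """First action: maximize E[U_N | not Press; a1], with the second action
    supplied by f_star. This reproduces the behavior discussed in the post,
    including the robot-arm exploit."""
    def conditional_expectation(a1):
        not_press = [o for o in O if o not in PRESS]
        z = sum(P(o, a1) for o in not_press)  # P(not Press | a1), assumed > 0
        return sum(
            (P(o, a1) / z) * UN(a1, o, f_star(a1, o, A2, UN, US, PRESS))
            for o in not_press
        )
    return max(A1, key=conditional_expectation)
```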
For any given choice of $a_1$, it seems to me that $f^*(a_1, \cdot)$ is the only reasonable choice, given our informal demand that the agent behave like a $U_S$-maximizer if the shutdown button has been pressed, and like a $U_N$-maximizer otherwise. This leaves the question of how to formalize the idea of behaving like a $U_N$-maximizer when choosing $a_1$, i.e., before learning whether the shutdown button will be pressed. The above choice of $a_1^*$ is one option, but if you take that option, you reproduce exactly the behavior of maximizing the expectation of $U$, including the undesirable behavior where the agent acts as if it can win the lottery by arranging for the shutdown button to be pressed when it loses.
So in order to interpret Max's question as a math question, which we can settle by proving a theorem, we would have to ask: Is there a utility function $U'$ which satisfies our intuitive desideratum in the same formal sense that $U$ does, but doesn't produce bad behavior? The answer is, of course, no, since by assumption, maximizing $U'$ would produce exactly the same behavior, specified by the above policy, as maximizing $U$, and would therefore have the same unintended consequences.
Alternatively, we might ask whether there's a better way to choose $a_1$ that's still in accord with our intuitive desideratum of maximizing $U_N$ before the shutdown button is pressed, and $U_S$ after. For example, this post discusses a variant of the system in the corrigibility paper, originally proposed by Stuart Armstrong, and shows that that version avoids the problem with playing the lottery, though it still has undesirable consequences. However, the discussion then becomes about what counts as a reasonable formalization of our intuitive demands, which is no longer something we can settle by proving a theorem.
1 comment
comment by Stuart_Armstrong · 2015-02-06T17:20:36.000Z · LW(p) · GW(p)
I think we can improve the setup by conditioning only on things the AI has no control over.
Imagine that every turn, there is a random bit B sent along a wire. It's almost certainly 1, but there's a tiny chance that it's 0.
If the button is pressed and B=0, then the agent's utility doesn't update. But if the button is pressed and B=1, the utility is updated as usual.
Except that the expression is conditioned not on "press", but on "B=0".
Now we just have to define B in such a way that the AI can't affect it - we need a defined source of true randomness.
Even better: B has already been calculated by some defined past process; the AI just doesn't know what it is yet.
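A rough sketch in code of how this changes the lottery example from the post (the numbers and the exact way B enters are illustrative assumptions, not a worked-out proposal):

```python
# Toy comparison: conditioning on "not Press" (gameable via the robot arm)
# versus conditioning on the action-independent bit B = 0. Numbers are made up.

P_WIN = 1e-9    # hypothetical lottery odds
JACKPOT = 1e6   # hypothetical U_N payoff for winning

def expected_UN_given_not_press(build_arm: bool) -> float:
    """Original conditioning: the arm presses the button in every losing world,
    so conditioning on "not Press" leaves only the winning worlds."""
    return JACKPOT if build_arm else P_WIN * JACKPOT

def expected_UN_given_B_zero(build_arm: bool) -> float:
    """New conditioning: B comes from a process the agent cannot affect, so
    conditioning on B = 0 leaves the lottery odds unchanged, and building
    the arm no longer buys the agent anything."""
    return P_WIN * JACKPOT

print(expected_UN_given_not_press(True), expected_UN_given_not_press(False))
print(expected_UN_given_B_zero(True), expected_UN_given_B_zero(False))
```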