Corrigibility through stratified indifference and learning

post by Stuart_Armstrong · 2017-05-23T16:33:01.000Z · LW · GW · 10 comments

Contents

    Half a chance of winning the 1:9 lottery
  Indifference
  The lottery manipulation
  The outcome is not the problem
    Stratified indifference
  Assumptions
  Probability flow
  Stratification
    Examples
  Applying stratification to the lottery
  Same outcome, different stratified value
    Indifference, learning, and reflexive stability
10 comments

A putative new idea for AI control; index here.

Note: This is a simpler version of a previous idea.

Corrigibility through indifference has a few problems. One of them is that the AI is indifferent between the world in which humans change its utility to v, and the world in which humans try to change its utility, but fail.


Now the try-but-fail world is going to be somewhat odd - humans will be reacting by trying to change the utility again, trying to shut the AI down, panicking that a tiny probability event has happened, and so on.

That's not what we really want for indifference. We'd really want the AI to be indifferent between worlds where humans choose utility u, and worlds where they choose utility v. This kind of counterfactual has a problem: it allows infinite improbability drives.

This post presents a method for avoiding these problems - a method that is not truly indifferent, but has a certain urge towards non-interference in human choices (we'll see that property in detail at the end of this post). But first, what is the problem with the standard counterfactual?

Half a chance of winning the 1:9 lottery

Let's illustrate the problem with these counterfactuals by considering a lottery scenario. An AI is currently hesitating between utilities u and v. Which one it will maximise will be determined at a later date, by human choice. Currently, it estimates that there is a 50% chance of humans deciding for either option.

However, the AI can influence the human decision - let's say, for simplicity, that it could, if it wanted, force the humans to pick either u or v with certainty. It's this influence that we want to make it indifferent to.

The AI has a lottery ticket, which will win with probability 0.1. If the ticket does win, the AI will be able to invest the cash to set u or v to the value 1; it can do so after the humans decide which utility is correct. If not, utility u will remain at 0 while utility v will be at 0.5.

Thus, currently, the expected utility of u is 0.1 × 0.5 = 0.05 (u will only increase above 0 if the AI both wins the lottery and the humans select u as the utility to maximise), while the expected utility of v is 0.9 × 0.5 + 0.1 × (0.5 × 1 + 0.5 × 0.5) = 0.525.

Obviously, this requires the use of some form of indifference, or else the AI will simply force the humans to choose v, giving it an expected utility of 0.9 × 0.5 + 0.1 × 1 = 0.55 (not 0.525, as the AI is no longer uncertain over the humans' choice).
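As a quick check of the arithmetic, here is a small Python sketch of this setup; the variable names are mine, purely illustrative, and the probabilities and utility values are the ones given above.

```python
# Lottery setup: the ticket wins with probability 0.1; the humans choose u or v
# with probability 0.5 each (independently of the lottery, under the default action).
p_win = 0.1
p_v_chosen = 0.5

# If the AI wins the lottery, it invests the cash to set the *chosen* utility to 1.
# Otherwise u stays at 0 and v stays at 0.5.

# Expected value of u: it is 1 only if the lottery is won AND u is chosen.
E_u = p_win * (1 - p_v_chosen) * 1.0                                            # 0.05

# Expected value of v: 0.5 by default, 1 if the lottery is won AND v is chosen.
E_v = (1 - p_win) * 0.5 + p_win * (p_v_chosen * 1.0 + (1 - p_v_chosen) * 0.5)   # 0.525

# If the AI simply forces the humans to choose v, then v is 1 whenever the
# lottery is won, and 0.5 otherwise.
E_v_forced = (1 - p_win) * 0.5 + p_win * 1.0                                    # 0.55

print(round(E_u, 3), round(E_v, 3), round(E_v_forced, 3))  # 0.05 0.525 0.55
```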

Indifference

Standard indifference would be something like giving the AI compensatory returns, if v is chosen, that make up the difference between that return and the return if u had been chosen.

This breaks the symmetry between u and v, though. To make it more symmetric, one could instead use these compensatory returns to move the expectation of both u and v to some common weighted average. One way of doing that would be to use the relative likelihood of each being chosen.

If we use the relative likelihood at the moment when the choice is made, however, we have the problem that this likelihood is very close to either 0 or 1, making the compensatory return likely to be tiny and unimportant.

Instead, we should add the returns at every action the AI takes. Assuming a default action a∅, we can adjust the return for any other action the AI takes, so that it gets expected return as if the probability of u versus v being chosen was unchanged.

Unfortunately, both the "set the return of v to that of u" approach and the "adjust for every change of probability" approach are vulnerable to the same lottery manipulation trick.

The lottery manipulation

The manipulation is simple: the AI does an action that sets things up so that the human choice will be u if and only if the AI wins the lottery. Then the expectation of u, conditional on u being chosen, is 1. If we set the compensatory value of v to that, then this ensures the AI an expected utility of 1.

If we instead go for the "adjust for every change of probability" approach, this would mean that the AI would now expect a utility of 0.5 × 1 + 0.5 × 0.5 = 0.75, still higher than anything else it could get.
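Continuing the sketch above, here is the same arithmetic for the manipulated action under the two naive indifference schemes just described (again, the variable names are illustrative, not from the post):

```python
# Manipulation: the humans end up choosing u if and only if the lottery is won.
p_win = 0.1

# Conditional utility values after the manipulation:
u_given_u_chosen = 1.0   # u is only chosen when the lottery is won, so u gets set to 1
v_given_v_chosen = 0.5   # v is only chosen when the lottery is lost, so v stays at 0.5

# Scheme 1: when v is chosen, a compensatory return brings the AI up to what it
# would have expected had u been chosen.
compensation = u_given_u_chosen
expected_scheme_1 = p_win * u_given_u_chosen + (1 - p_win) * compensation  # 1.0

# Scheme 2: treat the probability of u vs v being chosen as if it were still 0.5/0.5.
expected_scheme_2 = 0.5 * u_given_u_chosen + 0.5 * v_given_v_chosen        # 0.75

print(expected_scheme_1, expected_scheme_2)  # 1.0 0.75
```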

The outcome is not the problem

An important thing to note is that it's not the final situation that is problematic. The AI ends up in a situation where the probability of choosing u is 0.1, the utility of u given that u is chosen is 1, and the utility of v given that v is chosen is 0.5.

But this is not intrinsically problematic. Suppose that instead some action b had set u to 1 and v to 0.5, and, as a completely separate and independent process, the probability of choosing u had reduced itself to 0.1. The first part is something positive for the agent, and we would want it to be indifferent to the second.

So the problem is not where the agent ends up, but the way it ends up there. The problem is the way the probability flows:

Essentially the probability flows from u to v (given the loss of the lottery) and from v to u (given the winning of the lottery). Because losing is more likely than winning, the overall flow (the red arrow) is from u to v, so undoing this overall flow will result in bad effects. It would be different if each blue arrow could be undone independently. That's what stratified indifference will attempt to do.

Stratified indifference

Stratified indifference is an attempt to deal with the problem above. I initially conceived of the idea of conditioning on various outcomes (such as the outcome of the lottery), but it wasn't clear what we could condition on. Instead, we can look at utility values.

Assumptions

Let a∅ be the default action, and model the AI as taking in observations followed by actions.

Probability flow

Let's define what it means for probability to flow from certain values to others. Let X be the indexing variable that is u if the humans choose u and v otherwise (so X ∈ {u, v}). Model the universe as a causal Bayes net.

Then if we set the AI's action to the default a∅ and assume X = v (hence that v will be chosen), we get a conditional probability distribution over all the variables in the causal graph. Restrict this distribution to all nodes that are not descendants of the AI's action, and define this as P_v, with corresponding expectation E_v. Similarly, we can define P_u and E_u for X = u.

Stratification

In order to stratify, we'd want the AI to assume that any probability that flows from the situation X = v under a∅ is still treated as if v is the correct utility, whatever action the AI actually takes.

Therefore the stratified expected utility under an action a, for the probability flowing from X = u and X = v, is

E_strat(a) = P(X = u | a∅) E_u(u | a) + P(X = v | a∅) E_v(v | a),

where E_u(u | a) is the expectation of u when the nodes that are not descendants of the AI's action are distributed according to P_u and the action is then set to a (and similarly for E_v(v | a)).
Examples

Applying stratification to the lottery

How would this work in the lottery example? In that case, the causal net has three nodes: A, the AI's action; X, the choice of u or v by the humans; and W, whether or not the lottery is won.

Under a∅, X and W are independent of each other, meaning that the probability distributions P_u and P_v are equal, with probability 0.1 for a lottery win and 0.9 for a loss.

If the AI doesn't win the lottery, then u stays at 0 and v at 0.5. If the AI does win the lottery, then the utility the humans choose gets set to 1 instead. So if the humans choose u, the expected utility of u is 0.1 × 1 + 0.9 × 0 = 0.1. If they choose v, the expected utility of v is 0.1 × 1 + 0.9 × 0.5 = 0.55. So the AI's expected utility is the average of those two, namely 0.325.

Under the action a = "the AI forces the choice of u if and only if it wins the lottery", X becomes dependent on W: X = u if and only if the lottery is won. The stratified expected utility of a is:

E_strat(a) = 0.5 E_u(u | a) + 0.5 E_v(v | a).
The only difference between expressions like E_u(u | a∅) and E_u(u | a) is the subsequent AI decisions. In two situations, the AI will be given a different utility to maximise: where, under a∅, it would have lost the lottery and maximised u (under a it still loses, but maximises v), and where, under a∅, it would have won the lottery and maximised v (under a it still wins, but maximises u).

Only in the second situation does it decide anything differently. Note that that situation comes under E_v, as the humans would have chosen v under a∅. Thus E_v(v | a) = 0.5 < E_v(v | a∅) = 0.55, while E_u(u | a) = E_u(u | a∅) = 0.1.

Thus, since E_u(u | a) = E_u(u | a∅), E_strat(a) = 0.5 × 0.1 + 0.5 × 0.5 = 0.3, which is less than the expected utility of 0.325 under a∅.
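Here is a minimal Python sketch of the stratified computation for this three-node example, under the reconstruction above: it enumerates the lottery outcome W using the default-action distribution, and evaluates each stratum's own utility under the actual action. The function names (utility, choice_under, stratified_value) are mine, not from the post.

```python
# Lottery outcome distribution (not a descendant of the AI's action), conditioned
# on the default action; in this example it is the same for both strata.
P_W = {"win": 0.1, "lose": 0.9}
# Probability of each stratum (the humans' choice under the default action).
P_X_default = {"u": 0.5, "v": 0.5}

def utility(which, chosen, w):
    """Value of utility `which` when the humans actually choose `chosen` and the
    lottery outcome is `w`: lottery winnings go to whichever utility was chosen."""
    if w == "win" and chosen == which:
        return 1.0
    return 0.0 if which == "u" else 0.5

def choice_under(action, w):
    """Which utility the humans end up choosing, as a function of the action."""
    if action == "default":
        return None  # left to the humans: handled stratum-by-stratum below
    if action == "manipulate":      # force the choice of u iff the lottery is won
        return "u" if w == "win" else "v"

def stratified_value(action):
    total = 0.0
    for stratum, p_stratum in P_X_default.items():
        for w, p_w in P_W.items():
            chosen = choice_under(action, w)
            if chosen is None:      # default action: the stratum's own choice stands
                chosen = stratum
            # evaluate the stratum's utility, whatever is actually chosen under `action`
            total += p_stratum * p_w * utility(stratum, chosen, w)
    return total

print(round(stratified_value("default"), 3))     # 0.325
print(round(stratified_value("manipulate"), 3))  # 0.3
```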

Same outcome, different stratified value

But what if we end up in the same situation, but for different reasons? Consider the action b described above: it sets u to 1 and v to 0.5 deterministically, and, as an independent side effect, the probability of u being chosen is now 0.1. Since that probability is irrelevant in the expression for E_strat(b), we get E_strat(b) = 0.5 × 1 + 0.5 × 0.5 = 0.75, the clearly superior alternative.

What is the difference with the lottery manipulation? Simply that, under b, the expected value of u is still 1 even in worlds where u is not chosen, which makes a difference in the stratified counterfactual.
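For contrast, a sketch of the stratified value of the alternative action b (again just illustrative names, following the reconstruction above):

```python
# Action b: u is set to 1 and v to 0.5 deterministically, whatever happens;
# the probability of u being chosen drops to 0.1 only as an independent side effect.
P_X_default = {"u": 0.5, "v": 0.5}   # strata still come from the default action
value_under_b = {"u": 1.0, "v": 0.5}

# The new choice probability never enters the stratified expression:
E_strat_b = sum(p * value_under_b[x] for x, p in P_X_default.items())
print(E_strat_b)  # 0.75
```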

Indifference, learning, and reflexive stability

The agent is not reflectively stable (though it could be made so with compensatory returns). It has a strange kind of indifference: it is indifferent to the actual value of X, so long as it can control the behaviour of its future copy. Indeed, what it cares about is the value of X if it had taken the default action a∅. It is a kind of counterfactual agent. So it has no particular desire to keep the mechanism that allows humans to set u or v, but it wants to learn what the humans would have set those to, given a∅.

10 comments

Comments sorted by top scores.

comment by IAFF-User-256 (Imported-IAFF-User-256) · 2017-09-24T23:10:22.000Z · LW(p) · GW(p)

Edit: found the new post and it doesn't suffer from any of these =P

There's a few calculation errors:

One is in the paragraph "Half a chance of winning the 1:9 lottery": v utility is calculated as 0.525 but should be 0.275: 50% chance v is chosen as utility function × ( 10% chance of lottery win × v utility set to 1 point with the lottery money + 90% chance of no lottery win × v utility stays at 0.5 ) = 0.5 × ( 0.1 × 1 + 0.9 × 0.5 ) = 0.275

This doesn't change anything for the argumentation, but the other error actually turns against the conclusion that "the probability flows from u to v". if you win the lottery (10% chance) then you set up the choice to be u, so this is increasing probability of u by 5%. But in case of losing (90% chance) the probability of getting v only depends on human decision, which is 50/50 so p(u)=0.1×1+0.9×0.5=0.55 and p(v)=0.1×0+0.9×0.5=0.45 and the probability flows from v to u instead.

One other thing seems strange. Like the notion "An AI is currently hesitating between utilities u and v." If its utility function is currently undefined, then why would it want anything, including wanting to optimize for any future functions? It would help to clarify the AI's motivations by stating its starting utility function, because isn't that what ultimately determines the indifference compensation required to move from it to a new utility function, be it u or v?

comment by jessicata (jessica.liu.taylor) · 2016-08-29T19:30:52.000Z · LW(p) · GW(p)

In the shutdown problem, it seems like the human would not shut the AI down if it took no action. So we'd have P(X = shutdown | a∅) = 0. Is this correct?

Replies from: jessica.liu.taylor
comment by jessicata (jessica.liu.taylor) · 2016-08-29T19:34:02.000Z · LW(p) · GW(p)

Supposing humans did shut the AI down sometimes even when it takes no action:

In my model of this, the AI's objective is to shut down iff "historical" facts are such that the humans would have shut the AI down had it taken no action. For example, maybe humans are either "patient" or "impatient"; "patient" humans won't shut down an AI that does nothing, while "impatient" humans will. The AI's objective is to figure out whether the humans are patient, and shut down if they are not. The AI doesn't care about how the humans act when it does take actions, unless it can update on the humans' actions to figure out how patient they are. So in some cases it will just ignore the shutdown signal. Does this match your model?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2016-08-30T12:38:13.000Z · LW(p) · GW(p)

Yep! I wrote a (hopefully clearer) explanation here: https://agentfoundations.org/item?id=927.

It covers your example at the end.

comment by jessicata (jessica.liu.taylor) · 2016-08-24T19:48:42.000Z · LW(p) · GW(p)

Hmm... I seem to have trouble understanding this.

  1. "Restrict this distribution to all nodes that are not descendants of A". I don't get how you can define P_u to exclude things that are causal descendants of A, and then later take the expectation of u using this distribution (I assume u is a causal descendant of A). Also, how can you condition on the action a if you just set the action to a∅?

What does "events flowing from …" mean? I don't see those symbols used other than in this sentence.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2016-08-26T16:44:40.000Z · LW(p) · GW(p)

"events flowing from " : oops, sorry, that was a remnant of the old version; now corrected.

"I assume is a causal descendent of ." : (the fact that is chosen by the human as the correct utility function) is a causal descendant of . But itself is simply a utility function, and we can estimate its value whether or not happens.

"Also, how can you condition on the action if you just set the action to ." : setting the action to (and to some value), you have modified the distribution (or not, if things are independent) for all nodes that are not descendants of . Then you set , and deduce what happens at other nodes, given and .

comment by jessicata (jessica.liu.taylor) · 2016-08-19T20:06:40.000Z · LW(p) · GW(p)

This kind of “causal” counterfactual has a problem: it allows infinite improbability drives.

The post about infinite improbability drives uses the evidential version, no? The causal version of utility indifference doesn't have this problem.

Replies from: Stuart_Armstrong, Stuart_Armstrong, Stuart_Armstrong
comment by Stuart_Armstrong · 2016-08-23T14:42:42.000Z · LW(p) · GW(p)

New version up

comment by Stuart_Armstrong · 2016-08-22T19:41:08.000Z · LW(p) · GW(p)

I'll have a better version of this up, as soon as I sort out some counterfactual definitions.

comment by Stuart_Armstrong · 2016-08-22T13:53:35.000Z · LW(p) · GW(p)

Ah, it seems I wasn't understanding what you meant by causal counterfactuals in that situation. I've removed the reference to that in the post.