Utility vs Probability: idea synthesis

post by Stuart_Armstrong · 2015-03-27T12:30:06.631Z · LW · GW · Legacy · 15 comments

Contents

  Utility scaling
  Utility translating
15 comments

A putative new idea for AI control; index here.

This post is a synthesis of some of the ideas from utility indifference and false miracles, in an easier-to-follow format that illustrates better what's going on.

 

Utility scaling

Suppose you have an AI with a utility u and a probability estimate P. There is a certain event X which the AI cannot affect. You wish to change the AI's estimate of the probability of X, by, say, doubling the odds ratio P(X):P(¬X). However, since it is dangerous to give an AI false beliefs (they may not be stable, for one), you instead want to make the AI behave as if it were a u-maximiser with doubled odds ratio.

Assume that the AI is currently deciding between two actions, α and ω. The expected utility of action α decomposes as:

u(α) = P(X)u(α|X) + P(¬X)u(α|¬X).

The utility of action ω is defined similarly, and the expected gain (or loss) of utility by choosing α over ω is:

u(α)-u(ω) = P(X)(u(α|X)-u(ω|X)) + P(¬X)(u(α|¬X)-u(ω|¬X)).

If we were to double the odds ratio, the expected utility gain becomes:

u(α)-u(ω) = (2P(X)(u(α|X)-u(ω|X)) + P(¬X)(u(α|¬X)-u(ω|¬X)))/Ω,    (1)

for some normalisation constant Ω = 2P(X)+P(¬X), independent of α and ω.

We can reproduce exactly the same effect by instead replacing u with u', defined by u'|X = 2(u|X) and u'|¬X = u|¬X (that is, double the utility in the X world and leave it unchanged in the ¬X world). Then:

u'(α)-u'(ω) = P(X)(u'(α|X)-u'(ω|X)) + P(¬X)(u'(α|¬X)-u'(ω|¬X))
            = 2P(X)(u(α|X)-u(ω|X)) + P(¬X)(u(α|¬X)-u(ω|¬X)).    (2)

This, up to the unimportant positive factor Ω, is the same equation as (1). Thus we can accomplish, via utility manipulation, exactly the same effect on the AI's behaviour as by changing its probability estimates.

Notice that we could also have defined u'' by u''|X = u|X and u''|¬X = (u|¬X)/2.

This is just the same u', scaled (by a factor of one half).

The utility indifference and false miracles approaches were just special cases of this, where the odds ratio was effectively sent to infinity or zero by scaling the utility in one of the worlds by zero rather than two. But the general result is that one can start with an AI with utility/probability estimate pair (u,P) and map it to an AI with pair (u',P) which behaves just like (u,P'). Changes in probability can be replicated as changes in utility.
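
To make this concrete, here is a minimal Python sketch (with made-up utilities and probabilities, not taken from the post) checking that a u-maximiser whose odds ratio P(X):P(¬X) has been doubled ranks two actions exactly as a u'-maximiser that keeps the original P, where u' doubles u in the X world and leaves the ¬X world alone.

```python
# Minimal sketch (hypothetical numbers): compare two ways of getting the
# "doubled odds ratio" behaviour -- modify P, or modify u.

p_x = 0.2                                   # original P(X)

# Expected utility of each action conditional on X and on not-X.
u = {
    "alpha": {"X": 10.0, "notX": 1.0},
    "omega": {"X": 2.0,  "notX": 4.0},
}

def expected_u(util, px):
    """Expected utility of each action when P(X) = px."""
    return {a: px * v["X"] + (1 - px) * v["notX"] for a, v in util.items()}

# Option 1: double the odds ratio P(X):P(not-X), keep u unchanged.
odds = p_x / (1 - p_x)
p_x_doubled = 2 * odds / (1 + 2 * odds)
eu_changed_beliefs = expected_u(u, p_x_doubled)

# Option 2: keep P, but use u' with u'|X = 2(u|X) and u'|notX = u|notX.
u_prime = {a: {"X": 2 * v["X"], "notX": v["notX"]} for a, v in u.items()}
eu_changed_utility = expected_u(u_prime, p_x)

# The two agents' expected-utility *differences* agree up to the positive
# constant Omega = 2 P(X) + P(not-X), so they prefer the same action.
assert max(eu_changed_beliefs, key=eu_changed_beliefs.get) == \
       max(eu_changed_utility, key=eu_changed_utility.get)
print(eu_changed_beliefs, eu_changed_utility)
```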

 

Utility translating

In the previous section, we multiplied certain utilities by two, and in doing so we implicitly made use of the zero point of u. But utility is invariant under translation, so this zero point has no intrinsic significance.

It turns out that we don't need to care about this: any zero will do, since what matters is simply that the spread between options is doubled in the X world but not in the ¬X one.

But that relies on the AI being unable to affect the probability of X and ¬X itself. If the AI has an action that will increase (or decrease) P(X), then it becomes very important where we set the zero before multiplying. Setting the zero in a different place is isomorphic to adding a constant to the X world but not the ¬X world (or vice versa). Obviously this will greatly affect the AI's preferences between X and ¬X.
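
A small sketch of this point, again with hypothetical numbers: translating u by a constant before doubling the X branch never changes which action is preferred (the shift cancels between actions), but it does change how strongly the agent prefers being in an X world over a ¬X world, which is exactly what matters if the agent can influence P(X).

```python
# Sketch (hypothetical numbers): shift the zero of u by a constant c, then
# double the X branch.  The preferred action never changes, but the gap
# between X worlds and not-X worlds does.

p_x = 0.5

u = {
    "alpha": {"X": 10.0, "notX": 1.0},
    "omega": {"X": 2.0,  "notX": 4.0},
}

def shift_then_double(util, c):
    """u'(w) = 2(u(w)+c) in X worlds, u(w)+c in not-X worlds."""
    return {a: {"X": 2 * (v["X"] + c), "notX": v["notX"] + c}
            for a, v in util.items()}

def expected_u(util, px):
    return {a: px * v["X"] + (1 - px) * v["notX"] for a, v in util.items()}

for c in (0.0, -5.0, 100.0):
    u_prime = shift_then_double(u, c)
    eu = expected_u(u_prime, p_x)
    best = max(eu, key=eu.get)
    x_gap = u_prime[best]["X"] - u_prime[best]["notX"]
    print(f"c={c:+7.1f}  best action: {best}   u'(.|X) - u'(.|notX): {x_gap:+.1f}")
# The best action is the same for every c, but the X vs not-X gap varies --
# so the choice of zero only matters if the AI can change P(X).
```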

One way of preventing the AI from trying to affect X is to set this constant so that u'(X)=u'(¬X), in expectation. Then the AI has no preference between the two situations, and will not seek to boost one over the other. However, note that u'(X) here is an expected utility calculation. Therefore:

  1. Choosing the constant so that u'(X)=u'(¬X) requires accessing the AI's probability estimate P for various worlds; it cannot be done from outside, by multiplying the utility, as the previous approach could.
  2. Even if u'(X)=u'(¬X), this does not mean that u'(X|Y)=u'(¬X|Y) for every event Y that could happen before X does. Simple example: X is a coin flip, and Y is a bet on that coin flip made by someone the AI doesn't like.

This explains all the complexity of the utility indifference approach, which essentially tries to decompose the possible universes (adding constants to particular subsets of them) to ensure that u'(X|Y)=u'(¬X|Y) for any Y that could happen before X does.
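
The sketch below (hypothetical numbers and world structure) illustrates both numbered points above: computing the compensating constant uses the AI's own probability estimate P over worlds, and even once u'(X)=u'(¬X) holds in expectation, conditioning on the intermediate event Y from the coin-flip example reintroduces a preference between X and ¬X.

```python
# Sketch (hypothetical numbers): X is a fair coin flip, Y is which way a
# disliked gambler bets before the flip.  The AI loses utility when the
# gambler's bet wins.

# P over the four (Y, X) worlds -- Y and X independent, both fair.
P = {("bet_heads", "heads"): 0.25, ("bet_heads", "tails"): 0.25,
     ("bet_tails", "heads"): 0.25, ("bet_tails", "tails"): 0.25}

# u in each world (made-up values; the gambler winning is bad for the AI).
u = {("bet_heads", "heads"): -2.0, ("bet_heads", "tails"): +1.0,
     ("bet_tails", "heads"): +1.0, ("bet_tails", "tails"): -1.0}

def cond_exp(util, prob, pred):
    """E[util | pred], computed from the AI's own probability estimate."""
    mass = sum(p for w, p in prob.items() if pred(w))
    return sum(p * util[w] for w, p in prob.items() if pred(w)) / mass

is_X     = lambda w: w[1] == "heads"   # X = coin lands heads
is_not_X = lambda w: w[1] == "tails"

# Point 1: the compensating constant requires access to P.  Add c to every
# X-world so that u'(X) = u'(not-X) in expectation.
c = cond_exp(u, P, is_not_X) - cond_exp(u, P, is_X)
u_prime = {w: v + (c if is_X(w) else 0.0) for w, v in u.items()}
assert abs(cond_exp(u_prime, P, is_X) - cond_exp(u_prime, P, is_not_X)) < 1e-9

# Point 2: conditional on Y = "the gambler bet heads", indifference is gone.
bet_heads = lambda w: w[0] == "bet_heads"
ux_given_y    = cond_exp(u_prime, P, lambda w: bet_heads(w) and is_X(w))
unotx_given_y = cond_exp(u_prime, P, lambda w: bet_heads(w) and is_not_X(w))
print(ux_given_y, unotx_given_y)   # -1.5 vs +1.0: the AI now cares about X vs not-X
```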

15 comments

Comments sorted by top scores.

comment by buybuydandavis · 2015-03-27T20:35:43.334Z · LW(p) · GW(p)

I thought one of the takeaways from Bernardo + Smith in Bayesian Theory was that from a decision theory perspective, your cost function and your probability function are basically an integrated whole, any division of which is arbitrary.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2015-03-30T12:42:13.960Z · LW(p) · GW(p)

That makes sense. Do you have a reference that isn't a book?

Replies from: buybuydandavis
comment by buybuydandavis · 2015-03-30T19:27:20.322Z · LW(p) · GW(p)

Sorry, no. I never went beyond their book with those guys.

comment by royf · 2015-03-27T17:59:01.746Z · LW(p) · GW(p)

This is not unlike Neyman-Pearson theory. Surely this will run into the same trouble with more than 2 possible actions.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2015-03-30T12:46:19.888Z · LW(p) · GW(p)

No, no real connection to Neyman-Pearson. And it's fine with more than 2 actions - notice that each action only uses itself in the definition. And u' doesn't even use any actions in its definition.

comment by Slider · 2015-03-30T14:05:59.088Z · LW(p) · GW(p)

If one were to believe there is only one thing that agents ought to maximise could this be used as a way to translate agents that actually maximise another thing as maximising "the correct thing" but with false beliefs? If rationalism is the deep rejection of false beliefs, could this be a deep error mode where agents are seen as having false beliefs instead of being recognised as having different values? Then demanding "rectification" of the factual errors would actually be a form of value imperialism.

This could also be seen as a divergence of epistemological and instrumental rationality, in that instrumental rationality would accept falsehoods if they are useful enough. That is, if you care about probabilities only in order to maximise expected utility, then uncertainty about the details of the specific way the goal is reached and uncertainty about the desirability of the outcome of the process are largely interchangeable. In the extreme of low probability accuracy and high utility accuracy, you would know to select the action which gets you what you want, but be unsure how it brings it about. The other extreme, high probability accuracy but low utility accuracy, would be the technically capable AI of which we don't know whether it is allied with us or against us.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2015-03-30T14:12:08.843Z · LW(p) · GW(p)

If one were to believe there is only one thing that agents ought to maximise could this be used as a way to translate agents that actually maximise another thing as maximising "the correct thing" but with false beliefs?

Not easily. It's hard to translate a u-maximiser for complex u, into, say, a u-minimiser, without redefining the entire universe.

comment by Skeptityke · 2015-03-28T17:31:58.761Z · LW(p) · GW(p)

"But the general result is that one can start with an AI with utility/probability estimate pair (u,P) and map it to an AI with pair (u',P) which behaves similarly to (u,P')"

Is this at all related to the Loudness metric mentioned in this paper? https://intelligence.org/files/LoudnessPriors.pdf It seems like the two are related... (in terms of probability and utility blending together into a generalized "importance" or "loudness" parameter)

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2015-03-30T12:48:47.111Z · LW(p) · GW(p)

Is this at all related to the Loudness metric mentioned in this paper?

Not really. They're only connected in that they both involve scaling of utilities (but in one case, scaling of whole utilities, in this case, scaling of portions of the utility).

comment by paulfchristiano · 2015-10-28T18:45:18.371Z · LW(p) · GW(p)

You write:

it is dangerous to give an AI false beliefs (they may not be stable, for one)

But the approach described here seems to give 100% identical results (certainly as long as we talk about beliefs uncorrelated with the agent's behavior). So why do you think that one is dangerous and the other is fine?

Can you describe a situation in which the two changes lead to different outcomes?

Replies from: Stuart_Armstrong, Stuart_Armstrong
comment by Stuart_Armstrong · 2015-10-28T18:59:00.843Z · LW(p) · GW(p)

I'm thinking about what happens in the close-to-zero or converging-to-zero case. "I am in an impossible world" seems more dangerous than "I am in a world I cannot improve or worsen".

To be honest, the real justification behind that was that the suggestions I'd heard about giving the AI false beliefs all seemed to fail, so this felt safer.

Replies from: paulfchristiano
comment by paulfchristiano · 2015-10-28T19:13:43.965Z · LW(p) · GW(p)

Again, can you describe any case where the two proposals do anything different?

Given that they do the same thing in every case, it seems highly unlikely that one is safe and the other is dangerous! At best you are obfuscating the problem.

For example, if this really dodges any failures associated with giving the AI false beliefs, that should give you an example where the two proposals do something different.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2015-10-29T18:57:26.018Z · LW(p) · GW(p)

Now that I've rested a bit, let me think about this properly. One reason I was wary of changing probability was because of all the other related probabilities - conditional probabilities, AND and OR expressions, and so on. Changing one probability would have to keep the rest consistent, while changing utility had consistency built in.

It feels like changing a prior might be equivalent. I'm not sure that there is any difference between changing a prior and changing the utility. But, again, there might be some consistency worries to think about - e.g. how do we change priors over correlations between events, and so on? It still seems that changing probability involves many choices while changing the utility doesn't (it seems equivalent to finding a Bayes factor that provides evidence for that specific event?)

I will think more.

Replies from: paulfchristiano
comment by paulfchristiano · 2015-10-30T15:03:12.312Z · LW(p) · GW(p)

The normal way of modifying a probability distribution to make X more likely is to increase the probability of each world where X is true, e.g. by doubling it. This is equivalent to observing evidence for X. It's also equivalent to your procedure for modifying utility functions.