Is there a difference between uncertainty over your utility function and uncertainty over outcomes?

post by Chris_Leong · 2019-03-18T18:41:38.246Z · LW · GW · 2 comments

This is a question post.

I was discussing UDT yesterday and the question came up of how to treat uncertainty over your utility function. I suggested that this could be transformed into a question of uncertainty over outcomes. The intuition is that if you were to discover that apples were twice as valuable, you could simply pretend that you had instead received twice as many apples. Is this approach correct? In particular, is this transformation compatible with UDT-style reasoning?
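
(A minimal sketch of the transformation I have in mind, assuming for simplicity that utility is linear in apples; all the numbers and names below are purely illustrative.)

    # Sketch of folding utility-scale uncertainty into outcome uncertainty,
    # assuming utility is linear in apples. Numbers are illustrative only.

    p_high_value = 0.5        # credence that apples are twice as valuable
    apples_received = 10

    # Option 1: keep the uncertainty on the utility function.
    eu_direct = (
        p_high_value * (2 * apples_received)          # apples worth 2 utils each
        + (1 - p_high_value) * (1 * apples_received)  # apples worth 1 util each
    )

    # Option 2: fix the utility per apple at 1 and instead "pretend" the
    # high-value world hands you twice as many apples.
    eu_transformed = (
        p_high_value * 1 * (2 * apples_received)
        + (1 - p_high_value) * 1 * apples_received
    )

    assert eu_direct == eu_transformed  # identical under linearity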

Answers

answer by Scott Garrabrant · 2019-03-18T20:23:55.447Z · LW(p) · GW(p)

Utility functions are invariant up to positive affine transformation. I don't need to say how much I value a human life or how much I value a chicken life to make decisions in weird trolley problems involving humans and chickens. I only need to know relative values. However, utility uncertainty messes this up. Say I have two hypotheses: one in which human and chicken lives have the same value, and one in which humans are a million times more valuable. I assign the two hypotheses equal weight.

I could normalize and say that in both cases a human is worth 1 util. Then, when I average across utility functions, humans are about twice as valuable as chickens. But if I normalize and say that in both cases a chicken is worth 1 util, then when I average, the human is worth about 500,000 times as much. (You can still treat it like other uncertainty, but you have to make this normalization choice.)
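
(A back-of-the-envelope version of that calculation, with purely illustrative numbers, just to make the normalization choice concrete:)

    # Hypothesis A: humans and chickens are equally valuable.
    # Hypothesis B: humans are a million times more valuable.
    # Both hypotheses get weight 0.5. Numbers are illustrative only.

    weight = 0.5

    # Normalize so a human is worth 1 util in both hypotheses.
    chicken_a, chicken_b = 1.0, 1e-6
    avg_chicken = weight * chicken_a + weight * chicken_b
    print(1.0 / avg_chicken)   # ~2: humans about twice as valuable as chickens

    # Normalize so a chicken is worth 1 util in both hypotheses.
    human_a, human_b = 1.0, 1e6
    avg_human = weight * human_a + weight * human_b
    print(avg_human)           # ~500,000: humans about 500,000x as valuable

Both averages use the same credences; only the choice of which life gets pinned to 1 util changes, and the answer swings by a factor of roughly 250,000.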

comment by Al Truist (al-truist) · 2019-03-19T23:07:12.335Z · LW(p) · GW(p)

This is precisely the issue discussed at length in Brian Tomasik's article "Two-Envelopes Problem for Uncertainty about Brain-Size Valuation and Other Moral Questions".

comment by cousin_it · 2019-03-18T22:25:27.405Z · LW(p) · GW(p)

But if you can answer questions like "how much money would I pay to save a human life under the first hypothesis" and "under the second hypothesis", which seem like questions you should be able to answer, then the conversion stops being a problem.

Replies from: Scott Garrabrant
comment by Scott Garrabrant · 2019-03-18T23:23:03.184Z · LW(p) · GW(p)

You are just normalizing on the dollar. You could ask "how many chickens would I kill to save a human life" instead, and you would normalize on a chicken.

Replies from: cousin_it
comment by cousin_it · 2019-03-18T23:49:39.334Z · LW(p) · GW(p)

I'm normalizing on my effort - eventually, on my pleasure and pain as a common currency. That's not quite the same as normalizing on chickens, because the number of dead chickens in the world isn't directly qualia.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2019-03-19T10:32:22.950Z · LW(p) · GW(p)

The min-max normalisation of https://www.lesswrong.com/posts/hBJCMWELaW6MxinYW/intertheoretic-utility-comparison [LW · GW] can be seen as the formalisation of normalising on effort (it normalises on what you could achieve if you dedicated yourself entirely to one goal).

comment by Stuart_Armstrong · 2019-03-19T10:30:12.537Z · LW(p) · GW(p)

Indeed.

We tried to develop a whole theory to deal with these questions, but didn't find any nice answer: https://www.lesswrong.com/posts/hBJCMWELaW6MxinYW/intertheoretic-utility-comparison [LW · GW]

comment by Chris_Leong · 2019-03-18T21:39:00.511Z · LW(p) · GW(p)

Thanks, very interesting. I guess when I said I was imagining a situation where oranges were twice as valuable, I was imagining them as worth X utility in situation A and 2X in situation B, and suggesting we could just double the number of oranges instead. So it seems like you're talking about a slightly different situation from the one I was envisaging.

answer by waveman · 2019-03-18T22:08:04.654Z · LW(p) · GW(p)

"if you were to discover that apples were twice as valuable, you could simply pretend that you instead received twice as many apples"

No, because twice as many apples are not usually twice as valuable. This is because utility functions are not linear.

You can kind of deal with uncertainty about utility by fudging expectations about outcomes but, trust me, it is the primrose path to hell.

comment by Chris_Leong · 2019-03-18T22:51:14.133Z · LW(p) · GW(p)

If the utility function is the square root of the number of apples, you could multiply the number of apples by four. The question is more about whether you can do that kind of adaptation than about anything else.
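
(As a minimal sanity check of that adjustment, assuming utility is exactly the square root of the number of apples; the numbers are illustrative only:)

    import math

    # If utility is sqrt(apples), then quadrupling the apples doubles the
    # utility, since sqrt(4n) = 2 * sqrt(n).
    def utility(apples):
        return math.sqrt(apples)

    for n in (1, 4, 9, 25):
        assert math.isclose(2 * utility(n), utility(4 * n))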

2 comments

comment by Al Truist (al-truist) · 2019-03-20T20:07:34.006Z · LW(p) · GW(p)

What if you decided that, actually, apples had negative value? Then would you pretend you received negative apples?

Replies from: Chris_Leong
comment by Chris_Leong · 2019-04-05T10:58:56.959Z · LW(p) · GW(p)

We could take apples away.