Is there a difference between uncertainty over your utility function and uncertainty over outcomes?
post by Chris_Leong
This is a question post.
I was discussing UDT yesterday and the question came up of how to treat uncertainty over your utility function. I suggested that this could be transformed into a question of uncertainty over outcomes. The intuition is that if you were to discover that apples were twice as valuable, you could simply pretend that you instead received twice as many apples. Is this approach correct? In particular, is this transformation compatible with UDT-style reasoning?
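The proposed transformation can be checked in a toy case. The sketch below assumes a linear utility in apples (one made-up hypothetical; the scales and counts are illustrative), and shows that averaging over utility-function hypotheses gives the same expected utility as pretending the outcome itself was scaled:

```python
# Toy check of the proposed transformation, assuming a LINEAR utility in apples.
def utility(apples):
    return apples  # assumed: one util per apple

# Two hypotheses: apples are worth 1x or 2x their nominal value, equally likely.
scales = [1.0, 2.0]
probs = [0.5, 0.5]

apples_received = 3

# Expected utility, keeping the uncertainty in the utility function:
eu_over_utilities = sum(p * s * utility(apples_received)
                        for p, s in zip(probs, scales))

# The same quantity, recast as uncertainty over outcomes: under the "2x"
# hypothesis, pretend we received twice as many apples instead.
eu_over_outcomes = sum(p * utility(s * apples_received)
                       for p, s in zip(probs, scales))

assert eu_over_utilities == eu_over_outcomes  # equal here only because utility is linear
```

The equality holds here only because the assumed utility is linear; a nonlinear utility needs a different rescaling of the outcome, which is where the discussion below picks up.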
answer by Scott Garrabrant
Utility functions are invariant up to affine transformation. I don't need to say how much I value a human life or how much I value a chicken life to make decisions in weird trolly problems involving humans and chickens. I only need to know relative values. However, utility uncertainty messes this up. Say I have two hypotheses: one in which human and chicken lives have the same value, and one in which humans are a million times more valuable. I assign the two hypotheses equal weight.
I could normalize and say that in both cases a human is worth 1 util. Then, when I average across utility functions, humans are about twice as valuable as chickens. But if I normalize and say that in both cases a chicken is worth 1 util, then when I average, the human is worth about 500,000 times as much. (You can still treat it like other uncertainty, but you have to make this normalization choice.)
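The two normalizations above can be computed directly. This is a minimal sketch of the example as stated (equal weights, a 1:1 and a 1,000,000:1 human-to-chicken value ratio):

```python
# Two equally weighted hypotheses about the human : chicken value ratio:
#   H1: humans and chickens are equally valuable      (ratio 1)
#   H2: humans are a million times more valuable      (ratio 1,000,000)
RATIOS = [1.0, 1_000_000.0]   # human value / chicken value under each hypothesis
WEIGHTS = [0.5, 0.5]

# Normalization 1: fix "human = 1 util" in every hypothesis.
chicken_when_human_fixed = sum(w * (1.0 / r) for w, r in zip(WEIGHTS, RATIOS))
ratio_a = 1.0 / chicken_when_human_fixed   # humans come out ~2x as valuable

# Normalization 2: fix "chicken = 1 util" in every hypothesis.
human_when_chicken_fixed = sum(w * r for w, r in zip(WEIGHTS, RATIOS))
ratio_b = human_when_chicken_fixed         # humans come out ~500,000x as valuable

print(ratio_a)  # ~2.0
print(ratio_b)  # 500000.5
```

The same beliefs yield wildly different trade-offs depending on which side is pinned to 1 util, which is exactly the normalization choice the answer describes.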
↑ comment by cousin_it · 2019-03-18T22:25:27.405Z
But if you can answer questions like "how much money would I pay to save a human life under the first hypothesis" and "under the second hypothesis", which seem like questions you should be able to answer, then the conversion stops being a problem.
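One way to see the point: if each hypothesis fixes a dollar figure, money supplies a common scale and the normalization ambiguity disappears. The dollar amounts below are made up purely for illustration:

```python
# Hypothetical willingness-to-pay figures (made up): suppose under BOTH
# hypotheses you would pay $1,000,000 to save a human life, and the
# hypotheses disagree only about the human : chicken value ratio.
probs = [0.5, 0.5]
human_value_usd = [1_000_000.0, 1_000_000.0]  # per hypothesis
ratio = [1.0, 1_000_000.0]                    # human value / chicken value
chicken_value_usd = [h / r for h, r in zip(human_value_usd, ratio)]

expected_human = sum(p * h for p, h in zip(probs, human_value_usd))
expected_chicken = sum(p * c for p, c in zip(probs, chicken_value_usd))
print(expected_human / expected_chicken)  # a single, well-defined answer (~2 here)
```

With the dollar figures fixed per hypothesis, there is only one way to average, so the conversion problem from the parent answer does not arise.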
↑ comment by Chris_Leong · 2019-03-18T21:39:00.511Z
Thanks, very interesting. I guess when I said I was imagining a situation where oranges were twice as valuable, I was imagining them as worth X utility in situation A and 2X in situation B, and suggesting we could just double the number of oranges instead. So it seems like you're talking about a slightly different situation than the one I was envisaging.
answer by waveman
if you were to discover that apples were twice as valuable, you could simply pretend that you instead received twice as many apples
No, because twice as many apples are not usually twice as valuable. This is because utility functions are not, in general, linear.
You can kind of deal with uncertainty about utility by fudging expectations about outcomes but, trust me, it is the primrose path to hell.
↑ comment by Chris Leong (chris-leong) · 2019-03-18T22:51:14.133Z
If the utility function is the square root of the number of apples, you could multiply the number of apples by four. The question is mainly about whether you can perform that kind of adaptation, rather than about anything else.
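The square-root case works out as the comment says: doubling the utility of n apples is matched by quadrupling the count, since sqrt(4n) = 2·sqrt(n). A quick check:

```python
import math

# With utility = sqrt(apples), "apples become twice as valuable" can be
# mimicked by multiplying the number of apples by four.
def utility(apples):
    return math.sqrt(apples)

apples = 9
doubled_value = 2 * utility(apples)      # value doubled: 2 * sqrt(9) = 6
quadrupled_count = utility(4 * apples)   # count quadrupled: sqrt(36) = 6

assert doubled_value == quadrupled_count
```

So for this particular utility function the outcome-rescaling trick from the question does go through, with a different multiplier than in the linear case.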