Expected utility and repeated choices

post by Marco Discendenti (marco-discendenti) · 2019-12-27T20:26:17.465Z · LW · GW · 1 comment

This is a question post.


Maybe this is a well-known kind of problem, but I am a novice and it looks puzzling to me.

Here is a lottery: I have these two choices:

- (a) win 0.5$ with probability 1;
- (b) win 1$ with probability 2/3 (and nothing with probability 1/3).

My utility function is $U(x) = \sqrt{x}$, where $x$ is the total amount of money I win.

What should I choose?

Let's compute the expected utilities:

$EU(a) = \sqrt{0.5} \approx 0.71$

$EU(b) = \frac{2}{3}\sqrt{1} + \frac{1}{3}\sqrt{0} = \frac{2}{3} \approx 0.67$

Since $EU(a) > EU(b)$, in a single game I should choose (a).

Now suppose I can play the same lottery twice, with the winnings accumulating. Playing (a) twice gives 1$ for certain, so $EU(a,a) = \sqrt{1} = 1$. Playing (b) twice gives 2$ with probability 4/9, 1$ with probability 4/9 and 0$ with probability 1/9, so:

$EU(b,b) = \frac{4}{9}\sqrt{2} + \frac{4}{9}\sqrt{1} + \frac{1}{9}\sqrt{0} = \frac{4}{9}(1+\sqrt{2})$

This last computation is equal to $\frac{4}{9}(1+\sqrt{2}) \approx 1.07$, which is greater than the utility of double (a) (i.e. 1), so in order to maximize expected utility I should actually prefer to play (b) two times rather than playing (a) two times.
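These numbers are easy to check mechanically. Here is a small Python sketch (assuming the winnings simply accumulate and $U$ is applied to the final total) that enumerates every outcome:

```python
from itertools import product
from math import sqrt

def U(x):
    return sqrt(x)

# Each game is a list of (probability, payoff) pairs.
game_a = [(1.0, 0.5)]              # (a): win 0.5$ with certainty
game_b = [(2/3, 1.0), (1/3, 0.0)]  # (b): win 1$ with probability 2/3

def expected_utility(games):
    """Exact expected utility of a sequence of games, applying U
    to the accumulated winnings at the end."""
    eu = 0.0
    for combo in product(*games):
        prob, total = 1.0, 0.0
        for p, payoff in combo:
            prob *= p
            total += payoff
        eu += prob * U(total)
    return eu

print(expected_utility([game_a]))          # ~0.71
print(expected_utility([game_b]))          # ~0.67 -> (a) wins a single game
print(expected_utility([game_a, game_a]))  # 1.0
print(expected_utility([game_b, game_b]))  # ~1.07 -> (b,b) beats (a,a)
```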

So we have this apparent inconsistency:

- for a single game, $EU(a) > EU(b)$, so I should choose (a);
- for two games, $EU(b,b) > EU(a,a)$, so I should choose (b) both times.

This result is puzzling to me because I would expect utility maximization for a single game to be enough to make the decision, regardless of what I am allowed to do in future choices. Instead, it seems that the mere possibility of playing this same lottery another time changes which choice is best in the first game. If this is the case, then utility theory seems almost useless: I would be forced to include the whole list of my possible future choices in the computation!

Am I missing something or is this an actual problem?



Answers

answer by Tom Lieberum (Frederik) · 2019-12-27T21:49:31.162Z · LW(p) · GW(p)

The intuitive result you would expect only holds for utility functions which are linear in x (I believe..), since we could then apply the utility function at each step and it would yield the same value as if it were applied to the whole amount.

Another case would be if you were to receive your utility immediately after playing each game (like in a reinforcement learning algorithm). In that case $U$ is also applied to each outcome separately and would yield the result you would expect.

Also: (b) has a better EV in terms of raw dollars, and by the law of large numbers we would expect the actual amount of money won by repeatedly playing (b) to approach that EV. So we should expect any monotonically increasing utility function to favor (b) over (a) as the number of games approaches infinity. The only reason your $U$ favors (a) over (b) for a single game is that it is risk-averse, i.e. sub-linear in x. As the number of games approaches infinity, the risk of choosing to play (b) becomes less and less, until it is essentially a choice between winning 0.5$ per game for sure and winning 0.67$ per game for sure. If you think about it in these terms, it becomes more intuitive why the behaviour you observed is reasonable.
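To see this concretely, here is a sketch under the same assumptions as the question (accumulated winnings, $U(x)=\sqrt{x}$) that computes the exact expected utility of $n$ plays of each game, using the fact that the number of wins from (b) is binomially distributed:

```python
from math import comb, sqrt

def eu_a(n):
    # n plays of (a): 0.5$ each, with certainty
    return sqrt(0.5 * n)

def eu_b(n):
    # n plays of (b): the number of 1$ wins is Binomial(n, 2/3)
    return sum(comb(n, k) * (2/3)**k * (1/3)**(n - k) * sqrt(k)
               for k in range(n + 1))

for n in (1, 2, 5, 10, 100):
    print(n, round(eu_a(n), 3), round(eu_b(n), 3))
# (a) wins only at n = 1; the gap in favor of (b) grows with n,
# approaching the sure-thing comparison sqrt(n/2) vs sqrt(2n/3).
```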

In other words: Yes! You do have to think about the number of games you play if your utility function is not linear (or you have a strong discount factor).

comment by Marco Discendenti (marco-discendenti) · 2019-12-28T07:51:19.112Z · LW(p) · GW(p)

Thank you for your insights! You say: "Yes! You do have to think about the number of games you play if your utility function is not linear."

Let's consider the case of rational agents acting in a temporal framework where they face daily decisions. If they need to consider all their possible future choices in order to decide on a single present choice, then it seems they are completely unable to make any decision at all (the computation seems almost never-ending), and this principle of expected utility maximization would turn out to be useless. How do we make rational decisions then?

Replies from: Frederik
comment by Tom Lieberum (Frederik) · 2019-12-28T08:20:01.609Z · LW(p) · GW(p)

Well, if you assume these agents do not employ time-discounting, then you indeed cannot compare trajectories, since all of them might have infinite utility (and be computationally intractable, as you say) if they don't terminate.

We do run into the same problem if we assume realistic action spaces, i.e. consider all the things we could possibly do, as there are too many even for a single time step.

RL algorithms "solve" this by working with constrained action spaces and discounting future utility, and also by often having terminating trajectories. Humans also work with (highly) constrained action spaces and have strong time discounting [citation needed], and every model of a rational human should take that into account.
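As a minimal illustration of the discounting part (the constant reward stream and $\gamma = 0.9$ here are made up for the example):

```python
def discounted_return(rewards, gamma=0.9):
    """Standard discounted RL objective: sum over t of gamma^t * reward_t."""
    return sum(gamma**t * r for t, r in enumerate(rewards))

# Discounting keeps long reward streams finite: a constant reward of 1
# forever is worth 1 / (1 - gamma), i.e. 10 here.
print(discounted_return([1.0] * 1000))  # ~10.0
```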

I admit those points are more like hacks we've come up with for practical situations, but I suppose the computational intractability is a reason why we can't already have all the nice things ;-)

answer by Evan Ward · 2019-12-29T23:24:08.572Z · LW(p) · GW(p)

<Tried to retract this comment since I no longer agree with it, but it doesn't seem to be working>

comment by Evan Ward · 2019-12-29T23:35:03.299Z · LW(p) · GW(p)

To maximize utility when you can play any number N of games, I believe you just need to calculate the EV (not EU) of playing every possible strategy. Then you pass all those values through your U function and go with the strategy associated with the highest utility.
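Read literally, the proposal might look like the sketch below (assuming a "strategy" is just the number of games, out of N, in which you play (b)). Note that since U is monotonically increasing, applying it to the EVs selects the same strategy as maximizing EV directly:

```python
from math import sqrt

N = 2  # total number of games

def ev(k):
    """Expected dollar winnings when playing (b) k times and (a) N - k times."""
    return k * (2/3) + (N - k) * 0.5

# Pass every strategy's EV through U and pick the best one.
best_k = max(range(N + 1), key=lambda k: sqrt(ev(k)))
print(best_k)  # 2 -> always play (b), which has the higher EV per game
```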

1 comment


comment by philh · 2023-12-31T09:12:08.532Z · LW(p) · GW(p)

There's an unstated assumption here that you start with $0. Suppose instead you start with $0.5: then $U(0.5+0.5)=1$, while $\frac{2}{3}\sqrt{0.5+1}+\frac{1}{3}\sqrt{0.5}\approx 1.05$. So if you play game (a) first, you'd then prefer to play game (b) second.

But this doesn't fully resolve the question, because you'd still prefer (b, b) over (a, b).
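Numerically, with the same $U(x)=\sqrt{x}$ and games as in the question (a sketch with a starting-wealth parameter added):

```python
from itertools import product
from math import sqrt

game_a = [(1.0, 0.5)]
game_b = [(2/3, 1.0), (1/3, 0.0)]

def eu(start, games):
    """Expected utility of final wealth after a sequence of games."""
    total = 0.0
    for combo in product(*games):
        prob, wealth = 1.0, start
        for p, payoff in combo:
            prob *= p
            wealth += payoff
        total += prob * sqrt(wealth)
    return total

print(eu(0.5, [game_a]))          # 1.0   -- (a) when already holding $0.5
print(eu(0.5, [game_b]))          # ~1.05 -- (b) is now the better second game
print(eu(0.0, [game_a, game_b]))  # ~1.05 -- (a, b) from scratch
print(eu(0.0, [game_b, game_b]))  # ~1.07 -- but (b, b) still beats (a, b)
```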