The Allais Paradox and the Dilemma of Utility vs. Certainty
post by Peter Wildeford (peter_hurford) · 2011-08-04T00:50:47.616Z · LW · GW · Legacy · 30 comments
Related to: The Allais Paradox, Zut Allais, Allais Malaise, and Pascal's Mugging
You've probably heard of the Allais Paradox before: you choose one of the two options from each set:
Set One:
1. $24000, with certainty.
2. 97% chance of $27000, 3% chance of nothing.
Set Two:
1. 34% chance of $24000, 66% chance of nothing.
2. 33% chance of $27000, 67% chance of nothing.
The reason this is called a "paradox" is that most people choose 1 from set one and choose 2 from set two, despite set two being the same as a 34% chance of getting to choose from set one (34% of certainty is 34%, and 34% of a 97% chance is about 33%).
U(Set One, Choice 1) = 1.00 * U($24000) = 24000
U(Set One, Choice 2) = 0.97 * U($27000) = 26190
U(Set Two, Choice 1) = 0.34 * U($24000) = 8160
U(Set Two, Choice 2) = 0.33 * U($27000) = 8910
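A minimal sketch of these calculations in Python, under the simplifying assumption that utility is linear in dollars (so "utility" here is really just expected payoff):

```python
# Expected payoff of each choice, assuming utility is linear in dollars.
def expected_payoff(lottery):
    """lottery: list of (probability, dollar payoff) pairs."""
    return sum(p * x for p, x in lottery)

sets = {
    "Set One": {"Choice 1": [(1.00, 24000)], "Choice 2": [(0.97, 27000), (0.03, 0)]},
    "Set Two": {"Choice 1": [(0.34, 24000), (0.66, 0)], "Choice 2": [(0.33, 27000), (0.67, 0)]},
}

for set_name, choices in sets.items():
    for choice_name, lottery in choices.items():
        print(f"{set_name}, {choice_name}: {expected_payoff(lottery):.0f}")
# Set One: 24000 vs 26190; Set Two: 8160 vs 8910 -- choice 2 "wins" both times.
```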
The Problem With "It is Perfectly Rational to Bet on Certainty"
- $24000, with certainty
- 99.99% chance of $24 million, 0.01% chance of nothing.
The Problem With "People Are Silly"
- $24000, with certainty
- 0.0001% chance of $27 billion, 99.9999% chance of nothing.
When we go solely by the expected utility calculations we get:
U(Set Four, Choice 1) = 1.00 * U($24000) = 24000
U(Set Four, Choice 2) = 0.000001 * U($27000000000) = 27000
So here's the real dilemma: suppose you have to pay $10000 to play the game. The expected utility calculations now say choice 1 yields $14000 and choice 2 yields $17000.
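A sketch of that arithmetic, still under the same linear-utility assumption:

```python
# Net expected payoff for Set Four with a $10000 entry fee,
# still assuming utility is linear in dollars.
fee = 10_000
choice_1 = 1.0 * 24_000 - fee           # 24000 - 10000 = 14000
choice_2 = 1e-6 * 27_000_000_000 - fee  # 27000 - 10000 = 17000
print(choice_1, choice_2)               # 14000.0 17000.0
```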
And if your answer is that your utility for money is not linear, check to see if that's your real rejection. What would you do if you were going to donate the money? What would you do if you were in the least convenient possible world where your utility function for money is linear?
30 comments
comment by CarlShulman · 2011-08-04T01:22:03.014Z · LW(p) · GW(p)
And if your answer is that your utility for money is not linear
This is a very, very, very safe assumption when talking about $27 billion.
What would you do if you were in the least convenient possible world where your utility function for money is linear?
Then I would have radically different intuitions and responses to such tradeoffs and the answer would be obvious. This is like asking:
"Would you eat cow manure? No? Well what about in the least convenient possible world where eating cow manure is your sole and ultimate desire?"
Replies from: peter_hurford
↑ comment by Peter Wildeford (peter_hurford) · 2011-08-04T01:32:16.211Z · LW(p) · GW(p)
What if the problem were phrased like this?
Set Four:
1.) Save 24000 lives, with certainty
2.) 0.0001% chance of saving 27 billion lives, 99.9999% chance of saving no lives.
Replies from: cousin_it, CronoDAS, Bongo, None
↑ comment by cousin_it · 2011-08-04T11:43:05.393Z · LW(p) · GW(p)
It's not obvious that our utility for lives saved is linear, either. For example, I would confidently choose killing 50% of the world's population over killing everyone with 50% probability, because in the former case humanity is likely to recover.
That said, it seems to be close enough to linear when the numbers are sufficiently small, and I'm ready to accept the conclusion that shutting up and multiplying is better than following my unexamined intuitions.
↑ comment by CronoDAS · 2011-08-04T03:56:07.985Z · LW(p) · GW(p)
I'd need to know what the total human population is before making this decision...
Replies from: peter_hurford
↑ comment by Peter Wildeford (peter_hurford) · 2011-08-04T03:58:43.777Z · LW(p) · GW(p)
For the purposes of this obscure hypothetical, let's say the total human population is arbitrarily 40 billion.
↑ comment by Bongo · 2011-08-04T18:33:41.702Z · LW(p) · GW(p)
You mean this?:
1.) 26986000 people die, with certainty.
2.) 0.0001% chance that nobody dies; 99.9999% chance that 27000000 people die.
And of course the answer is obvious. Given a population of 40 billion, you'd have to be a monster to not pick 2. :)
↑ comment by [deleted] · 2011-08-04T22:02:09.966Z · LW(p) · GW(p)
In this case I am much less certain of my answer, but I'm leaning toward 2.
On the other hand, in the $ question, I am quite certain that I would rather have $24000. This makes me quite confident that nonlinear utility of money is my true rejection, thankyouverymuch.
comment by Manfred · 2011-08-04T01:19:25.642Z · LW(p) · GW(p)
What would you do if you were in the least convenient possible world where your utility function for money is linear?
This is a problem, because my intuitions don't really listen to hypotheticals. So basically you're assigning my intuitions one problem (nonlinear utility function) and the rest of me another problem, which makes conflict between my intuitions and the math uninformative.
comment by Bongo · 2011-08-04T09:36:14.601Z · LW(p) · GW(p)
Reminder: the Allais Paradox is not that people prefer 1A>1B, it's that people prefer 1A>1B and 2B>2A. If you prefer 1A>1B and 2A>2B, it could be because you have non-linear utility for money, which is perfectly reasonable and non-paradoxical. Neither does "Shut up and multiply" have anything to do with linear utility functions for money.
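A small sketch of this point, using an arbitrary concave utility U(x) = ln(x + 1) (purely illustrative, nobody's actual utility function): such a function prefers 1A to 1B and 2A to 2B, which is perfectly consistent; what no expected-utility maximizer can do is prefer 1A and 2B.

```python
import math

def U(x):
    # An arbitrary concave (risk-averse) utility function, normalized so U(0) = 0.
    return math.log(x + 1)

def EU(lottery):
    return sum(p * U(x) for p, x in lottery)

choice_1A = [(1.00, 24000)]
choice_1B = [(0.97, 27000), (0.03, 0)]
choice_2A = [(0.34, 24000), (0.66, 0)]
choice_2B = [(0.33, 27000), (0.67, 0)]

print(EU(choice_1A) > EU(choice_1B))  # True: prefers 1A
print(EU(choice_2A) > EU(choice_2B))  # True: prefers 2A, consistently
# Since U(0) = 0, EU(2A) is exactly 0.34 * EU(1A) and EU(2B) is roughly
# 0.34 * EU(1B), so any expected-utility maximizer ranks both sets the same way.
```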
Replies from: peter_hurford
↑ comment by Peter Wildeford (peter_hurford) · 2011-08-04T14:47:48.201Z · LW(p) · GW(p)
You're right and I think I touched on that a bit -- people seem to see a larger difference between 100% and 99% than between 67% and 66%. Maybe I didn't touch on that enough, though.
comment by handoflixue · 2011-08-04T23:37:42.329Z · LW(p) · GW(p)
Just by observation, it seems that 100% probability simply tends to be weighed slightly more heavily - say, an extra 20%. I'd expect that for most people, there's a point where they'd take the 99% over the 100%.
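A crude sketch of that observation (the 20% figure is just the example number above, not a measured value): boosting any guaranteed option by 20% while leaving payoffs otherwise linear reproduces the 1A-and-2B pattern.

```python
# A certainty-premium model: a guaranteed outcome gets its value boosted by 20%.
CERTAINTY_BONUS = 1.2

def weighted_value(lottery):
    expected = sum(p * x for p, x in lottery)
    is_certain = any(p == 1.0 for p, x in lottery)
    return expected * CERTAINTY_BONUS if is_certain else expected

print(weighted_value([(1.00, 24000)]))             # 28800 > 26190, so take 1A
print(weighted_value([(0.97, 27000), (0.03, 0)]))  # 26190
print(weighted_value([(0.34, 24000), (0.66, 0)]))  # 8160 < 8910, so take 2B
print(weighted_value([(0.33, 27000), (0.67, 0)]))  # 8910
```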
Sacrificing a guaranteed thing for an uncertain thing also has a different psychological weight, since if you lose, you now know you're responsible for that loss - whereas with the 66% vs 67%, you can excuse it as "Well, I probably would have lost anyway". This one is easily resolved by just modifying the problem so that you know what the result was, and thus if it came up 67 you know it's your own fault.
100% certainty also has certain magical mathematical properties in Bayesian reasoning - it means there's absolutely no possible way to update to anything less than 100%, whereas a 99% could later get updated by other evidence. And on the flip side of the coin, it requires infinite evidence to establish 100%, so it shouldn't really exist to begin with.
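A toy sketch of that property, using Bayes' rule in odds form (the numbers are arbitrary):

```python
def bayes_update(prior, likelihood_ratio):
    # Posterior odds = prior odds * likelihood ratio.
    if prior == 1.0:
        return 1.0  # Infinite prior odds: no finite amount of evidence can budge it.
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

print(bayes_update(0.99, 1 / 1000))  # ~0.09: a 99% prior moves a lot on strong counter-evidence
print(bayes_update(1.00, 1 / 1000))  # 1.0: a 100% prior never moves, no matter the evidence
```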
The problem with set four is that money really, seriously, does not scale at those levels, and my neurology can't really comprehend what "a million times the utility of $24K" would mean. If I ask myself "what is the smallest thing I would sacrifice $24K for a one-in-a-million chance at it", then I'll either get an answer, assign it that utility value, and take the bet, or find out that my neurology is incapable of evaluating utility on that scale. Either way it breaks the question. (For me, I'd sacrifice $24K for a one-in-a-million chance at a Friendly Singularity that leads to a proper Fun Eutopia)
comment by Bongo · 2011-08-04T09:43:14.593Z · LW(p) · GW(p)
The expected utility calculations now say choice 1 yields $14000 and choice 2 yields $17000.
The expected payoff calculations say that. Expected utility calculations say nothing since you haven't specified a utility function. Neither can you say that choice 2 must be better because of the fact that for any reasonable utility function U($14k)<U($17k), because the utility of the expected payoff is not equal to the expected utility.
EDIT: pretty much every occurrence of "expected utility" in this post should be replaced with "expected payoff".
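A quick illustration of that last distinction, with an arbitrary concave utility (square root, chosen purely for illustration):

```python
import math

U = math.sqrt  # an arbitrary concave utility function, for illustration only
lottery = [(0.000001, 27_000_000_000), (0.999999, 0)]

expected_payoff = sum(p * x for p, x in lottery)      # 27000.0
utility_of_expected_payoff = U(expected_payoff)       # ~164.3
expected_utility = sum(p * U(x) for p, x in lottery)  # ~0.164

print(expected_payoff, utility_of_expected_payoff, expected_utility)
# U(E[payoff]) and E[U(payoff)] are wildly different numbers.
```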
Replies from: peter_hurford
↑ comment by Peter Wildeford (peter_hurford) · 2011-08-04T14:46:30.890Z · LW(p) · GW(p)
You're right, but I was looking at the question in terms of the (bad) assumption of linear utility for money.
comment by Pavitra · 2011-08-04T19:39:20.108Z · LW(p) · GW(p)
I think that my decision on sets three and four is almost entirely determined by how much faith I have in the fairness of the random number generator. I've seen it suggested on LW before, and I think it's a good model, that the attractiveness of "certainty" reflects disbelief in the stated odds.
In three-card monte, your chances of picking the right card are not one in three.
Replies from: peter_hurford
↑ comment by Peter Wildeford (peter_hurford) · 2011-08-04T19:52:25.780Z · LW(p) · GW(p)
Is this really the only motivator, though? What stated percentage would you need it to say to think you've got a "real" 50%?
Replies from: Pavitra
↑ comment by Pavitra · 2011-08-04T20:18:54.198Z · LW(p) · GW(p)
Depends on circumstance. If I can verify the RNG directly, pretty close to 50% plus transaction costs. If I think I have an opportunity to punish obvious defection, 100% (implying a non-stochastic outcome rule). If I have no recourse in case of defection, I would not pay anything to play regardless of stated odds.
comment by CronoDAS · 2011-08-04T04:50:16.923Z · LW(p) · GW(p)
Random musings:
I do seem to have some level of risk aversion when it comes to making these kinds of choices, but it's not an extremely large level. The risk aversion seems to kick in most strongly when dealing with small probabilities of high payoffs. (In the "Set One" choice, I'd probably accept the 3% risk, though.)
If you had a choice between "$24000 with certainty" and "90% chance of $X", is there really no value for X that would make you change your mind?
There are plenty of values of X for which I would change my mind in this case. The expected value of choosing X is X * 0.9. 24000 / 0.9 = 26666 + 1/3, so I'd take the riskier option if the payoff was $27,000 or above. (Why $27,000? No particular reason other than it's convenient to round to.)
If you had a choice between "$24000 with certainty" and "X% chance of $24001", what is the smallest value of X that would make you switch?
The value of X for which the expected utility is equal to $24,000 is 24000 / 24001 = 1 - 1/24001 = 0.999958... which is very close to 1. Possessing mere bounded rationality, I might as well round it up to 1 and say that betting $24,000 against $1, regardless of the offered odds, probably isn't worth the time to set up and resolve the bet.
Now let's look at set 4:
Choice 1: $24,000
Choice 2: A one in a million chance of $27 billion
Well... this is where my risk aversion kicks in. The decision-making heuristic that gets invoked is "events with odds of one in a million don't happen to me", the probability gets rounded down to zero, and I take the $24,000 and double my current net worth instead of taking the lottery ticket. I don't know if this makes me silly or not.
Replies from: peter_hurford
↑ comment by Peter Wildeford (peter_hurford) · 2011-08-04T14:45:26.332Z · LW(p) · GW(p)
This is very close to how I feel about it -- I'm really tempted to take Set Four, Choice 1 on the assumption that "events with odds of one in a million don't happen to me" too, but I'm not sure if that's just pure scope insensitivity or an actually rational strategy.
comment by DanielVarga · 2011-08-04T14:30:49.034Z · LW(p) · GW(p)
Personally, I would take choice one in both sets. But I think loss aversion trivially explains the paradox. In set one choice two outcome two, I would feel like a big loser. In set two choice two outcome two, not really.
Just imagine being sad in a room of 97 happy and 2 other sad people (set 1 choice 2 outcome 2), wishing you were in another room full of happy people. Set 2 choice 2 does not have this repulsiveness, the two rooms (choices) are very similar.
comment by Hyena · 2011-08-04T11:50:48.191Z · LW(p) · GW(p)
I think socially embedding the decision would actually help us understand the issue.
Say that people were going door-to-door making this offer. Which would you choose? Before you answer, consider this: everyone you know and everyone you will ever meet was given the same choice. Are you willing to be the one person in the room who "missed out on this golden opportunity"?
You probably feel uncomfortable, you're probably already rubbing the Bayesian keys in your pocket to make your escape from the question. Because you know, even if you know you're right, that this won't look good. Talking about bias won't help you, you have only three seconds to make a reply, just not enough time.
I think this is why even perfectly good, stone-cold rationalists will have trouble with the Allais Paradox. Part of you is making this social calculation as it goes.
But this might also make it meta rational: if you can't deny the offer was made, it might be better to take the certain route if your social circle is more likely to respect you for it. The $3,000 might not be worth as much as the social reward.
Replies from: MixedNuts, handoflixue
↑ comment by MixedNuts · 2011-08-04T12:55:09.387Z · LW(p) · GW(p)
Well yeah, but it's a different question then. If I suggest you should fast for three days so the Sun God will make your crops flourish (or give you a raise, whatever's applicable), you're going to refuse. If you know your peers will stone you if you don't fast, you're going to accept, even though you still don't believe in the Sun God.
Replies from: Hyena
↑ comment by Hyena · 2011-08-04T15:37:46.108Z · LW(p) · GW(p)
It is a different question, which is the main feature of it. The problem seems to be that the Allais Paradox bothers people. By changing the question we can often get more traction than by throwing ourselves relentlessly at something we're having difficulty accepting.
↑ comment by handoflixue · 2011-08-04T23:41:00.086Z · LW(p) · GW(p)
In such social situations, you should choose 1A and 2A, and have a consistent preference for certainty; there's nothing irrational about a preference for certainty. The irrationality is choosing 1A and 2B.
Replies from: Hyena
↑ comment by Hyena · 2011-08-05T03:58:05.537Z · LW(p) · GW(p)
But when you reframe it socially, taking 1A and 2B becomes rational: under 1A you don't lose socially, under 2B you gain more money but will still have defenders at the party. All that matters in the social situation is whether you'll meet the defender threshold.
Replies from: handoflixue
↑ comment by handoflixue · 2011-08-05T18:58:19.272Z · LW(p) · GW(p)
Depends on whether it's revealed that you lost because of a bad decision or not - if there were public lists of people who took 2B and rolled a 67, thus forfeiting all their winnings, then I think you'd be right back in the same situation. If it's totally unknown then, yeah, that's the same reasoning I used to take 1A and 2B internally - 1B doesn't give me a convenient excuse to say the loss wasn't really my fault, whereas with 2B I can rationalize that I was going to lose anyways and it therefore doesn't feel as bad.
comment by rwallace · 2011-08-04T08:24:28.706Z · LW(p) · GW(p)
The paradox is adequately solved by noting the difference between claimed and actual probabilities. In other words, suppose an agent promises to give you money with "97%" likelihood; the real probability is the claimed likelihood, multiplied by the probability that the agent won't defect on the deal, multiplied by the probability that nothing else will go wrong.
Admittedly, claimed certainty isn't actual certainty either, but in practice "100%" tends to be much closer to 100% than "97%" to 97%.
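A rough sketch of that reasoning with made-up reliability numbers (nothing here comes from the comment itself):

```python
# Made-up discounts: how likely the offer actually pays out as claimed.
reliability_of_claimed_100_percent = 0.99  # "certain" offers rarely fall through
reliability_of_claimed_97_percent = 0.90   # stated gambles leave more room to weasel out

effective_p_certain = 1.00 * reliability_of_claimed_100_percent  # 0.99
effective_p_gamble = 0.97 * reliability_of_claimed_97_percent    # 0.873

print(24000 * effective_p_certain, 27000 * effective_p_gamble)
# 23760.0 vs 23571.0: with enough distrust of the stated odds, the "sure thing" wins.
```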
Replies from: peter_hurford
↑ comment by Peter Wildeford (peter_hurford) · 2011-08-04T19:51:15.878Z · LW(p) · GW(p)
Could you elaborate? I don't see how it solves the paradox. What percentage chance would you need to reasonably approximate a "real" 97%?
comment by AlexMennen · 2011-08-04T17:11:23.776Z · LW(p) · GW(p)
And if your answer is that your utility for money is not linear, check to see if that's your real rejection. What would you do if you were going to donate the money? What would you do if you were in the least convenient possible world where your utility function for money is linear?
I am fairly confident that is my true rejection, considering that my utility is not even remotely close to linear with money on those scales. My intuitions regarding sets one and two do demonstrate certainty bias, but I can acknowledge it as irrational. I give my intuitions a rationality stamp of approval for their successful analysis of set four. The most similar mind to mine that has linear utility with money is not very similar to me at all (I'd imagine it bears more resemblance to Clippy), so I won't speak for it as "I", but I assume that it would take option 2.
Edit: It is conceivable that I could find myself in a situation in which I had a better use for a 10^-6 chance of getting $27 billion than a guaranteed $24000. If I was in such a situation and realized it, I would choose option 2.
comment by RobertLumley · 2011-08-04T01:11:42.873Z · LW(p) · GW(p)
However, relying solely on expected utility seems to make you vulnerable to a dilemma very similar to Pascal's Mugging.
I disagree. It took me a long time to figure out why I wouldn't take Pascal's Mugging, but I eventually did: Rule utilitarianism. If you generalize the rule "pay off Pascal's mugger" it becomes clear that anyone who recognizes that you operate this way will be able to abuse you - clearly a rule where you accept the mugging has negative consequences for society, because if everyone operated that way, it would lead to a complete collapse of society. Simply put, a society where people accept the mugging is not sustainable.
But this situation is different. Generalizing the rule will not lead to the destruction of societal order, and I think the rational choice is to take the 0.0001% (or whatever it was) chance of $27 billion.
Replies from: peter_hurford
↑ comment by Peter Wildeford (peter_hurford) · 2011-08-04T02:58:12.692Z · LW(p) · GW(p)
If you think "don't pay off Pascal's Muggers" is a rule to follow in rule utilitarianism, then you think it's a rule that maximizes utility, and therefore you've already found a reason why paying off Pascal's Muggers is bad utility before even considering rule utilitarianism. Therefore, I don't think "it's a rule in my rule utilitarianism to not do this" is your true rejection to Pascal's Mugging.