The Allais Paradox

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-19T03:05:32.000Z · LW · GW · Legacy · 145 comments

Choose between the following two options:

1A.  $24,000, with certainty.
1B.  33/34 chance of winning $27,000, and 1/34 chance of winning nothing.

Which seems more intuitively appealing?  And which one would you choose in real life?

Now which of these two options would you intuitively prefer, and which would you choose in real life?

2A. 34% chance of winning $24,000, and 66% chance of winning nothing.
2B. 33% chance of winning $27,000, and 67% chance of winning nothing.

The Allais Paradox - as Allais called it, though it's not really a paradox - was one of the first conflicts between decision theory and human reasoning to be experimentally exposed, in 1953.  I've modified it slightly for ease of math, but the essential problem is the same:  Most people prefer 1A > 1B, and most people prefer 2B > 2A.  Indeed, in within-subject comparisons, a majority of subjects express both preferences simultaneously.

This is a problem because the 2s are equal to a one-third chance of playing the 1s.  That is, 2A is equivalent to playing gamble 1A with 34% probability, and 2B is equivalent to playing 1B with 34% probability.
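A quick numerical check of that equivalence (a minimal Python sketch using only the probabilities stated above):

```python
# 2A = "34% chance of playing 1A, otherwise nothing"; 2B = "34% chance of playing 1B".
p_play = 0.34

p_24k = p_play * 1.0          # chance of winning $24,000 this way -> 0.34, matching 2A
p_27k = p_play * (33 / 34)    # chance of winning $27,000 this way -> 0.33, matching 2B

print(p_24k, round(p_27k, 2))  # 0.34 0.33
```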

Among the axioms used to prove that "consistent" decisionmakers can be viewed as maximizing expected utility, is the Axiom of Independence:  If X is strictly preferred to Y, then a probability P of X and (1 - P) of Z should be strictly preferred to P chance of Y and (1 - P) chance of Z.

All the axioms are consequences, as well as antecedents, of a consistent utility function.  So it must be possible to prove that the experimental subjects above can't have a consistent utility function over outcomes.  And indeed, you can't simultaneously have:

  • U($24,000) > 33/34 U($27,000) + 1/34 U($0)
  • 0.34 U($24,000) + 0.66 U($0) < 0.33 U($27,000) + 0.67 U($0)

These two equations are algebraically inconsistent, regardless of U, so the Allais Paradox has nothing to do with the diminishing marginal utility of money.
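To spell out the inconsistency: multiply the first inequality through by 0.34 and add 0.66 U($0) to both sides, and the result is exactly the reverse of the second inequality. A brute-force sketch in Python, sampling arbitrary utility assignments and finding none that satisfies both preferences:

```python
import random

def satisfies_both(u0, u24, u27):
    """Check whether one utility assignment supports both Allais preferences."""
    pref_1A = u24 > (33/34) * u27 + (1/34) * u0                  # 1A > 1B
    pref_2B = 0.34 * u24 + 0.66 * u0 < 0.33 * u27 + 0.67 * u0    # 2B > 2A
    return pref_1A and pref_2B

# Sample arbitrary utility values for $0, $24,000, and $27,000; no assignment
# can satisfy both strict inequalities at once.
hits = sum(
    satisfies_both(random.uniform(-100, 100),
                   random.uniform(-100, 100),
                   random.uniform(-100, 100))
    for _ in range(100_000)
)
print(hits)   # 0
```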

Maurice Allais initially defended the revealed preferences of the experimental subjects - he saw the experiment as exposing a flaw in the conventional ideas of utility, rather than exposing a flaw in human psychology.  This was 1953, after all, and the heuristics-and-biases movement wouldn't really get started for another two decades.  Allais thought his experiment just showed that the Axiom of Independence clearly wasn't a good idea in real life.

(How naive, how foolish, how simplistic is Bayesian decision theory...)

Surely, the certainty of having $24,000 should count for something.  You can feel the difference, right?  The solid reassurance?

(I'm starting to think of this as "naive philosophical realism" - supposing that our intuitions directly expose truths about which strategies are wiser, as though it was a directly perceived fact that "1A is superior to 1B".  Intuitions directly expose truths about human cognitive functions, and only indirectly expose (after we reflect on the cognitive functions themselves) truths about rationality.)

"But come now," you say, "is it really such a terrible thing, to depart from Bayesian beauty?"  Okay, so the subjects didn't follow the neat little "independence axiom" espoused by the likes of von Neumann and Morgenstern.  Yet who says that things must be neat and tidy?

Why fret about elegance, if it makes us take risks we don't want?  Expected utility tells us that we ought to assign some kind of number to an outcome, and then multiply that value by the outcome's probability, add them up, etc.  Okay, but why do we have to do that?  Why not make up more palatable rules instead?

There is always a price for leaving the Bayesian Way.  That's what coherence and uniqueness theorems are all about.

In this case, if an agent prefers 1A > 1B, and 2B > 2A, it introduces a form of preference reversal - a dynamic inconsistency in the agent's planning.  You become a money pump.

Suppose that at 12:00PM I roll a hundred-sided die.  If the die shows a number greater than 34, the game terminates.  Otherwise, at 12:05PM I consult a switch with two settings, A and B.  If the setting is A, I pay you $24,000.  If the setting is B, I roll a 34-sided die and pay you $27,000 unless the die shows "34", in which case I pay you nothing.

Let's say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference.  The switch starts in state A.  Before 12:00PM, you pay me a penny to throw the switch to B.  The die comes up 12.  After 12:00PM and before 12:05PM, you pay me a penny to throw the switch to A.

I have taken your two cents on the subject.
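A toy simulation of this pump, following the rules above (a minimal sketch, with fees and payouts tracked in cents):

```python
import random

def play_once():
    """One run of the 12:00/12:05 game against an agent with the stated preferences."""
    fees_cents = 0

    # Before 12:00 the setup looks like choice 2, and the agent prefers 2B,
    # so it pays a penny to move the switch from A to B.
    fees_cents += 1

    if random.randint(1, 100) > 34:           # hundred-sided die: game terminates
        return -fees_cents                     # out one cent, no payout

    # The die came up 34 or less, so the agent now faces choice 1 directly,
    # prefers 1A, and pays another penny to move the switch back to A.
    fees_cents += 1
    payout_cents = 24_000 * 100                # switch is on A: the certain $24,000
    return payout_cents - fees_cents           # the gamble it started with, minus two cents

print([play_once() for _ in range(5)])
```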

If you indulge your intuitions, and dismiss mere elegance as a pointless obsession with neatness, then don't be surprised when your pennies get taken from you...

(I think the same failure to proportionally devalue the emotional impact of small probabilities is responsible for the lottery.)


Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: Critique des postulats et axiomes de l'école américaine.  Econometrica, 21, 503-46.

Kahneman, D. and Tversky, A. (1979). Prospect Theory: An Analysis of Decision Under Risk. Econometrica, 47, 263-92.

145 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Doug_S. · 2008-01-19T03:25:27.000Z · LW(p) · GW(p)

For $24,000, you can have my two cents. ;)

comment by RobinHanson · 2008-01-19T03:37:26.000Z · LW(p) · GW(p)

Yes, philosophers, and others, do often too easily accept the advice of strong intuitions, forgetting that strong intuitions often conflict in non-obvious ways.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2012-02-27T05:17:40.568Z · LW(p) · GW(p)

Yes, exactly. For instance, many philosophers invoke Parfit's "repugnant conclusion" as a decisive objection to certain forms of consequentialism, overlooking the fact that all moral theories, when applied to scenarios involving different numbers of people, have implications that are arguably similarly repugnant.

comment by Joe_Petviashvili · 2008-01-19T03:39:10.000Z · LW(p) · GW(p)

The idea is that $ amount equals your utility, while in reality the history of how you got this amount also matters (regret, emotions, etc.).

There's no paradox here - your utility expressed in $ just doesn't match the utility of the subjects. As for the money pump - it's a win-win situation: you earn money, and the subjects earn good feelings.

comment by Nick_Tarleton · 2008-01-19T03:45:46.000Z · LW(p) · GW(p)

If I knew the offer wouldn't be repeated, I might take 1A because I'd really rather not have to explain to people how I lost $24,000 on a gamble.

Replies from: faul_sname, Gunnar_Zarncke, DPiepgrass
comment by faul_sname · 2011-12-10T07:32:20.112Z · LW(p) · GW(p)

This was my thought exactly. If I was given the option to keep the result private if I lost, 1A would be a distinctly preferable choice. If I had a 1/34 chance of having to explain how I "lost" $24,000 vs an average loss of $2,200, I might well take choice 1B (at a later time in my life, when I could afford to lose $2,200, and had significant financial risk from being perceived as a risk-taker with money).

comment by Gunnar_Zarncke · 2014-12-16T08:46:12.108Z · LW(p) · GW(p)

I think this kind of 'side channel' loss information is what makes your intuition value 1A > 1B. In a way, the implicit assumptions in the offer are what cause the trouble. Naive subjects are naive only about the pure math, not about real life.

Replies from: bq5020080916
comment by bq5020080916 · 2021-11-09T07:17:33.407Z · LW(p) · GW(p)

Yeah, but then you run into the problem of your intuition overweighing the social costs posed by this choice.

comment by DPiepgrass · 2020-06-01T05:08:44.495Z · LW(p) · GW(p)

I would further predict that if someone is wealthy enough, or if the winning amount is small, e.g. $24 and $27, they are much more likely to choose 1B over 1A - because of how much less emotionally devastating it would be to lose, or rather, how much less devastating the participant imagines losing to be.

I decided to Google for literature on this and found this analysis. It takes some effort to decode, but if I understand Table 1 correctly, (1) experiments testing the Allais Paradox have results that often seem inconsistent with each other, and strange at first glance (roughly speaking, more people choose 1B & 2A than you'd think), which reflects a bunch of underlying complexity described in section 3; (2) to the extent there is a pattern, I was right about the smaller bets; and (3) the decision to maximize expected financial gain (1B & 2B ≃ RR in Table 1) is the most popular choice in 43% of experiments.
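For reference, a quick sketch of the raw expected dollar values behind that "maximize expected financial gain" (1B & 2B ≃ RR) choice:

```python
ev_1A = 1.00 * 24_000            # $24,000
ev_1B = (33 / 34) * 27_000       # ~$26,206: higher expected dollars than 1A
ev_2A = 0.34 * 24_000            # $8,160
ev_2B = 0.33 * 27_000            # $8,910: higher expected dollars than 2A
print(round(ev_1B), ev_2A, ev_2B)
```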

comment by Nick_Tarleton · 2008-01-19T03:48:34.000Z · LW(p) · GW(p)

Actually, that makes me think of another explanation besides overreaction to small probabilities: if a person takes 1B and loses, they know they would have won if they'd chosen differently. If they take 2B and lose, they can tell themselves (and others) they probably would have lost anyway.

Replies from: ThisDan
comment by ThisDan · 2012-12-17T01:15:10.013Z · LW(p) · GW(p)

OK, that is exactly my line of thinking, and why I can't understand the broader point of this argument.

Yes I can see the statistical similarity that makes it "the same"- but the situation is totally different in that one offers "certain win or risk" and the other is "risk vs risk" with a barely noticeable difference between them.

So my decision on both questions goes like this: 1A > 1B because even if I was offered MUCH less, I'd still likely take it, deciding that I'm not greedy - free money always feels good, but giving away free money (by trying to get a bit more) always feels foolish and greedy.

2B > 2A because if the gamble played out 100 times, the average person would think the two were of equal value - unless they logged the statistics to find the slight difference. Therefore, if it takes that much attention to feel the difference, it's easy to pretend they are the same risk - but one is 11.12% more money, which is a lot easier to notice without logging statistics.

I don't see how these decisions conflict with each other.

Replies from: None
comment by [deleted] · 2015-03-16T17:26:38.909Z · LW(p) · GW(p)

I seem to agree with you, but I think how you arrived at 11.12% is wrong. Did you divide 3000/27000? You can't do that, since you won't have 27000 unless you get those 3000 dollars extra. Shouldn't you do 3000/24000 = 12.5%?

comment by Caledonian2 · 2008-01-19T03:53:17.000Z · LW(p) · GW(p)

A bird in the hand...

Certainty is a form of utility, too.

Replies from: buybuydandavis, Bugmaster
comment by buybuydandavis · 2011-10-28T00:33:48.921Z · LW(p) · GW(p)

That goes hand in hand with his comments about complexity.

The straightforward expected utility analysis doesn't include the cost of the analysis itself in the analysis, nor the increased cost that the uncertainty imposes on all subsequent analyses.

We have limited computational power for executive functions. No doubt we have utility built into us to conserve those limited resources. Most people hate uncertainty and thinking, and they hate it much more than we do. I doubt I'm the only one here who has noticed that.

comment by Bugmaster · 2011-10-28T01:23:06.325Z · LW(p) · GW(p)

For me, the choice between 1A and 1B would depend on how badly I needed the money, which is why I disagree with Eliezer when he says that "marginal utility of the money doesn't count".

For example, let's say I needed $20,000 in order to keep a roof over my head, food on my plate, and to generally survive. In this case, my penalty for failure is quite high, and IMO it would be more rational for me to take 1A. Sure, I could win more money if I picked 1B, but I could also die in that case. Thus, my utility in case of 1B would be something like

33/34 U($27,000, alive) + 1/34 U($0, dead)

and U($anything, dead) is a very negative number.

On the other hand, if I was a billionaire who makes $20,000 per second just by existing, then I would either pick 1B, or refuse to play the game altogether, because my time could be better spent on other things.

Replies from: Vaniver
comment by Vaniver · 2011-10-28T01:57:07.847Z · LW(p) · GW(p)

Reread the post; that's not the paradox.

The paradox is that, if you need the 20k to survive, then you should prefer 2A to 2B, because the extra 3k 33% of the time doesn't outweigh an additional 1% chance of dying.

If someone prefers A in both cases, or B in both cases, they can have a consistent utility function. When someone prefers A in one case, and B in the other, then they cannot have a consistent utility function.
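A minimal numeric sketch of this point, assuming a hypothetical "survival threshold" utility function that only cares about clearing $20,000:

```python
def u(dollars):
    """Hypothetical threshold utility: all that matters is clearing $20,000."""
    return 1.0 if dollars >= 20_000 else -1000.0

eu_1A = u(24_000)
eu_1B = (33/34) * u(27_000) + (1/34) * u(0)
eu_2A = 0.34 * u(24_000) + 0.66 * u(0)
eu_2B = 0.33 * u(27_000) + 0.67 * u(0)

print(eu_1A > eu_1B, eu_2A > eu_2B)   # True True: A is preferred in both choices
```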

Replies from: Bugmaster
comment by Bugmaster · 2011-10-28T02:17:54.892Z · LW(p) · GW(p)

Reread the post; that's not the paradox.

Right, I didn't mean to imply that it was. But Eliezer seemed to be saying that picking 1A is irrational in general, in addition to the paradox, which is the notion that I was disputing. It's possible that I misinterpreted him, however.

Replies from: Vaniver
comment by Vaniver · 2011-10-28T04:26:49.872Z · LW(p) · GW(p)

He makes it clearer in comments.

What Caledonian is discussing is the certainty effect- essentially, having a term in your utility function for not having to multiply probabilities to get an expected value. That's different from risk aversion, which is just a statement that the utility function is concave.

comment by Nainodelac_and_Tarleton_Nick · 2008-01-19T04:13:09.000Z · LW(p) · GW(p)

Risk and cost of capital introduce very strange twists on expected utility.

Assume that living has a greater expected utility to me than any monetary value. If I need a $20,000 operation within the next 3 hours to live, I have no other funding, and you make me offer 1, it is completely rational and unbiased to take option 1A. It is the difference between a 100% chance of living and a 97% chance of living.

If I have $1,000,000,000 in the bank and command of legal or otherwise armed forces, I may just have you killed - for I would not tolerate such frivolous philosophizing.

Replies from: andrew-jacob-sauer
comment by Andrew Jacob Sauer (andrew-jacob-sauer) · 2023-02-05T22:41:49.084Z · LW(p) · GW(p)

That's beside the point. In the first case you'd take 1A in the first game, and 2A in the 2nd game (34% chance of living is better than 33%). In the 2nd case, if you bothered to play at all, you'd probably take 1B/2B. What doesn't make sense is taking 1A and 2B. That policy is inconsistent no matter how you value different amounts of money (unless you don't care about money at all, in which case do whatever; the paradox is better illustrated with something you do care about), so things like risk, capital cost, diminishing returns etc. are beside the point.

comment by Z._M._Davis · 2008-01-19T04:29:50.000Z · LW(p) · GW(p)

I think defenses of the subjects' choices by recourse to nonmonetary values are missing the point. Anything can be rational with a sufficiently weird utility function. The question is, if subjects understood the decision theory behind the problem, would they still make the same choice? After seeing a valid argument that your preferences make you a money pump, you certainly could persist in your original judgment, by insisting that your feelings make your first judgment the right one.

But seriously?---why?

comment by peco · 2008-01-19T04:47:48.000Z · LW(p) · GW(p)

Since people only make a finite number of decisions in their lifetime, couldn't their utility function specify every decision independently? (You could have a utility function that is normal except that it says that everything you hear being called 1A is preferable to 1B, and anything you hear being called 2B is preferable to 2A. If this contradicts your normal utility function, this rule is always more important. Even if 2B leads to death, you still choose 2B.)

The utility function would be impossible to come up with in advance, but it exists.

comment by DonGeddis · 2008-01-19T05:00:39.000Z · LW(p) · GW(p)

My intuitions match the stated naive intuitions, but I reject your assertion that the pair of preferences are inconsistent with Bayesian probability theory.

You really underestimate the utility of certainty. "Nainodelac and Tarleton Nick"'s example in these comments about the operation is a perfect counter.

With a 33% vs. 34% chance, the impact on your life is about the same, so you just do the straightforward probability calculation for expected value and take the maximum.

But when offered 100% of some positive outcome, vs. some probability of getting nothing, it seems perfectly rational to prefer the guarantee. Maximizing expected dollar winnings is not necessarily the same as maximizing utility. And you're right, the issue isn't decreasing returns. But the issue is the cost of risk.

Your money pump doesn't convince me either. I'd be happy to pay the two cents, both times, and not regret the cost at the end, just as I don't regret paying for insurance even if I happen not to get sick.

comment by Roland5 · 2008-01-19T05:25:39.000Z · LW(p) · GW(p)

Let's say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference. The switch starts in state A. Before 12:00PM, you pay me a penny to throw the switch to B.

I don't understand why I would pay you a penny to throw the switch before 12:00.

comment by Constant2 · 2008-01-19T05:30:25.000Z · LW(p) · GW(p)

Since I know myself, I know what I will do after 12:00PM (pay to switch it to A), and so I resign myself to doing it immediately (i.e., leaving the switch at A) so as to save either one cent or two, depending on what happens. I will do this even if I share Don's intuition about certainty. Why pay before 12:00PM to switch it to B if I know that after 12:00PM I will pay to switch it back to A?

*[if the first die comes up 1 to 34]

comment by Brownbat (Thomas_Brownback) · 2008-01-19T06:00:24.000Z · LW(p) · GW(p)

I think I missed something on the algebraic inconsistency part...

If there is some rational independent utility to certainty, the algebraic claims should be more like this:

  • U($24,000) + U(Certainty) > 33/34 U($27,000) + 1/34 U($0)
  • 0.34 U($24,000) + 0.66 U($0) < 0.33 U($27,000) + 0.67 U($0)

This seems consistent so long as U(Certainty) > 1/34 U($27,000).

I'm not committed to the notion there is a rational independent value to certainty, I'm just not seeing how it can be dismissed with quick algebra. Maybe that wasn't your goal. Forgive me if this is my oversight.
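For illustration, a minimal numeric sketch with assumed utility values (linear in dollars, plus the proposed certainty term) under which both inequalities above hold:

```python
# Assumed, purely illustrative utility values: linear in dollars, plus a bonus
# term for certainty as the comment proposes.
u0, u24, u27, u_certainty = 0.0, 24.0, 27.0, 3.0

ineq_1 = u24 + u_certainty > (33/34) * u27 + (1/34) * u0   # 1A > 1B, with the certainty bonus
ineq_2 = 0.34 * u24 + 0.66 * u0 < 0.33 * u27 + 0.67 * u0   # 2B > 2A

print(ineq_1, ineq_2)   # True True: both preferences hold once the extra term is allowed
```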

comment by anon9 · 2008-01-19T06:21:40.000Z · LW(p) · GW(p)

This reminds me of the foolish decisions on "Deal or No Deal". People would fail to follow their own announced utility.

comment by Z._M._Davis · 2008-01-19T06:32:50.000Z · LW(p) · GW(p)

When we speak of an inherent utility of certainty, what do we mean by certainty? An actual probability of unity, or, more reasonably, something which is merely very much certain, like probability .999? If the latter, then there should exist a function expressing the "utility bonus for certainty" as a function of how certain we are. It's not immediately obvious to me how such a function should behave. If probability 0.9999 is very much more preferable to probability 0.8999 than probability 0.5 is preferable to probability 0.4, then is 0.5 very much more preferable to 0.4 than 0.2 is to 0.1?

comment by Dr._Science · 2008-01-19T06:52:41.000Z · LW(p) · GW(p)

It's rational to take the certain outcome if gambling causes psychological stress. Notwithstanding that stress is intrinsically unpleasant, it increases your risk of peptic ulcers and stroke, which could easily cancel out the expected gain.

Replies from: ricketson
comment by ricketson · 2012-01-15T19:37:24.686Z · LW(p) · GW(p)

But such psychological stress arises from your perception of reality. If it is caused by an erroneous perception of reality, then the rational thing to do is correct your perception, not take the error for granted. If you are certain that you made the right decision, then you shouldn't feel stressed when you "lose".

comment by John · 2008-01-19T07:08:52.000Z · LW(p) · GW(p)

If you crunch the numbers differently, you can come to different conclusions. For example, if I choose 1B over 1A, I have a 1 in 34 chance of getting burned. If I choose 2B over 2A, my chance of getting burned is only 1 in 100.

comment by TGGP4 · 2008-01-19T07:15:15.000Z · LW(p) · GW(p)

James D. Miller has a proposal for Lottery Tickets that Usually Pay Off.

Robin, were you thinking of a certain colleague of yours when you mentioned accepting intuition too readily?

comment by tcpkac · 2008-01-19T08:36:22.000Z · LW(p) · GW(p)

Risk aversion, and the degree to which it is felt, is a personality trait with high variance between individuals and over the lifespan. To ignore it in a utility calculation would be absurd. Maurice Allais should have listened to his namesake Alphonse Allais (no apparent relation), humorist and theoretician of the absurd, who famously remarked "La logique mène à tout à condition d'en sortir" - logic leads to everything, on condition that you get out of it.

comment by Paul_Gowder · 2008-01-19T09:20:38.000Z · LW(p) · GW(p)

I confess, the money pump thing sometimes strikes me as ... well... contrived. Yes, in theory, if one's preferences violate various rules of rationality (acyclicity being the easiest), one could conceivably be money-pumped. But, uh, it never actually happens in the real world. Our preferences, once they violate idealized axioms, lead to messes in highly unrealistic situations. Big deal.

comment by GreedyAlgorithm · 2008-01-19T11:02:34.000Z · LW(p) · GW(p)

I am intuitively certain that I'm being money-pumped all the time. And I'm very, very certain that transaction costs of many forms money-pump people left and right.

comment by Ben_Jones · 2008-01-19T11:09:51.000Z · LW(p) · GW(p)

As long as it was only one occasion, I wouldn't make the effort to cross the room for two pennies. If I'm playing the game just once, and I feel a one-off payment of 2p is effectively zero, I'll play with you, sure. £1 for a lottery ticket crosses the threshold of palpability, even playing once. I can get a newspaper for a pound. Is this irrational? I hope not.

comment by JulianMorrison · 2008-01-19T11:30:54.000Z · LW(p) · GW(p)

When I made the (predictable, wrong) choice, I wasn't using probability at all. I was using intuitive rules of thumb like: "don't gamble", "treat small differences in probability as unimportant", and "if you have to gamble against similar odds, go for the larger win".

How do you find time to use authentic probability math for all your chance-taking decisions?

Replies from: ThisDan
comment by ThisDan · 2012-12-17T01:48:30.271Z · LW(p) · GW(p)

That's exactly how I felt too.

"Don't gamble" is the key. 1A allowed me to indulge that even if I was boxed into being in the game.

So in question 2 I want to follow "don't gamble", but both options are gambling. Additionally, both gambles would feel like the same risk to most humans who didn't record statistics (beyond subconscious and normal memory-affected observations), so they could be cheaply rounded off as the same. If they are "the same" but one pays more money...

Oh, one more point: "easy come, easy go". If you can lose either way, as in question 2, you won't feel like you ever had anything. However, even before you pick 1A and they physically hand you the money, it's already yours (by virtue of the ability to choose 1A) until you choose 1B and introduce the probability that you won't be paid. I say already yours because if you are guaranteed the choice of 1A forever and unconditionally unless and until you choose 1B - that's no less "having money" than when you "have money" but it's in your pocket or in your wallet in the other room. It might not be your money anymore if you fling your wallet out the window hoping it will boomerang back (1B), but it was until you introduced that gamble rather than just choosing to clutch the wallet (1A).

I feel like I must be missing the point or something, because these seem so obviously right...

comment by Paul_Crowley2 · 2008-01-19T12:06:31.000Z · LW(p) · GW(p)

The large sums of money make a big difference here. If it were for dollars, rather than thousands of dollars, I'd do what utility theory told me to do, and if that meant I missed out on $27 due to a very unlucky chance then so be it. But I don't think I could bring myself to do the same for life-changing amounts like those set out above; I would kick myself so hard if I took the very slightly riskier bet and didn't get the money.

comment by Colin_Reid · 2008-01-19T12:14:32.000Z · LW(p) · GW(p)

My experience of watching game shows such as 'Deal or No Deal' suggests that people do not ascribe a low positive utility to winning nothing or close to nothing - they actively fear it, as if it would make their life worse than before they were selected to appear on the show. It seems this fear is in some sense inversely proportional to the 'socially expected' probability of the bad event - so if the player is aware that very few players win less than £1 on the show, they start getting very uncomfortable if there is a high chance of this happening to them, because winning less than £1 is somehow embarrassing, and winning 1p is somehow significantly worse than winning say 50p. In contrast, on game shows where there's a 'double or nothing' option at the end, it is socially accepted that there's a high chance of winning nothing, so players seem to be much more sanguine about the gamble. I think the psychology of 'face' has a lot to answer for when it comes to such decisions.

comment by Gray_Area · 2008-01-19T12:50:08.000Z · LW(p) · GW(p)

People don't maximize expectations. Expectation-maximizing organisms -- if they ever existed -- died out long before rigid spines made of vertebrae came on the scene. The reason is simple: expectation maximization is not robust (outliers in the environment can cause large behavioral changes). This is as true now as it was before evolution invented intelligence and introspection.

If people's behavior doesn't agree with the axiom system, the fault may not be with them, perhaps they know something the mathematician doesn't.

Finally, the 'money pump' argument fails because you are changing the rules of the game. The original question was, I assume, asking whether you would play the game once, whereas you would presumably iterate the money pump until the pennies turn into millions. The problem, though, is if you asked people to make the original choices a million times, they would, correctly, maximize expectations. Because when you are talking about a million tries, expectations are the appropriate framework. When you are talking about 1 try, they are not.

Replies from: ThisDan
comment by ThisDan · 2012-12-17T02:13:29.972Z · LW(p) · GW(p)

I was really confused about what point EY made that went over my head, but I think I get it now.

It totally changes the game to play it an infinite number of times rather than one go to win or lose. I made my choices based on one game, and not a hybrid between the two of them played multiple times.

If I play once, choosing 1A is just taking money that's already mine. If I play infinitely many times, 1B earns money faster because failures can be evened out.

comment by steven · 2008-01-19T14:28:10.000Z · LW(p) · GW(p)

tcpkac: no one is assuming away risk aversion. Choosing 1A and 2B is irrational regardless of your level of risk aversion.

comment by Unknown · 2008-01-19T15:52:31.000Z · LW(p) · GW(p)

Constant's response implies that if someone prefers 1A to 1B and 2B to 2A, when confronted with the money pump situation, the person will decide that after all, 1A is preferable to 1B and 2A is preferable to 2B. This is very strange but at least consistent.

comment by Nick_Tarleton · 2008-01-19T16:15:17.000Z · LW(p) · GW(p)

"Nainodelac and Tarleton Nick", why are you using my (reversed) name?

steven: not if you're nonlinearly risk averse. As many have suggested, what if you take a large one-time utility hit for taking any risk, but you're not averse beyond that?

comment by Caledonian2 · 2008-01-19T16:15:41.000Z · LW(p) · GW(p)
Choosing 1A and 2B is irrational regardless of your level of risk aversion.

No, only if the utility of avoiding risk is worth less than the money at risk. Duh.

comment by billswift · 2008-01-19T16:22:49.000Z · LW(p) · GW(p)

Your description is not a money pump. A money pump occurs when you prefer A > B and B > C and C > A. Then someone can trade you in a round robin taking a little out for themselves each cycle. I don't feel like typing in an illustration, so see Robyn Dawes, Rational Choice in an Uncertain World.
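A toy sketch of such a round-robin, with hypothetical items and a one-cent premium per trade (not from Dawes, just an illustration of the cycle):

```python
# Cyclic preferences: A > B, B > C, and C > A. Starting from C, the agent pays
# a small premium for B (preferred to C), then for A, then for C again, and so on.
offers = ["B", "A", "C"] * 3        # three full trips around the cycle
item, cash_cents = "C", 100

for offered in offers:
    item = offered                  # trade up to the currently preferred item...
    cash_cents -= 1                 # ...paying a one-cent premium each time

print(item, cash_cents)             # C 91 -- same item as at the start, nine cents poorer
```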

There is a significant difference between single and iterative situations. For a single play I would prefer 1A to 1B and 2B to 2A. If it were repeated, especially open-endedly, I would prefer 1B to 1A for its slightly greater expected payoff. This is analogous, I think, to the iterated versus one-time prisoner's dilemma; see Axelrod's Evolution of Cooperation for an interesting discussion of how they differ.

comment by Dagon · 2008-01-19T17:10:05.000Z · LW(p) · GW(p)

How trustworthy is the randomizer?

I'd pick B in both situations if it seemed likely that the offer were trustworthy. But in many cases, I'd give some chance of foul play, and it's FAR easier for an opponent to weasel out of paying if there's an apparently-random part of the wager. Someone says "I'll pay you $24k", it's reasonably clear. They say "I'll pay you $27k unless these dice roll snake eyes" and I'm going to expect much worse odds than 35/36 that I'll actually get paid.

So for 1A > 1B, this may be based on expectation of cheating. For 2A < 2B, both choices are roughly equally amenable to cheating, so you may as well maximize your expectation.

It seems likely that this kind of thinking is unconscious in most people, and therefore gets applied in situations where it's not relevant (like where you CAN actually trust the probabilities). But it's not automatically irrational.

comment by George_Weinberg2 · 2008-01-19T18:08:36.000Z · LW(p) · GW(p)

It seems to me that your argument relies on the utility of having a probability p of gaining x being equal to p times the utility of gaining x. It's not clear to me that this should be true.

The trouble with the "money pump" argument is that the choice one makes may well depend on how one got into the situation of having the choice in the first place. For example, let's assume someone prefers 2B over 2A. It could be that if he were offered choice 1 "out of the blue" he would prefer 1A over 1B, yet if it were announced in advance that he would have a 2/3 chance of getting nothing and a 1/3 chance of being offered choice 1, he would decide beforehand that B is the better choice, and he would stick with that choice even if allowed to switch. This may seem odd, but I don't see why it's logically inconsistent.

comment by Richard_Hollerith2 · 2008-01-19T18:16:54.000Z · LW(p) · GW(p)

No, only if the utility of avoiding risk is worth less than the money at risk. Duh.

Someone did not read the OP carefully enough.

Hint: re-read the definition of the Axiom of Independence.

comment by Caledonian2 · 2008-01-19T18:41:06.000Z · LW(p) · GW(p)

Someone isn't thinking carefully enough.

Hint: I did not assert that X is strictly preferred to Y.

comment by steven · 2008-01-19T19:23:01.000Z · LW(p) · GW(p)

Caledonian, Nick T: "Risk aversion" in the standard meaning is when an agent maximizes the expectation value of utility, and utility is a function of money that increases slower than linearly. When an agent doesn't maximize expected utility at all, that's something different.

comment by steven · 2008-01-19T19:29:32.000Z · LW(p) · GW(p)

Do you really want to say that it can be rational to accept a 1/3 chance of participating in a lottery, already knowing that if you got to participate you would change your mind? Risk aversion is (or at least, can be) a matter of taste, this is just a matter of not being stupid.

comment by burger_flipper2 · 2008-01-19T21:44:33.000Z · LW(p) · GW(p)

Dawes gives a very similar 2-gamble example of a money pump on pg 105 of Rational Choice.

comment by Caledonian2 · 2008-01-19T21:50:29.000Z · LW(p) · GW(p)
Caledonian, Nick T: "Risk aversion" in the standard meaning is when an agent maximizes the expectation value of utility

Oh, I agree.

I just measure utility differently than you do.

comment by steven · 2008-01-20T00:29:28.000Z · LW(p) · GW(p)

Caledonian, if utility is any function defined on amounts of money, then if you are maximizing expected utility, you cannot fall prey to the Allais paradox. You can define a utility function on gambles that is not the expected value of a utility function on amounts of money, but then that function is not expected utility, and you're outside of normal models of risk aversion, and you're violating rationality axioms like the one Eliezer gave in the OP.

comment by Caledonian2 · 2008-01-20T01:40:58.000Z · LW(p) · GW(p)
you're violating rationality axioms like the one Eliezer gave in the OP

No. Those axioms are "if => then" statements. I'm violating the "if" part.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-20T02:32:51.000Z · LW(p) · GW(p)

Nainodelac, if you prefer 1A to 1B and 2A to 2B, as you should if you need exactly $24,000 to save your life, that is a perfectly consistent preference pattern.

comment by Nick_Tarleton · 2008-01-20T03:32:28.000Z · LW(p) · GW(p)

You can define a utility function on gambles that is not the expected value of a utility function on amounts of money, but then that function is not expected utility, and you're outside of normal models of risk aversion, and you're violating rationality axioms like the one Eliezer gave in the OP.

Having a utility function determined by anything other than amounts of money is irrational? WTF?

comment by Caledonian2 · 2008-01-20T03:42:33.000Z · LW(p) · GW(p)

Upon rereading the thread and all of its comments, I suspect the person I originally quoted meant something along the lines of "preferring 1A to 1B but 2B to 2A is irrational", which seems more defensible.

There is nothing irrational about preferring 1A or 2B by itself; it's choosing the first option in the first scenario and the second option in the second, together, that's dodgy.

comment by Richard_Hollerith2 · 2008-01-20T03:42:35.000Z · LW(p) · GW(p)

Nick is right to object, but removing the phrase "on amounts of money" makes the statement unobjectionable -- and relevant and true.

comment by Doug_S. · 2008-01-20T04:59:09.000Z · LW(p) · GW(p)

Is Pascal's Mugging the reductio ad absurdum of expected value?

comment by Joseph_Hertzlinger · 2008-01-20T05:29:10.000Z · LW(p) · GW(p)

This may be related to the phenomenon of overconfident probability estimates. I would not be surprised to find that people who claim a 97% certainty have a real 90% probability of being right. Maybe someone who hears there's 1 chance in 34 of winning nothing interprets that as coming from an overconfident estimator whereas the 34% and 33% probabilities are taken at face value.

On the other hand, the overconfidence detector seems to stop working when faced with asserted certainty.

comment by Ian_Maxwell · 2008-01-20T05:34:48.000Z · LW(p) · GW(p)

"Nainodelac and Tarleton Nick": This is not about risk aversion. I agree that if it is vital to gain at least $20,000, 1A is a superior choice to 1B. However, in that case, 2A is also a superior choice to 2B. The error is not in preferring 1A, but in simultaneously preferring 1A and 2B.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-01-20T06:05:43.000Z · LW(p) · GW(p)

Is Pascal's Mugging the reductio ad absurdum of expected value?

No. I thought it might be! But Robin gave an excellent reason of why we should genuinely penalize the probability by a proportional amount, dragging the expected value back down to negligibility.

(This may be the first time that I have presented an FAI question that stumped me, and it was solved by an economist. Which is actually a very encouraging sign.)

comment by Unknown · 2008-01-20T06:23:03.000Z · LW(p) · GW(p)

This discussion reminded me of the Torture vs. Dust Specks discussion; i.e. in that discussion, many comments, perhaps a majority, amounted to "I feel like choosing Dust Specks, so that's what I choose, and I don't care about anything else." In the same way, there is a perfectly consistent utility function that can prefer 1A to 1B and 2B to 2A, namely one that sets utility on "feeling that I have made the right choice", and which does not set utility on money or anything else. Both in this case and in the case of the Torture and Dust Specks, many comments indicate a utility function which places value on the feeling of having made a right choice, without regard for anything else, especially for whether or not the choice was actually right, or for the consequences of the choice.

comment by denis_bider · 2008-01-20T09:17:00.000Z · LW(p) · GW(p)

Not sure if anyone pointed this out, but in a situation where you don't trust the organizer, the proper execution of 1A is a lot easier to verify than the proper execution of 1B, 2A and 2B.

1A minimizes your risk of being fooled by some hidden cleverness or violation of the contract. In 1B, 2A and 2B, if you lose, you have to verify that the random number generator is truly random. This can be extremely costly.

In option 1A, verification consists of checking your bank account and seeing that you gained $24,000. Straightforward and simple. Hardly any risk of being deceived.

comment by Nick_Tarleton · 2008-01-20T17:30:00.000Z · LW(p) · GW(p)

I hate to discuss this again, but...

Is Michael Vassar's variant Pascal's Mugging (with the pigs), bypassing as it does Robin's objection, the reductio of expected value? If you don't care about pigs, substitute something else really really bad that doesn't require creating 3^^^3 humans.

comment by steven · 2008-01-20T18:10:00.000Z · LW(p) · GW(p)

It's simple to show that no rational person would actually give money to a Pascal mugger, as the next mugger might threaten 4^^^4 people. I'm not sure whether this solves the problem or just sweeps it under the rug, though.

comment by Doug_S. · 2008-01-20T22:27:00.000Z · LW(p) · GW(p)

Well, if Pascal's Mugging doesn't do it, how about the St. Petersburg paradox? ;)

Oh wait... infinite set atheist... never mind.

comment by Wendy_Collings · 2008-01-20T22:37:00.000Z · LW(p) · GW(p)

I'm afraid I don't follow the maths involved, but I'd like to know whether the equations work out differently if you take this premise:

- Since 1A offers a certainty of $24,000, it is deemed to be immediately in your possession. 1B then becomes a 33/34 chance of winning $3,000 and 1/34 chance of losing $24,000.

Can someone tell me how this works out mathematically, and how it then compares to 2B?

comment by Bayesian · 2008-01-21T13:53:00.000Z · LW(p) · GW(p)

The Allais Paradox is indeed quite puzzling. Here are my thoughts:

0. Some commenters simply dismiss Bayesian reasoning. This doesn't solve the problem, it just strips us of any mathematical way to analyze the problem. On the other hand, the fact that the inconsistent choice seems ok does mean that the Bayesian way is missing something. Simply dismissing the inconsistent choice doesn't solve the problem either.

1. If I understand correctly, you argue that situation 1 can be turned into situation 2 by randomization. In other words, if you sell me situation 1, I can sell somebody else (named X) situation 2 by throwing some dice and using your offer. More specifically, I throw a 100-sided die. If it's > 34, X loses. Otherwise, I play X's option with you. However, this can't be reversed. Given only situation 2, I can't sell situation 1, assuming I have only $0 initial capital.

Hence, it seems that assuming invertibility of situations (I can both buy and sell them) and unlimited money buffers for that purpose are important for the demanded consistency.

comment by CarlShulman · 2008-01-21T16:08:00.000Z · LW(p) · GW(p)

Nick,

"Is Michael Vassar's variant Pascal's Mugging (with the pigs), bypassing as it does Robin's objection, the reductio of expected value? If you don't care about pigs, substitute something else really really bad that doesn't require creating 3^^^3 humans."
The Porcine Mugging doesn't bypass the objection. Your estimates of the frequency of simulated people and pigs should be commensurably vast, and it is vastly unlikely that your simulation (out of many with intelligent beings) will be selected for an actual Porcine Mugging that will consume vast resources (enough to simulate vast numbers of humans). These things offset to get you workable calculations.

comment by mitchell_porter2 · 2008-01-24T14:21:00.000Z · LW(p) · GW(p)

I would have chosen 1A and 2B, for the following reasons: Any sum of the order of $20,000 would revolutionize my personal circumstances. The likely payoff is enormous. Therefore, I'd pick 1A because I'd get such a sum guaranteed, rather than run the 3% risk (1B) of getting nothing at all. Whereas choice 2 is a gamble either way, so I am led to treat both options as qualitatively the same. But that's a mistake: if the value of getting either nonzero payoff at all is so great, then I should have favored the 34% chance of winning something over the 33% chance, just as I favored the 100% chance over the ~97% chance in choice 1. Interesting.

comment by Phirand_Ice · 2008-01-24T16:13:00.000Z · LW(p) · GW(p)

Surely the answer is dependent on the goal criterion. If the goal is to get 'some' money, then the 100% and 34% options are better. If your goal is to get 'the most' money, then the 97% and 33% options are better. However, the goal might be socially constructed. This reminded me of John Nash, who offered one of his secretaries $15 if she shared it equally with a co-worker, but $10 if she kept it for herself. She took the $15 and split it with her co-worker. She chose an option that maximised her social capital but was a weaker one economically.

comment by Michael_Osborne · 2008-09-07T18:49:00.000Z · LW(p) · GW(p)

I agree with Dagon.

This experiment assumes that the subjective probabilities of participants were identical to the stated probabilities. In reality, I feel like people are probably wary of stated probabilities due to experiences with or fears of shysters and conmen. That is, if asked to choose between 1A and 1B, 1B offers the possibility that the 'randomising mechanism' that the experimenter is offering is in fact rigged.

Even if the experimenter is completely honest in their statement of their own subjective probabilities, they may simply disagree with those of the participants. Whatever 'randomising mechanism' is suggested is, of course, almost certainly completely predictable given sufficient information - a die roll, or similar, predictable using Newtonian mechanics. That is, the experimenter's stated probability is purely a reflection of their own information concerning that mechanism, which may be completely at odds with the participant's knowledge.

comment by Wei_Dai2 · 2009-02-01T22:52:00.000Z · LW(p) · GW(p)

Eliezer, I see from this example that the Axiom of Independence is related to the notion of dynamic consistency. But, the logical implication goes only one way. That is, the Axiom of Independence implies dynamic consistency, but not vice versa. If we were to replace the Axiom of Independence with some sort of Axiom of Dynamic Consistency, we would no longer be able to derive expected utility theory. (Similarly with dutch book/money pump arguments, there are many ways to avoid them besides being an expected utility maximizer.)

I'm afraid that the Axiom of Independence cannot really be justified as a basic principle of rationality. Von Neumann and Morgenstern probably came up with it because it was mathematically necessary to derive Expected Utility Theory, then they and others tried to justify it afterward because Expected Utility turned out to be such an elegant and useful idea. Has anyone seen Independence proposed as a principle of rationality prior to the invention of Expected Utility Theory?

Replies from: torekp
comment by torekp · 2011-03-05T22:16:46.518Z · LW(p) · GW(p)

I'm equally afraid ;). The Axiom of Independence is intuitively appealing to me, but I don't posit it to be a basic principle of rationality, because that smells like a mind projection fallacy. I suspect you're right, also, about dutch book/money pump arguments.

I tentatively conclude that a rational agent need not evince preferences that can be represented as an attempt to maximize such a utility function. That doesn't mean Expected Utility Theory can't be useful in many circumstances or for many agents, but this still seems like important news, which merits more discussion on Less Wrong.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-03-06T10:33:36.785Z · LW(p) · GW(p)

which merits more discussion on Less Wrong.

Have you read these posts?

comment by Tim_Tyler · 2009-05-04T11:18:00.000Z · LW(p) · GW(p)

Agree with Denis. It seems rather objectionable to describe such behaviour as irrational. Humans may well not trust the experimenter to present the facts of the situation to them accurately. If the experimenter's dice are loaded, choosing 1A and 2B could well be perfectly rational.

comment by CarlShulman · 2009-05-04T12:03:00.000Z · LW(p) · GW(p)

"That is, the Axiom of Independence implies dynamic consistency, but not vice versa."
Really? A hyperbolic discounter can conform to the Axiom of Independence at any particular time and be dynamically inconsistent.

comment by JohnDavidBustard · 2010-08-25T16:59:43.525Z · LW(p) · GW(p)

I would love to know if the results are different if you repeatedly expose people to the situation rather than communicate it in a formal way. They are likely to observe the outcomes of their strategy and adapt. Perhaps what is being measured is simply the numeracy of the subjects and not their practical ability to determine optimal strategies.

The lottery is another interesting example: what is being bought is the probability of a big win, not a statistically optimal investment. Playing the lottery genuinely increases the chance of you suddenly gaining a life-changing amount of money. This is a perfectly rational choice.

Replies from: AlephNeil
comment by AlephNeil · 2010-08-25T18:13:39.238Z · LW(p) · GW(p)

This is a perfectly rational choice.

What about the Allais paradox? Imagine someone who is happy to play the lottery but would refuse to play an alternative version where the ticket merely confers a slight increase on a significant pre-existing probability of winning 'life changing money'. (As I understand it, most/all lottery players would in fact refuse the 'alternative' gamble.) Do you want to say that such a person is 'perfectly rational'? Would you call them perfectly rational if they accepted both gambles (despite both of them having negative EV)?

To be fair, It is possible to tell a consistent story about a person for whom either gamble would be rational: Perhaps the Earth is going to be destroyed soon and the cost of entry into the new self-sustaining Mars colony equals the lottery jackpot.

But needless to say, most people aren't in situations remotely resembling this one.

Replies from: JohnDavidBustard, Kingreaper
comment by JohnDavidBustard · 2010-08-26T10:00:00.562Z · LW(p) · GW(p)

Thank you for your comments.

I think the Allais paradox is fascinating; however, although it is very revealing about our likely motives for playing the lottery, it doesn't change the potential rationality of actually playing it. I.e. money and value don't necessarily have a linear relationship, and so optimising for EV is not rational.

Although, I feel that the likely answer is that the brain is optimised for rapid responses to survival problems and these solutions may well be an optimal response given constraints on both processing and expected outcome.

Another perspective is that, in general, specifications are not accurate but are instead a communication of experience. Suppose the problem specification is viewed instead as a measurement of a system, where the placing of bets is an input and the output is not random but the outcome of an unknown set of interactions. Systems encountered in the past will form a probability distribution over their behaviour, and the frequency of observed consequences then acts as a measurement of the likelihood that the system in question is equivalent to one of these types. This would explain the feeling of switching between the two examples (they constitute the likely outcomes of two types of system) and thus represent situations where distinct behaviours were appropriate.

I.e. as one starts to understand an existing system one gets diminishing returns for optimising interaction with it (a good example is AI programming itself), however systems may be unknown to the user. These unknown systems may demonstrate rare, but highly beneficial or unexpected events, like noticing an anomaly in a physics experiment. In this case it is rational to play/interact as doing so provides more information which may be used to identify the system and thus lead to understanding and thus an expected benefit in the future.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-08-26T10:44:27.163Z · LW(p) · GW(p)

I think the Allais paradox is fascinating, however, although it is very revealing about our likely motives for playing the lottery it doesn't change the potential rationality of actual playing it. I.e. that money and value don't necessarily have a linear relationship, and so optimising for EV is not rational.

Of course, that just means you maximise expected utility rather than expected money. (I was almost going to write "expected value" instead of "expected utility" as you used the word "value", but obviously that would be confusing in this context...)

Replies from: JohnDavidBustard
comment by JohnDavidBustard · 2010-08-26T12:55:20.300Z · LW(p) · GW(p)

Yes, absolutely, apologies for my unfamiliarity with the terms.

The point I'm trying to make is that lottery playing optimises utility (assuming utility means what is considered valuable to the person). Saying that lottery playing is irrational is making a statement about what is valuable more than it does about what is reasonable.

comment by Kingreaper · 2010-11-14T12:33:52.386Z · LW(p) · GW(p)

Imagine someone who is happy to play the lottery but would refuse to play an alternative version where the ticket merely confers a slight increase on a significant pre-existing probability of winning 'life changing money'. (As I understand it, most/all lottery players would in fact refuse the 'alternative' gamble.)

This is likely because playing the lottery gives you "hope" of a life-changing event. It means that you KNOW there is a possible life-changing event available.

If you already have that knowledge, then paying for the lottery becomes just about the money; which isn't worthwhile. If you don't, paying for the lottery is buying that knowledge, and the knowledge has value to you.

comment by Kingreaper · 2010-11-14T12:26:57.539Z · LW(p) · GW(p)

Ummm, no. The money pump fails because of the REASON for the preference difference.

The reason is, as some have already stated, that in scenario 1B if you lose you know it's your fault you got nothing. In scenario 2B if you lose, you can rationalise it easily as "Would have lost anyway"

In your money pump scenario, we have a 1/3rd chance of playing 1. If we get to play 1, we know we're playing 1. So your money pump fails, because a standard player would prefer that the switch be on A at all times.

comment by David_Gerard · 2010-12-07T17:59:55.882Z · LW(p) · GW(p)

How do I alleviate feeling pleased at myself for having read the statement of the paradox - that people preferred 1A>1B but 2B>2A - and immediately going "WHAT?" and boggling at the screen and pulling confused faces for about thirty seconds, so flabbergasted I had to reread that this choice pattern was common?

(Personally I'm really strongly biased these days toward a bird in the hand and would have chosen 1A and 2A every time. I occasionally do bits of sysadmin for dodgy dot-coms that friends are working for. There are people who offer equity; I take an hourly fee. "No, no, that's fine, I am but humble roadie." This may not always be the best life strategy, but it seems to work for me at present.)

Replies from: shokwave
comment by shokwave · 2010-12-07T18:20:16.464Z · LW(p) · GW(p)

There are people who offer equity; I take an hourly fee.

Penalise expected value of equity because probability is lower than I have been led to believe - an incredibly useful heuristic.

How do I alleviate feeling pleased at myself

In 33/34ths of the worlds where you make choice A in 1, you are mercilessly teased and mocked by your inferiors, a la this, thirty seconds in, for not picking B. Assuming counterfactual outcomes are revealed.

Replies from: David_Gerard
comment by David_Gerard · 2010-12-07T18:23:28.475Z · LW(p) · GW(p)

I'll just have to cry myself to sleep on a big bed made of $24,000!

comment by handoflixue · 2011-04-12T17:41:19.823Z · LW(p) · GW(p)

It took me 30 minutes of sitting down and doing math before I could finally accept that 1A+2B was an irrational preference. I finally realized that a lot of it came down to: with a 66% vs 67% chance of losing, I could take the riskier option and not feel as bad, because I could sweep it under the rug with "oh, I probably would have lost anyways."

Once I ran a scenario where I'd KNOW whether it was that 1% that I controlled, or the 66% that I didn't control, that comfort evaporated.

I learned a lot about myself by working through this exercise, so thank you very much :)

comment by mendel · 2011-05-26T15:15:27.980Z · LW(p) · GW(p)

The problem as stated is hypothetical: there is next to no context, and it is assumed that the utility scales with the monetary reward. Once you confront real people with this offer, the context expands, and the analysis of the hypothetical situation falls short of being an adequate representation of reality, not necessarily because of a fault of the real people.

Many real people use a strategy of "don't gamble with money you cannot afford to lose"; this is overall a pretty successful strategy (and if I was looking to make some money, my mark would be the person who likes to take risks - just make him subsequently better offers until he eventually loses, and if he doesn't, hit him over the head, take the now substantial amount of money and run). To abandon this strategy just because in this one case it looks as if it is somewhat less profitable might not be effective in the long run. (In other circumstances, people on this site talk about self-modification to counter some expected situations, as in one-boxing vs. two-boxing; can we consider this strategy such a self-modification?)

Another useful real-life strategy is, "stay away from stuff you don't understand" - $24,000 free and clear is easier to grasp than the other offer, so that strategy favors 1A as well, and doesn't apply to 2A vs. 2B because they're equally hard to understand. The framing of offer two also suggests that the two offers might be compared by multiplying percentage and values, while offer 1 has no such suggestion in branch 1A.

We're looking at a hypothetical situation, analysed for an ideal agent with no past and no future - I'm not surprised the real world is more complex than that.

Replies from: wedrifid, None
comment by wedrifid · 2011-05-26T16:11:10.544Z · LW(p) · GW(p)

The problem is not with the hypothetical. It is with the intuition. Intuitions which really do prompt bad decisions in the real life circumstances along these lines.

Replies from: mendel
comment by mendel · 2011-05-27T02:11:40.881Z · LW(p) · GW(p)

You seem to have examples in mind?

Replies from: Pavitra
comment by Pavitra · 2011-05-27T02:22:24.903Z · LW(p) · GW(p)

The lottery comes immediately to mind. You can't be absolutely sure that you'll lose.

comment by [deleted] · 2011-05-26T16:17:53.728Z · LW(p) · GW(p)

it is assumed that the utility scales with the monetary reward.

Not necessarily. It is assumed that receiving $24000 is equally good in either situation. Your utility function can ignore money entirely (in which case 1A2A is irrational because you should be indifferent in both cases). You can use the utility function which prefers not to receive monetary rewards divisible by 9: in this case, 1A>1B and 2A>2B is your best bet, giving you 100% and 34% chances to avoid 9s, rather than 0% chances. In general, your utility function can have arbitrary preferences on A and B separately; but no matter what, it will prefer 1A to 1B if and only if it prefers 2A to 2B.

As for the rest of your reply -- yes, it is true that real people use strategies ("heuristic" is the word used in the original post) that lead them to choose 1A and 2B. That's sort of why it's a paradox, after all. However, these strategies, which work well in most cases, aren't necessarily the best in all cases. The math shows that. What the math doesn't tell us is which case is wrong.

My own judgment, for this particular sum of money (which is high relative to my current income), is that choice 1A is correctly better than choice 1B, in order to avoid risk. However, choice 2A is also better than choice 2B, upon reflection, even though my intuitions tell me to go with 2B. This is because my intuitions aren't distinguishing 33% and 34% correctly.

In reality, faced with the opportunity to earn amounts on the order of $20K, I should maximize my chances to walk away with something. In the first case, I can maximize them fully, to 100%, which triggers my "success!" instinct or whatever: I know I've done everything I can because I'm certain to get lots of money. In the second case, I don't get any satisfaction from the correct decision, because all I've done is improve my chances by 1%.

In general, the heuristic that 1% chances are nearly worthless is correct, no matter what's at stake: I can usually do better by working on something that will give me a 10% or 25% chance. In this case, this heuristic should be ignored, because there is no effort spent making the improvement, and furthermore, there isn't really anything else I can do.

On the other hand, suppose that the amount of money at stake is $2.40 or $2.70. Suddenly, our risk-aversion heuristic is no longer being triggered at all (unless you're really strapped for cash), and we have no problem doing the utility calculation. Here, 1A<1B and 2A<2B is the correct choice.

Replies from: mendel
comment by mendel · 2011-05-27T02:10:10.206Z · LW(p) · GW(p)

The utility function has as its input only the monetary reward in this particular instance. Your idea that risk-avoidance can have utility (or that 1% chances are useless) cannot be modelled with the set of equations given to analyse the situation (the probability is not an input to the U() function) - the model falls short because the utility attaches only to the money and nothing else. (Another example of a group of individuals for whom the risk itself might out-utilize the reward is gambling addicts.) Security is, all other things being equal, preferred over insecurity, and we could probably devise some experimental setup to translate this into a monetary utility equivalent (i.e. how much is the test subject prepared to pay for security and predictability? That is the margin of insurance companies, btw). :-P

I wanted to suggest that a real-life utility function ought to consider even more: not just the single case, but also the strategies used in this case - do these strategies or heuristics have better utility in my life than trying to figure out the best possible action for each problem? In that case, an optimal strategy may well be suboptimal in some cases, but work well over a realistic lifetime filled with probable events, even if you don't contrive a $24000 life-or-death operation. (Should I spend two years of my life studying more statistics, or work on my father's farm? The farm might profit me more in the long run, even if I would miss out if somebody made me the 1A/1B offer, which is very unlikely, making that strategy the rational one in the larger context, though it appears irrational in the smaller one.)

Replies from: None
comment by [deleted] · 2011-05-27T18:34:08.585Z · LW(p) · GW(p)

Risk-avoidance is captured in the assignment of U($X). If the risk of not getting any money worries you disproportionately, that means that the difference U($24K) - U($0) is higher than 8 times the difference U($27K) - U($24K).

Replies from: mendel
comment by mendel · 2011-05-27T21:30:56.608Z · LW(p) · GW(p)

That's a neat trick; however, I am not sure I understand you correctly. You seem to be saying that risk-avoidance does not explain the 1A/2B preference, because your assignment captures risk-avoidance and yet does not lead to that preference. (It does lead to risk-avoidance as you read the term, though - just not to the 1A/2B preference.)

Your assignment looks like "diminishing utility", i.e. a utility function where the utility scales up subproportionally with money (e.g. twice the money must have less than twice the utility). Do you think diminishing utility is equivalent to risk-avoidance? And if so, can you explain why?

Replies from: None
comment by [deleted] · 2011-05-27T22:31:04.046Z · LW(p) · GW(p)

I think so, but your question forces me to think about it harder. When I thought about it initially, I did come to that conclusion -- for myself, at least.

[I realized that the math I wrote here was wrong. I'm going to try to revise it. In the meantime, another question. Do you think that risk avoidance can be modeled by assigning an additional utility to certainty, and if so, what would that utility depend on?]

Also, thinking about the paradox more, I've realized that my intuition about probabilities relies significantly on my experience playing the board game Settlers of Catan. Are you familiar with it?

Replies from: mendel
comment by mendel · 2011-05-28T11:05:59.144Z · LW(p) · GW(p)

One way to get to the desired outcome is to replace U(x) with U(x,p) (with x being the money reward and p the probability to get it), and define U(x,p)=2x if p=1 and U(x,p)=x otherwise. I doubt that this is a useful model of reality, but mathematically, it would do the trick. My stated opinion is that this special case should be looked at in the light of more general strategies/heuristics applied over a variety of situations, and this approach would still fall short of that.
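
For concreteness, a minimal sketch of that trick (my own illustration, valuing a gamble as p · U(x, p), which is one way to read the construction above):

```python
def u(x, p):
    # Ad-hoc certainty bonus: an outcome received with certainty counts double.
    return 2 * x if p == 1 else x

def value(prize, p):
    # "Expected utility" of a gamble paying `prize` with probability p, else nothing.
    return p * u(prize, p)

print(value(24_000, 1.0))      # 1A: 48000
print(value(27_000, 33 / 34))  # 1B: ~26206 -> 1A preferred
print(value(24_000, 0.34))     # 2A: 8160
print(value(27_000, 0.33))     # 2B: 8910   -> 2B preferred
```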

I know Settlers of Catan, and own it. It's been awhile since I last played it, though.

Your point about games made me aware of a crucial difference between real life and games, or other abstract problems of chance: in the latter, chances are always known without error, because we set the game (or problem) up to have certain chances. In real life, we predict events either via causality (100% chance, no guesswork involved, unless things come into play we forgot to consider), or via experience / statistics, and that involves guesswork and margins of error. If there's a prediction with a 100% chance, there is usually a causal relationship at the bottom of it; with a chance less than 100%, there is no such causal chain; there must be some factor that can thwart the favorable outcome; and there is a chance that this factor has been assessed wrong, and that there may be other factors that were overlooked. Worst case, a 33/34 chance might actually only be 30/34 or less, and then I'd be worse off taking the chance. Comparing a .33 with a .34 chance makes me think that there's gotta be a lot of guesswork involved, and that, with error margins and confidence intervals and such, there's usually a sizeable chance that the underlying probabilities might be equal or reversed, so going for the higher reward makes sense.

[rewritten] Imagine you are a mathematical advisor to a king who asks you to advise him on a course of action and to predict the outcome. In situation 2, you can pretty much advise whatever, because you'll predict a failure; the outcome either confirms your prediction, or is a lucky windfall, so the king will be content with your advice in hindsight. In situation 1, you'll predict a gain; if you advised A, your prediction is certain to be confirmed, but if you advised B, there's a chance it won't be, with the king angry at you because he didn't make the money you predicted he would. Your career is over. -- Now imagine a collection of autonomous agents, or a bundle of heuristics fighting for Darwinist survival, and you'll see what strategy survives. [If you like stereotypes, imagine the "king" as "mathematician's non-mathematical spouse". ;-)]

Replies from: None
comment by [deleted] · 2011-05-29T14:02:43.376Z · LW(p) · GW(p)

One way to do it to get to the desired outcome is to replace U(x) with U(x,p) (with x being the money reward and p the probability to get it), and define U(x,p)=2x if p=1 and U(x,p)=x, otherwise.

The problem with this is that dealing with p=1 is iffy. Ideally, our certainty response would be triggered, if not as strongly, when dealing with 99.99% certainty -- for one thing, because we can only ever be, say, 99.99% certain that we read p=1 correctly and it wasn't actually p=.1 or something! Ideally, we'd have a decaying factor of some sort that depends on the probabilities being close to 1 or 0.

The reason I asked is that it's very possible that a correct model of "attaching a utility to certainty" would be equivalent to a model with diminishing utility of money. If that were the case, we would be arguing over nothing. If not, we'd at least stand a chance of formulating gambles that clarify our intuitions, if we knew what the alternatives were.

Comparing a .33 with a .34 chance makes me think that there's gotta be a lot of guesswork involved, and that, with error margins and confidence intervals and such, there's usually a sizeable chance that the underlying probabilities might be equal or reversed, so going for the higher reward makes sense.

If the 33% and 34% chances are in the middle of their error margins, which they should be, our uncertainty about the chances cancels out and the expected utility is still the same. Going for the higher expected value makes sense.

I brought up Settlers of Catan because, if I imagine a tile on the board with $24K and 34 dots under it, and another tile with $27K and 33 dots, suddenly I feel a lot better about comparing the probabilities. :) Does this help you, or am I atypical in this way?

Imagine you are a mathematical advisor to a king who asks you to advise him of a course of action and to predict the outcome.

Obviously with the advisor situation, you have to take your advisee's biases into account. The one most relevant to risk avoidance is, I think, the status quo bias: rather than taking into account the utility of the outcomes in general, the king might be angry at you if the utility becomes worse, and not as picky if the utility becomes better (than it is now). You have to take your own utility into account, which depends not on the outcome but on your king's satisfaction with it.

comment by Luke_A_Somers · 2011-08-26T11:39:18.591Z · LW(p) · GW(p)

I wonder how the results would change if the experiment changes so that the outcomes of 2B are, "You have a 33% chance of receiving $27k, a 66% chance of not getting anything, and a 1% chance of having someone laugh in your face for not picking 2A"

comment by Surunveri · 2011-12-13T10:01:45.126Z · LW(p) · GW(p)

If you asked any person capable of doing the math whether they would want to play 1A or 1B a thousand times, you'd probably get a different answer, but not an answer that's more correct.

Also, the utility value of money is not directly proportional to the amount of money. Imagine that you needed $1,000 to save your dying relative with certainty by paying for his/her treatment. That is good enough to explain 1A > 1B, but it doesn't resolve the contradiction with 2B > 2A.

An even more revealing variation turns exactly on the certainty. Suppose you were presented with these two questions in such a fashion that you would only get the money and learn the result one month after making your choice. By selecting 1A you would have a 0% chance that the plans you make would fail, while with 1B you would have a 1/34 chance that they would fail. Meanwhile, regardless of whether you select 2A or 2B, you have to face uncertainty, so you would be frustrated while trying to make plans that are conditional on getting the money.

As these conditions are not present in the way the problem is presented, it's possible to rule this kind of instinctive judgment flawed; but as it turns out, it's not foolish on a general level. You could even claim that it's costly to perform the calculation that tells you whether the assurance is worth it - though of course, instead of saying that, you should just figure out how much value this assurance has in each given situation.

Replies from: Vaniver
comment by Vaniver · 2011-12-13T12:37:56.989Z · LW(p) · GW(p)

You're right that certainty helps out with planning, and so certainty can be valuable sometimes. It's still a bias to unconsciously add in a value for certainty if you don't need it in this case, even if it sometimes pays off, and so it's worth thinking through the 'paradox.'

Replies from: Surunveri
comment by Surunveri · 2011-12-13T18:30:38.149Z · LW(p) · GW(p)

I wanted to point out that this flaw is not a foolish flaw. That's how we create plans: we project and create expectations, and the anticipated feeling of loss is frustrating to plan around. In a theoretical example you might make a bad decision, but doesn't this same flaw cause you to make good decisions in actual real-world situations? Those don't tend to occur in such theoretical forms, where you have all the required information available and no context.

If you actually encountered this problem in a real-world situation, you might end up making a bad decision by handling it with too theoretical an approach. What if I told you that you get to play both games, and actually get to choose in both, when you come to visit me - but you didn't have money to pay for the plane ticket? What if you took out a loan? Without the certainty of 1A you might end up in a bad situation where you lack the means to pay the loan back - in other words, a decision-making agent with this flaw handles the situation well. But of course you can take all that into account. And as this is a problem about rationality, I think it's pretty important to note these things.

Anyway I agree with you, Vaniver =)

comment by William_Kasper · 2011-12-29T15:46:04.682Z · LW(p) · GW(p)

Please correct me if any of my assumptions are inaccurate, and I apologize if this comment comes off as completely tautological.

Expected utility is explicitly defined as the statistic

E[U(X)] = Σ_{x ∈ X} p(x) U(x)

where X is the set of all possible outcomes associated with a particular gamble, p(x) is the proportion of times that outcome x occurs within the gamble, and U(x) is the utility of outcome x, a function that must be strictly increasing with respect to the monetary value of outcome x.

To reduce ambiguity:

  • 1A, 1B, 2A, and 2B are instances of gambles.

  • For 1B, the possible outcomes are $27000 and $0.

  • For 1B, the expected utility is p($27000) * U($27000) + p($0) * U($0) = 33/34 * U($27000) + 1/34 * U($0).

If you choose 1A over 1B and 2B over 2A, what can we conclude?

  • that you are not using the rule "maximize expected utility" to make your decisions. Thus you do not fit the definition, as given by the Axiom of Independence, of consistent decision making.

If you choose 1A over 1B and 2B over 2A, what can we not conclude?

  • that your decision rule changes arbitrarily. You could, for example, always follow the rule, "Maximize minimum net utility. In the case of a tie, maximize expected utility." In this case, you would choose 1A and 2B. (A sketch of this rule follows after this list.)

  • that you would be wrong or stupid for using a different decision rule when you only get to play one time, than the rule you would use when you get to play 100 times.
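
For what it's worth, here is a minimal sketch of that lexicographic rule (maximin first, expected utility as tie-breaker), using dollars as stand-in utilities; it does pick 1A and 2B:

```python
def choose(gambles, utility=lambda x: x):
    """Pick the gamble with the best worst-case utility; break ties by expected utility.

    Each gamble is a list of (probability, outcome) pairs."""
    def key(g):
        worst = min(utility(x) for _, x in g)
        expected = sum(p * utility(x) for p, x in g)
        return (worst, expected)
    return max(gambles, key=key)

g1a = [(1.0, 24_000)]
g1b = [(33 / 34, 27_000), (1 / 34, 0)]
g2a = [(0.34, 24_000), (0.66, 0)]
g2b = [(0.33, 27_000), (0.67, 0)]

print(choose([g1a, g1b]) is g1a)  # True: 1A wins outright on the maximin step
print(choose([g2a, g2b]) is g2b)  # True: maximin ties at $0, so 2B wins on expectation
```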

Replies from: thomblake
comment by thomblake · 2011-12-29T20:08:32.811Z · LW(p) · GW(p)

That all seems pretty uncontroversial.

comment by ricketson · 2012-01-15T19:28:45.682Z · LW(p) · GW(p)

I initially chose 1A and 2B, but after reading the analysis of those decisions, I agree that they are inconsistent in a way that implies that one choice was irrational (in the context of this silly little game). So I did some introspection to figure out where I went wrong. Here's what I found:

1) I may have misjudged how small 1/34 is, and this only became apparent when the question was phrased as it is in example 2.

2) I think I assumed implicit costs in these gambles. The first cost is a delay in learning the outcome of these gambles; the second is the implicit need to work to earn this money. I think that these assumptions are reasonable because there is essentially no realistic condition in which I would instantly see the results of a decision that might earn me $27,000; there would probably be a delay of several months (if working) or years (if investing) between making the decision and learning whether I got the money or not. This prolonged uncertainty has a negative utility, since I am unable to make firm plans for the money during that interval. This negative utility would apply to all options except 1A. Furthermore, earning $24,000 would realistically require several months of work on my part. However, a project that had a 1/3 chance of paying out $24,000 might only take a month. The implicit difference in opportunity cost between scenario 1 and scenario 2 has implications for the marginal utility of money in each scenario (making me more risk-averse in scenario 1, which implicitly has a higher opportunity cost).

These implicit costs are not specified in this game, so it is technically "irrational" to incorporate them into my decision-making. However, in any realistic scenario, such costs will exist (regardless of what the salesman says), so it is good that I/we intuitively include them in my/our decision-making.

comment by jsalvata · 2012-04-18T00:40:42.741Z · LW(p) · GW(p)

While Eliezer's argument is still correct (that you should multiply to make decisions based on probabilistic knowledge), I see a perfectly rational and utilitarian explanation for choosing 1A and 2B in the stated problem.

The clue lies in Colin Reid's comment: "people do not ascribe a low positive utility to winning nothing or close to nothing - they actively fear it". This fear is explained by Kingreaper: "in scenario 1B if you lose you know it's your fault you got nothing".

That makes the two cases, stated as they are, different. In game 1 the utility U1($0) is negative: a sense of guilt (or shame) over having made the bad choice, which doesn't seem possible in game 2 (because game 2 is stated in terms of abstract probabilities, see below).

This makes the inequalities compatible:

U($24,000)   >   33/34 U($27,000) + 1/34 U1($0)

e.g. 24 > 33/34 · 27 + 1/34 · -1000

0.34 U($24,000) + 0.66 U2($0)   <   0.33 U($27,000) + 0.67 U2($0)

e.g. 0.34 · 24 + 0.66 · 0 < 0.33 · 27 + 0.67 · 0
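
A quick check of the two example inequalities above (a minimal Python sketch, same numbers):

```python
U24, U27 = 24, 27
U1_0 = -1000  # game 1: ending with $0 after turning down a sure $24k carries guilt/shame
U2_0 = 0      # game 2: ending with $0 just feels like bad luck

print(U24 > 33 / 34 * U27 + 1 / 34 * U1_0)                  # True: 24 > ~-3.2
print(0.34 * U24 + 0.66 * U2_0 < 0.33 * U27 + 0.67 * U2_0)  # True: 8.16 < 8.91
```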

Note that stating the game with the "switch" rule turns game 2 into one (let's call it 3) in which the guilt/shame reappears, making U3=U1 -- so a rational player with the described negative U1 would choose A in game 3 and there would be no money pump.

This solution to the paradox is less valid if it is made clear that the subject will be allowed to play the game many times.

Another interesting way to remove this as a possible solution would be to restate case 2 in more concrete terms, to make it clear that you won't get away without knowing that "it was your fault" if you lose:

4A. If a 100-sided die lands on <=34, win $24,000; otherwise win nothing.

4B. If a 100-sided die lands on <=33, win $27,000; otherwise win nothing.

Just to prevent the subject from pattern-matching without thinking, we should add the phrase "note that if the die lands on 34 and you've chosen A, you win $24k, but if you've chosen B, you get nothing".

I believe game 4 is pretty much equivalent to game 3 (the one with the switch).

I've checked Allais' document and it suffers from the same flaw: it's not an actual experiment in which people are asked to choose A or B and are actually allowed to play the game, but a questionnaire asking subjects what they would choose. This is not the same, among other reasons because it doesn't force the experimenter or subject to detail the mechanics of the game (and hence it is not stated whether the subject will be given that sense of shame or even allowed to "chase the rabbit").

It would be interesting to know the result of an actual experiment with this design, possibly with smaller figures to reduce the non-linearity of the utility functions (since that's not what's being discussed here), and with innumerate subjects filtered out (since those are beyond hope anyway).

Replies from: Vaniver
comment by Vaniver · 2012-04-18T01:28:34.964Z · LW(p) · GW(p)

That makes the two cases, stated as they are, different. In game 1 the utility of U1($0) has negative value: a sense of guilt (or shame) over having made the bad choice, which doesn't seem possible in game 2 (because game 2 is stated in terms of abstract probabilities, see below).

If you could choose whether or not to have this guilt, would you choose to have it? Does it make you better off?

comment by avichapman · 2012-05-01T05:53:49.121Z · LW(p) · GW(p)

I know this was posted 4 years ago, but I had a thought. If I was offered a certainty of $24,000 vs a 33/34 chance of $27,000, my preference would depend on whether this was a once-off. If this was a once-off, my primary concern would be securing the money and being able to put food on the table tonight. Option 1 will put food on the table with 100% certainty, while Option 2 will not.

If, however, the option was to be offered many times, I would optimise for greatest return - Option 2. If I miss out this month, I'll just scrape for food until next month, when chances are I'll get the money.

I think I just answered my own question. If my goal can be reached with $24,000, then Option 1 is the best one because it reaches the goal in one guaranteed fell swoop. However, if my goal is to make lots of money, then Option 2 is the way to go, because it makes the most over time.

That make sense to anyone?

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2012-05-01T06:22:16.849Z · LW(p) · GW(p)

It absolutely can make sense to prefer option 1A over option 1B (which I think is what you mean). What does not make sense is to prefer option 1A over 1B, AND prefer 2B over 2A. It's worth reading the two followup articles before you get into this further: Zut Allais! and Allais Malaise. Welcome to Less Wrong!

comment by drnickbone · 2012-05-01T07:29:10.917Z · LW(p) · GW(p)

This is an old post, but I guess one resolution is that:

U($24,000) > 33/34 U($27,000) + 1/34 U($0 & Regret that I didn't take the $24000)

Which is consistent with:

0.34 U($24,000) + 0.66 U($0) < 0.33 U($27,000) + 0.67 U($0)

It's an interesting psychological fact that the regret is triggered in one case, but not the other.

comment by A1987dM (army1987) · 2012-08-29T22:52:20.272Z · LW(p) · GW(p)

I wonder if this bias is somehow trying to compensate for some other bias. Suppose you think the experimenter is overconfident, i.e., their log-odds are twice as large as they should be; so, when they say 100% they do mean 100%, but when they say 97.1% they actually mean 85.2% (and when they say 34% they mean 41.8%, and when they say 33% they mean 41.2%). Now, Option 1B suddenly looks much uglier, doesn't it? (I'm not claiming this happens consciously.)
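
To make the deflation explicit, here is a minimal sketch of that halve-the-log-odds adjustment (the factor of 2 and the special-casing of 0% and 100% are just the assumptions stated above):

```python
import math

def deflate(p, factor=2):
    """Undo assumed overconfidence by dividing the stated log-odds by `factor`."""
    if p in (0.0, 1.0):
        return p  # infinite log-odds stay put: a stated 100% still means 100%
    odds = math.exp(math.log(p / (1 - p)) / factor)
    return odds / (1 + odds)

for stated in (33 / 34, 0.34, 0.33):
    print(f"stated {stated:.1%} -> deflated {deflate(stated):.1%}")
# stated 97.1% -> deflated 85.2%
# stated 34.0% -> deflated 41.8%
# stated 33.0% -> deflated 41.2%
```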

comment by Flipnash · 2012-10-15T18:58:56.732Z · LW(p) · GW(p)

If flipping the switch before 12:00PM has no effect on the amount of money one acquires, why would one pay anything to do it? Why not just flip the switch only once, after 12:00PM and before 12:05PM?

comment by Elithrion · 2013-03-02T22:25:33.902Z · LW(p) · GW(p)

Question: do the rest of you actually find the choice of 1A clearly intuitive?

I think my intuition for examples like this has been safely killed off, so my replacement intuition instead says: "hm, clearly 34*(27-24) > 27, so 1B!" (without actually evaluating 27-24, just noting it's ≥1). Which mainly suggests that I've grown accustomed to calculating expectations out explicitly where they're obvious, not that I'm necessarily good at avoiding real life analogues of the problem.

Replies from: Martin-2
comment by Martin-2 · 2013-03-07T00:32:33.701Z · LW(p) · GW(p)

do the rest of you actually find the choice of 1A clearly intuitive?

I chose 1B. I seem to be an outlier in that I chose 1B and 2B and did no arithmetic.

Replies from: None
comment by [deleted] · 2015-06-26T18:10:41.512Z · LW(p) · GW(p)

Me too! We're just two greedy people!:)

comment by christopherj · 2013-11-28T15:11:54.889Z · LW(p) · GW(p)

1A. $24,000, with certainty.

1B. 33/34 chance of winning $27,000, and 1/34 chance of winning nothing.

2A. 34% chance of winning $24,000, and 66% chance of winning nothing.

2B. 33% chance of winning $27,000, and 67% chance of winning nothing.

I would choose 1A over 1B, and 2B over 2A, despite the 9.2% better expected payout of 1B and the small increased risk in 2B. If the option was repeatable several times, I'd choose 1B over 1A as well (but switch back to 1A if I lost too many times).

This does not make me susceptible to a money pump or a Dutch book (you're welcome to try, but note that I don't accept trades with negative expected utility). I simply think that my utility function at this time is such that Utility($24,000)>Utility(97% chance $27,000 + 3% chance $0), yet also Utility(34% chance $24,000 + 66% chance $0)<Utility(33% chance $27,000 + 67% chance $0)

I acknowledge that in one case, I trade expected payout for certainty, and in the other, I trade increased risk (not certainty) for expected payout. I'm not sure I see anything wrong with this, unless you're offended that I am willing to pay for certainty. Certainty is valuable in this world of overconfident people, accidents, and cheaters.

Replies from: Vaniver
comment by Vaniver · 2013-11-29T01:20:29.956Z · LW(p) · GW(p)

This does not make me susceptible to a money pump or a Dutch book (you're welcome to try, but note that I don't accept trades with negative expected utility). I simply think that my utility function at this time is such that Utility($24,000)>Utility(97% chance $27,000 + 3% chance $0), yet also Utility(34% chance $24,000 + 66% chance $0)<Utility(33% chance $27,000 + 67% chance $0)

This... means you're vulnerable to the Dutch Book described in the post. Why do you think otherwise?

I'm not sure I see anything wrong with this, unless you're offended that I am willing to pay for certainty.

Basically, this. The point of utility is that it's linear in probability, which disallows a premium for certainty. If I know your utility for $27,000, and your utility for $24,000, and $0, then I can calculate your preferences over any gamble containing those three outcomes. If your decision procedure is not equivalent to a utility function, then there are cases where you can be made worse off even though it looks to you like you're being made better off.

Certainty is valuable in this world of overconfident people, accidents, and cheaters.

Isn't certainty impossible in a world of overconfident people, accidents, and cheaters?

Replies from: christopherj
comment by christopherj · 2013-11-29T15:32:08.947Z · LW(p) · GW(p)

This... means you're vulnerable to the Dutch Book described in the post. Why do you think otherwise?

I'm really not. You mean, "This means that according to my theory you're vulnerable to the Dutch Book described in the post." Like I said though, I'm not accepting trades with negative utility, and being money pumped and Dutch Booked both have negative utility.

As for the "money pump" described in the post, I gain $23,999.98 if it happens as described. Also, there would have been no need to pay the first penny as the state of the switch was not relevant at that time. Also the game was switched from "34% for 24,000 and 33% for 27,000" to "34% chance to play game 1, at which time you may choose"

Basically, this. The point of utility is that it's linear in probability, which disallows a premium for certainty. If I know your utility for $27,000, and your utility for $24,000, and $0, then I can calculate your preferences over any gamble containing those three outcomes. If your decision procedure is not equivalent to a utility function, then there are cases where you can be made worse off even though it looks to you like you're being made better off.

I agree that if you take the probability out of my utility function, then I am directly altering my preference in the exact same situation. Even so, there is in reality at least one difference: if someone is cheating or made a miscalculation, option 1A is cheat-proof and error-proof but none of the other options are. And I've definitely attached utility to that. This aspect would disappear if probabilities were removed from my utility function.

Replies from: Vaniver
comment by Vaniver · 2013-11-29T19:55:29.205Z · LW(p) · GW(p)

Like I said though, I'm not accepting trades with negative utility, and being money pumped and Dutch Booked both have negative utility.

You've expressed that 1A>1B, and 2B>2A. The first deal is "Instead of 2A, I'll give you 2B for a penny." By your stated preference, you agree. The second deal is "Instead of 1B, I'll give you 1A." By your stated preference, you agree. You are now two pennies poorer. So either you do not actually hold those stated preferences, or you are vulnerable to Dutch booking. (What does it mean to actually prefer one gamble to another? That you're willing to pay to trade gambles. Suppose you hate selling things; then your preferences depend on the order you received things, which makes you vulnerable to the order in which other people present you options!)

Also the game was switched from "34% for 24,000 and 33% for 27,000" to "34% chance to play game 1, at which time you may choose"

What is the difference between those two games? The outcome probabilities are the same (multiply them out and check!). Or are you willing to pay hundreds of dollars (in expectation) to have him roll two dice instead of one?

Even so, there is in reality at least one difference: if someone is cheating or made a miscalculation, option 1A is cheat-proof and error-proof but none of the other options are.

But, don't you have some numerical preference for this? If it were a certain 24,000 against a 33/34ths chance of 27 million, I hope you'd pick the latter, even if there's some chance of the die being loaded in the second option. What this suggests, then, is that you need to adjust your probabilities- but if the probabilities are presented to you as your estimate after cheating is taken into account, then it doesn't make sense to double-count the risk of cheating!

(One useful heuristic that people often have when evaluating gambles is imagining the person on the other side of the gamble. If something looks really good on your end and really bad on their end, then this is suspicious- why would they offer you something so bad for them? Keep in mind, though, that gambles are done both against other people and against the environment. If there's gold sitting in the ground underneath you, and you have a 97% chance of successfully extracting it and becoming a millionaire, you shouldn't say "hmm, what's in it for the ground? Why would it offer me this deal?")

Replies from: christopherj
comment by christopherj · 2013-11-30T06:20:50.359Z · LW(p) · GW(p)

You've expressed that 1A>1B, and 2B>2A. The first deal is "Instead of 2A, I'll give you 2B for a penny." By your stated preference, you agree. The second deal is "Instead of 1B, I'll give you 1A." By your stated preference, you agree.

Note that it becomes a different problem this way than my stated preferences (and note again that my stated choices (not preferences) were context-dependent) -- there is the additional information that the dealmaker had a good chance to cheat and didn't take it. This information will reduce my disutility calculation for the uncertainty in the offer, as it increases my odds of winning 1B from [33/34 - good chance of cheating] to [33/34 - small chance of cheating]

You are now two pennies poorer.

Or 23,999.98 dollars richer.

So either you do not actually hold those stated preferences, or you are vulnerable to Dutch booking

If I did hold those preferences, I would not be vulnerable to Dutch booking, nor money pumping. Money pumping is infinite, whereas by giving me two pairs of different choices you can make me choose twice (and it's not a preference reversal, though it would be exactly a preference reversal if you multiply the first choice's odds by 0.34 and pretend that changes nothing).

For me to be vulnerable to Dutch booking, you'd have to somehow get money out of me as well. But how? I can't buy game 1 for less than 24,000 minus the cost of various witnesses if I intend to choose 1A, and you can't sell game 1 for less than 26,200. You'd have an even worse time convincing me to buy game 2. You can't convince me to bid against either of the theoretically superior choices 1B and 2B. If you change my situation I might change my choice, as I already stated several conditions that would cause me to abandon 1A.

What is the difference between those two games?

Option 1A has a 0% chance of undetected cheating. With options 1B, 2A, and 2B, any cheating would go 100% undetected. In Game 3, you can pay to change your default choice twice, and the dealmaker shows a willingness to eliminate his ability to cheat before your second choice.

But, don't you have some numerical preference for this?

Not currently. There would be a lot of factors determining how likely I think a miscalculation or cheating might be, and there is no way to determine this in the abstract.

comment by Jiro · 2013-11-30T04:33:58.682Z · LW(p) · GW(p)

I don't like many of the standard arguments against capital punishment. In particular, I'm tired of the argument "if you just put an innocent person in jail, they might be exonerated later. If you execute an innocent person, and they are exonerated later, it's too late."

Of course, I then point out that people can be exonerated in the time between being convicted and being executed (which can be quite long sometimes), and the response is generally that in the life sentence there's always some chance of being freed due to exoneration while in the capital punishment case, there's a segment of time where there's no chance of being freed.

My response is that a chance X of being freed due to exoneration when sentenced to life in prison is, for some Y, equivalent to having a chance Y of being freed due to exoneration before your execution and zero chance of being freed after being executed. Since there are values of X that are considered acceptable, there are values of Y that must be acceptable too and therefore this argument cannot be used as a basis for an absolutist anti-capital-punishment stance.

I have yet to have anyone understand my response (the few times I've tried it, anyway). But it seems to me that I've stumbled onto something equivalent to the Allais problem. People don't think of "chance X of being freed" and "chance Y of being freed before execution and no chance of being freed after execution" as statements that can ever be equivalent, because they really don't like the certain failure in the last example, even though the two may be mathematically equivalent.

Replies from: hyporational
comment by hyporational · 2013-11-30T13:06:59.857Z · LW(p) · GW(p)

Since there are values of X that are considered acceptable, there are values of Y that must be acceptable too and therefore this argument cannot be used as a basis for an absolutist anti-capital-punishment stance.

I agree.

Have you considered that life in prison has more value than being dead? Also, why compare capital punishment to life sentences? What if there were no life sentences? Of course you can still die in prison for whatever that's worth, but the chance is significantly smaller.

Replies from: Jiro
comment by Jiro · 2013-11-30T16:48:38.758Z · LW(p) · GW(p)

Have you considered that life in prison has more value than being dead?

I didn't post that because it was about capital punishment, I posted it because I thought this particular anti-capital punishment argument was relevant to the Allais problem. I don't see how life in prison being more valuable than being dead is relevant to the Allais problem.

What if there were no life sentences? Of course you can still die in prison for whatever that's worth, but the chance is significantly smaller.

Insofar as that's relevant, it just changes the values of X and Y; the absolutist "we can't do it because an innocent may be exonerated only after he is killed" position still has the same flaw.

Replies from: hyporational
comment by hyporational · 2013-11-30T17:16:46.304Z · LW(p) · GW(p)

Ok, good to know you weren't trying to sneak in politics. I agree it's not relevant.

Insofar as that's relevant, it just changes the values of X and Y; the absolutist "we can't do it because an innocent may be exonerated only after he is killed" position still has the same flaw.

Yes, if we're strictly logical this is true.

comment by Quill_McGee · 2014-04-14T01:20:46.060Z · LW(p) · GW(p)

My resolution to this, without changing my intuitions to pick things that I currently perceive as 'simply wrong', would be that I value certainty. A 9/10 chance of winning x dollars is worth much less to me than a 10/10 chance of winning 9x/10 dollars. However, a 2/10 chance of winning x dollars is worth only barely less than a 4/10 chance of winning x/2 dollars, because as far as I can tell the added utility of the lack of worrying increases massively as the more certain option approaches 100%. Now, this becomes less powerful the closer the odds are, but more slowly than the dollar difference between the two changes. So a 99% chance of x is barely affected by this compared to a 100% chance of .99x, but still by a greater value than .01x, and the more likely option still dominates. I might take a 99% chance of x over a 100% chance of .9x, however, and I would definitely prefer a 99% chance of x over a 100% chance of 0.8x.

EDIT: Upon further consideration, this is wrong. If presented with the actual choice, I would still prefer 1A to 1B, but to maintain consistency I will now choose 2A > 2B.

comment by [deleted] · 2015-03-16T17:22:39.235Z · LW(p) · GW(p)

I don't really see how my choosing 1A > 1B and 2B > 2A is a flaw of mine. First of all, my utility function, which I have inherited from millions of years of evolution, tells me to SOMETIMES take risks IF I CAN AFFORD IT, especially when the increased stake outweighs the increased risk.

This is how I see it: If it were my life at stake, I would of course try to raise the odds. But this is extra money. I don't even starve if I don't get the money.

If I am not certain I can get the money in case 2, I think that lowering my win-chance by 1/100 is worth it to raise the stake by 3000 dollars, which is 3000/24000 = 1/8 of the original stake. When I lower my odds by 1%, I raise the stake by 12.5%.

Since the outcome is random anyhow, AND not in my favor, and the risk increase is only 1/100, I take my chances.

comment by [deleted] · 2015-06-26T12:24:28.044Z · LW(p) · GW(p)

The Allais "Paradox" and Scam Vulnerability by Karl Hammer is a much needed update for anyone who reads the OP.

comment by Epictetus · 2015-08-18T14:30:34.037Z · LW(p) · GW(p)

Would I pay $24k to play a game where I had a 33/34 probability of winning an extra $3k? Let's consult our good friend the Kelly Criterion.

We have a bet that pays 1/8:1 with a 33/34 probability of winning, so Kelly suggests staking ~73.5% of my bankroll on the bet. This means I'd have to have an extra ~$8.7k I'm willing to gamble with in order to choose 1b. If I'm risk-averse and prefer a fractional Kelly scheme, I'd need an extra ~$20k for a three-fourths Kelly bet and an extra ~$41k for a one-half Kelly bet. Since I don't have that kind of money lying around, I choose 1a.
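
For reference, a minimal sketch of that Kelly arithmetic (same assumptions as above: win probability 33/34, net odds of 1/8 : 1, and a $24k stake):

```python
def kelly_fraction(p, b):
    """Kelly stake as a fraction of bankroll, for win probability p and net odds b:1."""
    return (b * p - (1 - p)) / b

p = 33 / 34          # probability the gamble pays off
b = 3_000 / 24_000   # risking $24k to win an extra $3k pays 1/8 : 1
stake = 24_000

for label, mult in [("full", 1.0), ("3/4", 0.75), ("1/2", 0.5)]:
    f = mult * kelly_fraction(p, b)
    bankroll = stake / f
    print(f"{label} Kelly: stake {f:.1%} of bankroll -> "
          f"need ${bankroll:,.0f} total, ${bankroll - stake:,.0f} beyond the $24k")
# full Kelly: stake 73.5% -> need $32,640 total, $8,640 beyond the $24k
# 3/4 Kelly:  stake 55.1% -> need $43,520 total, $19,520 beyond the $24k
# 1/2 Kelly:  stake 36.8% -> need $65,280 total, $41,280 beyond the $24k
```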

In case 2, we come across the interesting question of how to analyze the costs and benefits of trading 2a for 2b. In other words, if I had a voucher to play 2a, when would I be willing to trade it for a voucher to play 2b? Unfortunately, I'm not experienced with such analyses. Qualitatively, it appears that if money is tight then one would prefer 2a for the greater chance of winning, while someone with a bigger bankroll would want the better returns on 2b. So, there's some amount of wealth where you begin to prefer 2b over 2a. I don't find it obvious that this should be the same as the boundary between 1a and 1b.

This is a problem because the 2s are equal to a one-third chance of playing the 1s. That is, 2A is equivalent to playing gamble 1A with 34% probability, and 2B is equivalent to playing 1B with 34% probability.

Equivalence is tricky business. If we look at the winnings distribution over several trials, the 1s look very different from the 2s and it's not just a matter of scale. The distributions corresponding to the 2s are much more diffuse.

Surely, the certainty of having $24,000 should count for something. You can feel the difference, right? The solid reassurance?

A certain bet has zero volatility. Since much of the theory of gambling has to do with managing volatility, I'd say certainty counts for a lot.

comment by Starglow · 2015-10-22T16:59:53.391Z · LW(p) · GW(p)

Forgive me if I'm misunderstanding something, but the way I see it, if I choose 1A, it means that I am willing to forgo (i.e. pay) $3,000 for an additional 1/34 ~ 3% chance of getting money. Then if I choose 2B, it means I am unwilling to forgo an additional $3,000 in exchange for an additional 1% chance of getting money. So what I learn from this is that the value I assign an extra percentage chance of getting money is somewhere between $1,000 and $3,000.

comment by lolbifrons · 2016-08-18T00:23:34.396Z · LW(p) · GW(p)

So here's why I prefer 1A and 2B after doing the math, and what that math is.

1A = 24000
1B = 26206 (rounded)
2A = 8160
2B = 8910

Now, if you take (iB-iA)/iA, which represents the percent increase in the expected value of iB over iA, you get the same number, as you stated.

(iB-iA)/iA = .0919 (rounded)

This number's reciprocal represents the number of times greater the expected value of iA is than the marginal expected value of iB

iA/(iB-iA) = 10.88 (not rounded)

Now, take this number and divide it by the quantity p(iA wins)-p(iB wins). This represents how much you have to value the first $24000 you receive over the next $3000 to pick iA over iB. Keep in mind that 24/3 = 8, so if $1 = 1 utilon in all cases, you should pick iA only when this quotient is less than 8.

1A/(1B-1A)/[p(1A wins)-p(1B wins)] = 369.92
2A/(2B-2A)/[p(2A wins)-p(2B wins)] = 1088

I have liabilities in excess of my assets of around $15000. That first $15000 is very important to me in a very quantized, thresholdy way, but it is not absolute. I can make the money some other way, but not needing to - having it available to me right now because of this game - represents more utility than a linear mapping of dollars to utility suggests, by a large factor.

The next threshold like this in my life that I can think of is "enough money to buy a house in Los Angeles without taking out a mortgage," of which $3000 is a negligible portion.

I'd say that the utility I assign the first $24000 because of this lies between 370 and 1080 times the utility I assign the next $3000. This is why I take 1A and 2B given that this entire thing is performed only once. Once my debts are paid, all bets (on 1A) are off.

If we're dealing with utilons rather than dollars, or I have repeated opportunity to play (which is necessary for you to "money pump" me) iB is the obvious choice in both cases.

comment by xSciFix · 2019-07-08T17:59:59.646Z · LW(p) · GW(p)

Assuming this is a one off and not a repeated iteration;

I'd take 1A because I'd be *really* upset if I lost out on $27k due to being greedy and not taking the sure $24k. That 1/34 is a small risk but to me it isn't worth taking - the $24k is too important for me to lose out on.

I'd take 2B instead of 2A because the difference in odds is basically negligible so why not go for the extra $3k? I have ~2/3rds chance to walk away with nothing either way.

I don't really see the paradox there. The point is to win, yes? If I play game 1 and pick B and hit that 1/34 chance of loss and walk away with nothing I'll be feeling pretty stupid.

Let's say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference.  The switch starts in state A.  Before 12:00PM, you pay me a penny to throw the switch to B.  The die comes up 12.  After 12:00PM and before 12:05PM, you pay me a penny to throw the switch to A.

But why would I pay to switch it back to A when I've already won given the conditions of B? And as Doug_S. mentions, you can take my pennies if I'm getting paid out tens of thousands of dollars.

I do see the point in it being difficult to program this type of decision making, though.

comment by Дмитрий Зеленский (dmitrii-zelenskii) · 2019-08-19T00:18:36.720Z · LW(p) · GW(p)

Oh, here I come again; I've already commented in similar fashion elsewhere, and several people said the same here: nothing vs. non-nothing as a binary switch may work better if the situation is not repeated enough to "add up to normality" but only played once. One can argue that each repeat may feel like a one-shot, but, being creatures gifted with memory, we can notice that we keep encountering such situations and modify our behaviour.

comment by Robert Williams (robert-williams) · 2020-05-21T18:11:33.529Z · LW(p) · GW(p)

I would set up an insurance company that pays people $24,500 to pick 1B and keeps their winnings if they win. They get slightly more risk-free money and I profit massively. Isn't that the whole point of insurance?

comment by fiddler · 2020-06-26T06:49:11.665Z · LW(p) · GW(p)

I think this might just be a rephrasal of what several other commenters have said, but I found this conception somewhat helpful.

Based on intuitive modeling of this scenario and several others like it, I found that I ran into the expected “paradox” in the original statement of the problem, but not in the statement where you roll one die to determine the 1/3 chance of me being offered the wager, and then the original wager. I suspect that the reason why is something like this:

Losing 1B is a uniquely bad outcome, worse than its monetary utility would imply, because it means that I blame myself for not getting the $24k on top of receiving $0. (It seems fairly accepted that the chance of getting money in a counterfactual scenario may have a higher expected utility than getting $0, but the actual outcome of getting $0 in this scenario is slightly utility-negative.)

Now, it may appear that this same logic should apply to the 1% chance of losing 2B in a scenario where the counterfactual-me in 2A receives 24000 dollars. However, based on self-examination, I think this is the fundamental root of the seeming paradox: not an issue of the value of certainty, but an issue of confusing counterfactual with future scenarios. While in the situation where I lose 1B, switching would be guaranteed to prevent that utility loss in either a counterfactual or future scenario, in the case of 2B, switching would only be guaranteed to prevent utility loss in the counterfactual, while in the future scenario, it probably wouldn’t make a difference in outcome, suggesting an implicit substitution of the future as counterfactual. I think this phenomenon is behind other commenters’ preference changes if this is an iterated vs one-shot game: by making it an iterated game, you get to make an implicit conversion back to counterfactual comparisons through law of large numbers-type effects.

I only have anecdotal evidence for this substitution existing, but I think the inner shame and visceral reaction of “that’s silly” that I feel when wishing I had made a different strategic choice after seeing the results of randomness in boardgames is likely the same thought process.

I think that this lets you dodge a lot of the utility issues around this problem, because it provides a reason to attach greater negative utility to losing 1B than 2B without having to do silly things like attach utility to outcomes: if you view how much you regret not switching back through a future paradigm, switching in 1B is literally certain to prevent your negative utility, whereas switching in 2B probably won’t do anything. Note that this technically makes the money pump rational behavior, if you incorporate regret into your utility function: after 12:00, you’d like to maximize money, and have a relatively low regret cost, but after 12:05, the risk of regret is far higher, so you should take 1A.

I’d be really interested to see whether this expeirement played out differently if you were allowed to see the number on the die, or everything but the final outcome was hidden.

comment by Ian Televan · 2021-04-09T23:00:20.388Z · LW(p) · GW(p)

It seems that the mistake that people commit is imagining that the second scenario is a choice between 0.34*24000 = 8160 and 0.33*27000 = 8910. Yes, if that were the case, then you could imagine a utility function that is approximately linear in the region 8160 to 8910, but sufficiently concave in the region 24000 to 27000 s.t. the difference between 8160 and 8910 feels greater than between 24000 and 27000... But that's not the actual scenario with which we are presented. We don't actually get to see 8160 or 8910. The slopes of the utility function in the first and second scenarios are identical.

"Oh, these silly economists are back at it again, asserting that my utility function ought to be linear, lest I'm irrational. Ugh, how annoying! I have to explain again, for the n-th time, that my function actually changes the slope in such a way that my intuitions make sense. So there!" <- No, that's not what they're saying! If you actually think this through carefully enough, you'll realize that there is no monotonically increasing utility function, no matter the shape, that justifies 1A > 1B and 2A < 2B simultaneously. 

comment by Vasco Grilo (vascoamaralgrilo) · 2022-04-18T08:47:26.095Z · LW(p) · GW(p)

"These two equations are algebraically inconsistent". Yes, combining them results into "0 < 0", which is false.

comment by ViktoriaMalyasova · 2022-10-30T18:57:02.060Z · LW(p) · GW(p)

It seems that the axiom of independence doesn't always hold for instrumental goals when you are playing a game.

Suppose you are playing a zero-sum game against Omega who can predict your move - either it has read your source code, or played enough games with you to predict you, including any pseudorandom number generator you have. You can make moves a or b, Omega can make moves c or d, and your payoff matrix is:
    c   d
a   0   4
b   4   1

U(a) = 0, U(b) = 1.

Now suppose we have a fair coin that Omega cannot predict, and we mix a 0.5 probability of b into each option:

U(0.5 a + 0.5 b) = min(0.5*0  + 0.5*4, 0.5*4 + 0.5*1) = 2

U(0.5 b + 0.5 b) = U(b) = 1
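
A minimal sketch of that maximin calculation (same payoff matrix and mixing as above; Omega best-responds to the mixed strategy but cannot predict the coin):

```python
# Payoffs to the row player; Omega picks the column to minimize them.
payoff = {("a", "c"): 0, ("a", "d"): 4,
          ("b", "c"): 4, ("b", "d"): 1}

def maximin_value(mix):
    """Worst-case expected payoff of a mixed row strategy {move: probability}."""
    return min(sum(p * payoff[(move, col)] for move, p in mix.items())
               for col in ("c", "d"))

print(maximin_value({"a": 1.0}))            # 0   = U(a)
print(maximin_value({"b": 1.0}))            # 1   = U(b)
print(maximin_value({"a": 0.5, "b": 0.5}))  # 2.0 = U(0.5 a + 0.5 b)
```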

The preferences are reversed. However, the money-pumping doesn't work:

I have my policy switch at b. You offer to throw a fair coin and switch it to a if the coin comes up heads, for a cost of 0.1 utility. I say yes. You throw the coin, it comes up heads, you switch to a, and offer to switch it back to b for 0.1 utility. I say no thanks.

You could say that the mistake here was measuring utilities of policies. Outcomes have utility and policies only have expected utility. VNM axioms need not hold for policies. But money is not a terminal goal! Getting money is just a policy in the game of competing for scarce resources.

I wonder if there is a way to tell if someone's preferences over policies are irrational, without knowing the game or outcomes.

Replies from: andrew-jacob-sauer
comment by Andrew Jacob Sauer (andrew-jacob-sauer) · 2022-10-30T21:15:01.510Z · LW(p) · GW(p)

In this case the only reason the money pumping doesn't work is because Omega is unable to choose its policy based on its prediction of your second decision: If it could, you would want to switch back to b, because if you chose a, Omega would know that and you'd get 0 payoff. This makes the situation after the coinflip different from the original problem where Omega is able to see your decision and make its decision based on that.

In the Allais problem as stated, there's no particular reason why the situation where you get to choose between $24,000, or $27,000 with 33/34 chance, differs depending on whether someone just offered it to you, or if they offered it to you only after you got <=34 on a d100.

Replies from: ViktoriaMalyasova
comment by ViktoriaMalyasova · 2022-10-30T22:58:27.875Z · LW(p) · GW(p)

Well, Omega doesn't know which way the coin landed, but it does know that my policy is to choose a if the coin landed heads and b if the coin landed tails. I agree that the situation is different, because Omega's state of knowledge is different, and that stops money pumping. 

It's just interesting that breaking the independence axiom does not lead to money pumping in this case. What if it doesn't lead to money pumping in other cases too?

comment by [deleted] · 2020-10-12T14:18:25.542Z · LW(p) · GW(p)