Zeckhauser's roulette

post by cousin_it · 2012-01-19T19:22:50.716Z · LW · GW · Legacy · 48 comments

Imagine you're playing Russian roulette. Case 1: a six-shooter contains four bullets, and you're asked how much you'll pay to remove one of them. Case 2: a six-shooter contains two bullets, and you're asked how much you'll pay to remove both of them. Steven Landsburg describes an argument by Richard Zeckhauser and Richard Jeffrey saying you should pay the same amount in both cases, provided that you don't have heirs and all your remaining money magically disappears when you die. What do you think?

comment by DanielVarga · 2012-01-19T23:36:48.273Z · LW(p) · GW(p)

This thing rhymes with the Allais paradox, and my loss-aversion-based resolution also applies here. In Case 2, I imagine myself as the dumb cheapskate who could have avoided death if only he had been a bit more generous. In Case 1, I have to face death either way, and I see myself as a victim, not as a bargainer. And this is perfectly rational, because this is exactly how people will later think about my situation.

Basically, the extra term in my utility function that resolves the paradox is the same that makes me prefer to die in an accident that's someone else's fault, as opposed to my fault.

By the way, I would pay all the money I have, in both cases, so I guess there is something wrong with this exact formulation of the question.

comment by Scott Alexander (Yvain) · 2012-01-22T03:01:32.911Z · LW(p) · GW(p)

Yes. Suppose you had a gun with one trillion chambers. Chambers 1 and 2 are empty, chamber 3 contains a "negotiable" bullet, and chambers 4 through one trillion all have non-negotiable bullets. How much would you pay to remove the bullet from chamber 3?

Your first reaction is: forget it, it doesn't matter, you're almost certainly going to die anyway. Your second reaction is: eh, what the heck, spend all your money on it; you're going to die anyway and you can't take it with you.

The only thing preventing you from spending all your money on bullet 3 is the tiny chance that the gun will land on chamber 1 or 2, in which case you'll regret having to live the rest of your life in poverty when you would have survived anyway. So although you don't much care either way, being overwhelmingly likely to die regardless, you balance your tiny chance of landing on chamber 3 and regretting you didn't buy the bullet off against your tiny chance of landing on chamber 1 or 2 and regretting that you did.

This calculation remains the same whether we decrease the non-negotiable chambers to 0 or increase them to 3^^^3.
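
For concreteness, here's that invariance as a few lines of Python. This is a sketch under toy assumptions of my own (utility = wealth + V if you live, 0 if you die, money worthless when dead), not anything from the post itself:

    def indifference_price(empty, nonneg, wealth=100.0, V=1000.0):
        # Chambers: `empty` safe ones, 1 negotiable bullet, `nonneg` non-negotiable ones.
        # Solve (empty+1)/n * (wealth - X + V) == empty/n * (wealth + V) for X:
        # the 1/n cancels (n = empty + 1 + nonneg), so X is independent of nonneg.
        return (wealth + V) / (empty + 1)

    print({k: indifference_price(2, k) for k in (0, 3, 10**12)})
    # -> the same price whether there are 0, 3, or a trillion non-negotiable bullets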

comment by prase · 2012-01-20T12:08:19.669Z · LW(p) · GW(p)

Let's say you pay x in the first scenario and y in the second, and normalise the utilities so that U(death) = -1 and U(alive with your current wealth) = U(0) = 0. Then case 1 yields expected utility -2/3 (don't pay) or -1/2 + (1/2) U(-x) (pay), and case 2 yields -1/3 (don't pay) or U(-y) (pay). On the assumption that utilities are linear in money, U(z) = qz, a rational agent's indifference prices solve the linear equations

  1. -2/3 = -1/2 - qx/2, so x = 1/(3q)
  2. -1/3 = -qy, so y = 1/(3q)

Thus standard expected-utility reasoning says you should pay the same amount in both cases, and the linked argument is correct.

The incorrect argument relies on a slightly different calculation: it includes the negative utility from lost money even in the case where you die. In that case (assuming the disutility of lost money simply adds to the disutility of death), we get

  1. -2/3 = -1/2 - qx, so x = 1/(6q)
  2. -1/3 = -qy, so y = 1/(3q)

i.e. paying twice as much in case 2. The problem is that counting monetary loss after death as additional disutility contradicts the explicit assumptions (no heirs, and all money disappearing at your death) and a reasonable implicit assumption (you have no chance to spend money between learning that you are playing the roulette and learning the outcome).

Edit: the assumption of linear utilities is not necessary for the conclusion that you should pay the same.
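
For concreteness, the same calculations as a few lines of Python (q is an arbitrary marginal utility of money I'm supplying; the numbers only make the comparison explicit):

    q = 0.01
    # Correct accounting: lost money only counts in the branch where you survive.
    x = 1 / (3 * q)      # solves -2/3 = -1/2 - q*x/2
    y = 1 / (3 * q)      # solves -1/3 = -q*y
    # Incorrect accounting: lost money is also counted in the death branch.
    x_bad = 1 / (6 * q)  # solves -2/3 = -1/2 - q*x
    print(x, y, x_bad)   # x == y; the flawed version pays half as much in case 1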

comment by DanielVarga · 2012-01-20T00:39:27.366Z · LW(p) · GW(p)

There is a clever reformulation by a Michael Dekker at Landsburg's blog:

You’re on a game show. You can’t leave with negative money. There are 3 doors, 2 with $6,000 cash, 1 with a goat (worth nothing…). You can pick a door, but first you can offer the host $x from your winnings (currently $0) to replace the goat with the prize. What do you offer?

Same situation, 6 doors, 2 prizes, 4 goats. What do you offer from your winnings to replace 1 goat with a prize?

Note that you only pay if you win. It's very clever, but slightly incomplete. When he wrote "what do you offer?", he really meant "at what amount $x are you indifferent between offering and not offering the money?". Is there a way to fix this annoyance and really formulate the question as "what do you offer?"?

Of course, there are many plausible utility functions that make it cease to be an equivalent reformulation. For example, if you don't like giving money to murderers and kidnappers. Or the kind of loss aversion that I discussed.

Replies from: cousin_it
comment by cousin_it · 2012-01-20T00:49:33.181Z · LW(p) · GW(p)

In this reformulation it feels obvious to me that I should pay the same amount in both cases. But it's not obvious to me that the reformulation is equivalent to the original problem, because dying is not necessarily the same as surviving but losing all your utility, if some of the utility is due to experiences you can only get when you're alive.

Replies from: DanielVarga
comment by DanielVarga · 2012-01-20T02:13:25.192Z · LW(p) · GW(p)

I agree. But am I wrong to think that your exchange with steven0461 already cleared this up as much as is possible?

By the way, it is amusing that Michael Dekker got his version by getting rid of the blood in Zeckhauser's version, and Zeckhauser got his version by adding some blood to Allais' version.

comment by timtyler · 2012-01-19T20:38:09.153Z · LW(p) · GW(p)

You don't have heirs and all your remaining money magically disappears when you die?!?

If you assume weird things then human intuition doesn't always work so well.

comment by Manfred · 2012-01-19T19:54:19.096Z · LW(p) · GW(p)

Okay, so "russian roulette" meaning the gun is held to your head and the trigger is pulled once.

And "pay" means in terms of candy bars (utility that's only realized if you live), not malaria victims (utility that gets realized even if you're shot).

Okay, sure. Seems reasonable. I think the intuitiveness has a lot to do with the phrasing.

Replies from: cousin_it
comment by cousin_it · 2012-01-19T22:02:38.464Z · LW(p) · GW(p)

I'm still not sure that the reasoning is correct. It may depend on your life goals. For example, if your only goal in life is saving weasels from avalanches, which requires you to be alive but doesn't require any money, then case 2 lets you save 2x more future weasels than case 1, so I guess you'd pay more. On the other hand, if your utility function doesn't mention any weasels and you care only about candy bars eaten by the surviving version of you, then I'm not sure why you'd want to pay to survive at all. In either case Landsburg's conclusion seems to be wrong. Or am I missing something?

Replies from: steven0461
comment by steven0461 · 2012-01-19T23:23:40.372Z · LW(p) · GW(p)

In the former case you'd pay infinity (or all you have) either way. In the latter case you'd pay zero either way. I don't see how that contradicts Landsburg.

Replies from: cousin_it
comment by cousin_it · 2012-01-19T23:54:14.383Z · LW(p) · GW(p)

You're right and I'm being stupid, thanks. But what if you value both weasels (proportional to the probability of survival) and candy bars (proportional to remaining money in case of survival)? Then each bullet destroys a fixed number of weasels and no candy bars, so you should pay 2x more candy bars to remove two bullets instead of one, no?

Replies from: steven0461
comment by steven0461 · 2012-01-20T00:16:42.205Z · LW(p) · GW(p)

The bullet does destroy candy bars. Unless you're introducing some sort of quantum suicide assumption, where you average only over surviving future selves? I suppose then you're correct: the argument cited by Landsburg fails, because it must be assuming somewhere that your utility function is a probability-weighted sum over future worlds.

Replies from: cousin_it
comment by cousin_it · 2012-01-20T00:36:19.513Z · LW(p) · GW(p)

You're right again, thanks again :-) I was indeed using a sort of quantum suicide assumption because I don't understand why I should care about losing candy bars in the worlds where I'm dead. In such worlds it makes more sense to care only about external goals like saving weasels, or not getting your relatives upset over your premature quantum suicide, etc.

Replies from: steven0461
comment by steven0461 · 2012-01-20T00:40:00.386Z · LW(p) · GW(p)

Specifically, I think the middle part of the argument would fail, because you'd go, "eh, if they're executing half of my future selves, I can only save half the weasels at a given cost in average candy bars, so I'll spend more of the money on candy bars".

comment by FAWS · 2012-01-21T03:57:36.086Z · LW(p) · GW(p)

Obviously correct under the deeply weird assumptions made. In both cases the probability of survival is increased by 50%, and the value of your money is proportional to your baseline survival chance, so both sides of the equation are changed by the same factor.

comment by wuthefwasthat · 2012-01-20T02:38:25.365Z · LW(p) · GW(p)

I agree that you should pay the same amount.

It feels as though you should be willing to pay twice as much in case 2, since you remove twice as much "death mass". At this point, one might be confused by the apparent contradiction. Some are chalking it up to intuition being wrong (and the problem being ill-posed), and others are rejecting the argument. But both seem clearly correct to me. And the resolution is simple: notice that your money is worth half as much in case 1, since you are living half as often!

comment by GuySrinivasan · 2012-01-20T00:05:23.754Z · LW(p) · GW(p)

  • if you just set up the lottery equivalences, they simplify to U(life - $X) = U(life - $Y), pretty much implying that whatever the correct choices of X and Y happen to be, they're equal.
  • the setup is very dependent on the exact probabilities: if you change the cases to 10-shooters where you can remove one of five bullets or both of two bullets, there's no proof that the amounts you should pay are equal.

comment by DanielLC · 2012-01-19T23:40:49.711Z · LW(p) · GW(p)

provided that you don't have heirs and all your remaining money magically disappears when you die. What do you think?

Just have it so you only have to pay if you survive. That makes it less confusing.

Case 1: You have a 50% chance of paying $x to give a 16.7% increase in the probability of surviving. Case 2: You have a 100% chance of paying $x to give a 33.3% increase in the probability of surviving.

In either case, the amount you pay would be 1/3 the value of your life.
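
A quick numeric check of that claim, with an assumed dollar value of life and linear utility (both my additions):

    L = 900.0  # assumed dollar value of your life
    # Case 1: you pay x only if you survive (probability 1/2), buying +1/6 survival.
    x = (1 / 6) * L / (1 / 2)  # break-even: (1/2)*x == (1/6)*L
    # Case 2: you pay y with certainty, buying +2/6 survival.
    y = (2 / 6) * L            # break-even: y == (2/6)*L
    print(x, y)                # both 300.0, i.e. L/3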

comment by Richard_Kennaway · 2012-01-20T09:45:30.249Z · LW(p) · GW(p)

Suppose the utility of living is 1 and of dying is 0. (Since these are the only two possible outcomes, it doesn't matter what value you take, as long as dead < alive.) In case 1 you're purchasing 1/6 of a utilon and in case 2 1/3, therefore (assuming linear value of money to simplify things) you should pay twice as much in the second case.

The cited argument goes wrong so obviously that at first I had difficulty understanding how it could be seriously put forward, in their comparison between case B (pay to remove the one bullet from a three-shooter) and case C (half a chance of execution and half a chance of case B). In case B you're buying 1/3 of a utilon, in case C 1/6, hence pay twice as much in case B.

So their cases B and C don't work as an intuition pump for me. I think the point is that in case C you should only consider the utility of the branch in which you are not executed, since if you are executed you have no use for the money anyway; paying before the chance of execution is then equivalent to being offered the choice of paying after escaping execution. But I think we're stepping outside standard decision theory in considering the agent's mortality. Quantum suicide, anyone? If I leave that consideration aside, then clearly a certainty of something is worth twice a 50% chance of it, which is one of the basic assumptions of the utility theorem, and B is worth twice C.

BTW, here is an original paper by Jeffrey on the problem, but it's paywalled and I can't get to it.

Note that "Zeckhauser's problem" sometimes refers to a different version, with just one bullet in case 2.

ETA: The different version is here, page 11, and differs in case 2, which has just one bullet in the six-shooter, and you can pay to remove it. The author there says that the two cases have the same value, because in both cases you are removing a 1/6 chance of dying. But if he is right, and Jeffrey is right about the original problem, then both versions of case 2 have the same value: you should pay the same amount to remove all the bullets whether there is one or two. It sounds like there's a divide-by-zero error somewhere, because there seems to be a straight road from there to proving that you should pay exactly the same amount to avoid any non-zero chance of death.
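
To make the tension concrete, here is a sketch (mine, using the standard account in which death has fixed utility 0 and money is worthless to the dead) of the maximum payments in the three cases:

    L = 1.0  # utility of living with your current wealth; death = 0

    def max_payment(p_before, p_after):
        # Pay X iff p_after * (L - X) >= p_before * L.
        return L * (1 - p_before / p_after)

    print(max_payment(2/6, 3/6))  # OP case 1: remove one of four bullets -> 1/3
    print(max_payment(4/6, 6/6))  # OP case 2: remove both of two bullets -> 1/3
    print(max_payment(5/6, 6/6))  # other version: remove the lone bullet -> 1/6

On this account the lone-bullet case is worth only half as much, so the two "same value" claims do seem to be in tension.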

ETA2: In Dekker's game-show version referenced elsewhere in the comments, the same amount should be paid in both cases, but that's because you're not paying with money you have; you pay only out of your winnings. In the second case you're getting half the benefit, but you're also only paying half the time, so the numbers come out the same. I can see the correspondence with the Russian roulette version, but unintuitive hypotheses (in this case, about the lack of value of anything to you when you're dead) make for unintuitive intuition pumps.

Replies from: Tyrrell_McAllister, Quinn
comment by Tyrrell_McAllister · 2012-01-20T22:45:06.055Z · LW(p) · GW(p)

The cited argument goes wrong, so obviously that at first I had difficulty in understanding how it could be seriously put forward, in their comparision between case B (pay to remove the one bullet from a 3-shooter) and case C (half a chance of execution and half a chance of case B). In case B you're buying 1/3 of a utilon, in case C 1/6, hence pay twice as much in case B.

Cases B and C are equivalent according to standard decision theory.

Let L be the difference in utility between living-and-not-paying and dying. Let the difference in utility between living-and-paying and living-and-not-paying be X. Assume that you have no control over what happens if you die, so that the utility of dying is the same no matter what you decided to do. Normalize so that the utility of dying is 0.

In Case B, the expected utility of not-paying is (2/3)L + (1/3)(0) = (2/3)L. The expected utility of paying is L − X. Thus, you agree to pay if and only if (2/3)L ≤ L − X. That is, you pay if and only if X ≤ (1/3)L.

In Case C, the expected utility of not-paying is (1/2)(0) + (1/2)(2/3)L = (1/3)L. The expected utility of paying is (1/2)(0) + (1/2)(L − X) = (1/2)(L − X). Thus, you agree to pay if and only if (1/3)L ≤ (1/2)(L − X). That is, you pay if and only if X ≤ (1/3)L.

Thus, in both cases, you will agree to pay the same amounts.
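
A brute-force restatement of this, normalizing L = 1 (a sketch, not from the comment above):

    L = 1.0
    xs = [i / 1000 for i in range(1001)]
    # Case B: pay X and live with utility L - X, versus a 2/3 chance of L.
    b = max(x for x in xs if L - x >= (2 / 3) * L)
    # Case C: both sides of the comparison are halved by the execution branch.
    c = max(x for x in xs if 0.5 * (L - x) >= 0.5 * (2 / 3) * L)
    print(b, c)  # both 0.333: the same maximum payment in B and C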

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-01-21T22:01:53.255Z · LW(p) · GW(p)

I understand the argument, but it absolutely depends on your estate being of no value to you when you're dead. OK, that's simply one of the rules given in the problem, but in reality people generally care very much about the posthumous disposal of their assets. The game-show version makes the problem much clearer, because one can very easily imagine a game show run exactly according to those rules.

But then, by making it much clearer, the paradox is reduced: it is easy to understand the equality of the two cases, even if one's first guess was wrong.

comment by Quinn · 2012-01-20T22:01:36.682Z · LW(p) · GW(p)

I also reject the claim that C and B are equivalent (unless the utility of survival is 0, +infinity, or -infinity). If I accepted their line of argument, then I would also have to answer the following set of questions with a single answer.

Question E: Given that you're playing Russian Roulette with a full 100-shooter, how much would you pay to remove all 100 of the bullets?

Question F: Given that you're playing Russian Roulette with a full 1-shooter, how much would you pay to remove the bullet?

Question G: With 99% certainty, you will be executed. With 1% certainty you will be forced to play Russian Roulette with a full 1-shooter. How much would you pay to remove the bullet?

Question H: Given that you're playing Russian Roulette with a full 100-shooter, how much would you pay to remove one of the bullets?

Replies from: wuthefwasthat, Tyrrell_McAllister
comment by wuthefwasthat · 2012-01-22T03:50:01.062Z · LW(p) · GW(p)

You reject the claim, but can you point out a flaw in their argument?

I claim that the answers to E, F, and G should indeed be the same, but H is not equivalent to them. This should be intuitive. Their line of argument does not claim H is equivalent to E/F/G - do the math out and you'll see.

Replies from: Quinn
comment by Quinn · 2012-01-22T06:20:07.711Z · LW(p) · GW(p)

Actually my revised opinion, as expressed in my reply to Tyrrell_McAllister, is that the authors' analysis is correct given the highly unlikely set-up. In a more realistic scenario, I accept the equivalences A~B and C~D, but not B~C.

I claim that the answers to E, F, and G should indeed be the same, but H is not equivalent to them. This should be intuitive. Their line of argument does not claim H is equivalent to E/F/G - do the math out and you'll see.

I really don't know what you have in mind here. Do you also claim that cases A, B, C are equivalent to each other but not to D?

Replies from: wuthefwasthat
comment by wuthefwasthat · 2012-01-22T12:16:17.290Z · LW(p) · GW(p)

Oops, sorry! I misread. My bad. I would agree that they are all equivalent.

comment by Tyrrell_McAllister · 2012-01-20T22:48:08.994Z · LW(p) · GW(p)

I also reject the claim that C and B are equivalent (unless the utility of survival is 0, +infinity, or -infinity).

What do you make of my argument here?

Replies from: Quinn
comment by Quinn · 2012-01-21T01:10:18.336Z · LW(p) · GW(p)

After further reflection, I want to say that the problem is wrong (and several other commenters have said something similar): the premise that your money buys you no expected utility post mortem is generally incompatible with your survival having finite positive utility.

Your calculation is of course correct insofar as it stays within the scope of the problem. But note that it goes through exactly the same way for my cases E through H: in each of them you'll end up paying iff X ≤ L, and thus you'll pay the same amount to remove just 1 bullet from a full 100-shooter as to remove all 100 of them.
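
Spelled out as a sketch (the helper and its parameters are mine; same normalization as above, with die = 0 and live-and-pay = L − X):

    L = 1.0

    def willing_to_pay(p_exec, bullets_before, bullets_after, chambers):
        # Survive iff not executed and an empty chamber comes up.
        p_live_not = (1 - p_exec) * (1 - bullets_before / chambers)
        p_live_pay = (1 - p_exec) * (1 - bullets_after / chambers)
        # Pay X iff p_live_pay * (L - X) >= p_live_not * L.
        return L * (1 - p_live_not / p_live_pay)

    print(willing_to_pay(0.00, 100, 0, 100))   # E: remove all 100 bullets -> 1.0
    print(willing_to_pay(0.00, 1, 0, 1))       # F: remove the only bullet -> 1.0
    print(willing_to_pay(0.99, 1, 0, 1))       # G: execution, then F      -> 1.0
    print(willing_to_pay(0.00, 100, 99, 100))  # H: remove 1 of 100        -> 1.0

All four come out at X ≤ L, which is exactly the absurdity being pointed at.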

comment by Paul Crowley (ciphergoth) · 2012-01-19T20:59:52.878Z · LW(p) · GW(p)

Yes, the key is that you only pay if you live.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2012-01-20T19:28:44.099Z · LW(p) · GW(p)

The key is that you have no control over what happens if you die. That is, the utility of dying is the same regardless of what you do.

comment by Vaniver · 2012-01-20T01:47:10.676Z · LW(p) · GW(p)

Hmm. In Case 1, if I pay, my expected lifespan extends by 50%; in Case 2, likewise by 50%. But as pointed out by shminux, it's not clear to me that percentages are the right way to go about this. In Case 1, my expected lifespan increases by 10 years (assuming 60 years left), and in Case 2 my expected lifespan increases by 20 years (same assumption).

All of the work is being done by the "money is worthless if you die" assumption. If you ask a question like "how many years of your life would you pay to remove 1 bullet / 2 bullets?" then the answer is obviously different.

comment by Vladimir_Nesov · 2012-01-19T22:46:26.854Z · LW(p) · GW(p)

How do you convert this question into a decision problem? It seems like expected utility is being compared for the two situations, but only within the events of higher probability of survival (50% in the 4-bullet case and 100% in the 2-bullet case), which is a rather arbitrary choice. On the other hand, if we consider the value of the whole 100% event in the first situation, it already has an immutable 50% death component in it, so it automatically loses the expected-utility comparison with the second situation.

So it seems like the equivalence in value is produced by the impulse to make the problem "fair", while the problem isn't well-defined. It seems clear what's going on in the situations as described, but we are given no basis for comparing them.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2012-01-20T01:04:46.684Z · LW(p) · GW(p)

Here is how I understood the problem:

Let L be the difference in utility between living-and-not-paying and dying. Fix one of the scenarios — say, the first one*, where you can pay to have no bullets. For each positive number X, consider the following decision problem:

Let the difference in utility between living-and-paying and living-and-not-paying be X. (Dying is assumed to have the same utility regardless of whether you paid.) Should you pay to change the probability of dying as described? For each X, answering this is just a matter of computing the expected utilities of paying and not-paying, respectively.

Now determine the maximum value of X (in terms of L) such that you decide to pay.

Now repeat the above for the other scenario.

It turns out that, in both scenarios, the maximum value of X such that you decide to pay is the same: X = 1/3 L. That is the meaning of the claim that "you should pay the same amount in both cases".


* ... as enumerated at the Landsburg link, not in the OP ...

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-01-20T01:52:20.317Z · LW(p) · GW(p)

I see. So the problem should not be "How much would you pay to remove bullets?", but "How much would you precommit to paying, if you survive, to remove bullets?"

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2012-01-20T19:35:31.312Z · LW(p) · GW(p)

Yes. It's assumed that you have no control over the value of what happens if you die.

comment by shminux · 2012-01-19T21:57:03.014Z · LW(p) · GW(p)

A really bad example, since they didn't tell you how much your life is worth to you.

I value my life high enough to pay ALL I HAVE (and try to borrow some) to increase my survival odds from 66% to 100% or from 33% to 50%. (If you don't value your own life highly enough, substitute your child's life in the question.) The only time actually estimating cost comes into play is when the risk change is small enough to be close to the noise level. For example, deciding whether to pay more for a safer car, because the improved collision survival odds increase your life expectancy by 1 day (I made up the number, not sure what the real value is).

Now frame the question as "Q1: you have a 66% chance of winning $1000; how much would you pay to increase it to 100%?" vs "Q2: you have a 33% chance of winning $1000; how much would you pay to increase it to 50%?". In this example your life is worth $1000, a number small enough to be affordable. The answer is clear: your expected win increases by about $333 in Q1 and half that in Q2, so you should pay $333 or less in Q1 and $166 or less in Q2.
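
The arithmetic, for the record (exact fractions, since 66% and 33% are really 2/3 and 1/3):

    from fractions import Fraction as F

    prize = 1000
    q1 = (1 - F(2, 3)) * prize        # Q1: win chance 2/3 -> 1
    q2 = (F(1, 2) - F(1, 3)) * prize  # Q2: win chance 1/3 -> 1/2
    print(q1, q2)  # 1000/3 and 500/3: about $333 and $167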

So, where does the author go wrong?

Question A: You’re playing with a six-shooter that contains two bullets. How much would you pay to remove them both? (This is the same as Question 1.)

Question B: You’re playing with a three-shooter that contains one bullet. How much would you pay to remove that bullet?

Question C: There’s a 50% chance you’ll be summarily executed and a 50% chance you’ll be forced to play Russian roulette with a three-shooter containing one bullet. How much would you pay to remove that bullet?

QA: 66% -> 100%. QB: 66% -> 100%. QC: 33% -> 50%. The author's logic ("In Question C, half the time you’re dead anyway. The other half the time you’re right back in Question B. So surely questions C and B should have the same answer.") breaks down because it mixes finite costs ($1000 in this case) with infinite ones ("dead anyway", i.e. infinite loss), leading to a contradiction.

Replies from: DanielLC, orthonormal
comment by DanielLC · 2012-01-19T23:52:07.606Z · LW(p) · GW(p)

A really bad example, since they didn't tell you how much your life is worth to you.

It only changes what you'd pay proportionately, so it wouldn't make a difference.

The real problem is that they didn't tell you how much you're capable of paying. Let's assume you can pay an infinite amount. Perhaps they torture you for a period of time.

So, where does the author go wrong?

provided that you don't have heirs and all your remaining money magically disappears when you die.

Your money is only valuable if you survive. Think of it as them reducing your winnings; it doesn't matter if you don't win. On that view, you should be willing to have them reduce your winnings by $333 in either case.

because they mix finite costs ($1000) in this case with infinite ones ("dead anyway", i.e. infinite loss)

If your utility function works like this, you can just abandon the finite part. It's effectively impossible for it to come up, and it's not really worth thinking about.

Also, you seemed to imply that it was a finite (though high) cost earlier.

The only time actually estimating cost comes into play is when the risk change is small enough to be close to the noise level.

Why would noise level matter?

Replies from: shminux
comment by shminux · 2012-01-20T00:24:40.383Z · LW(p) · GW(p)

A really bad example, since they didn't tell you how much your life is worth to you.

It only changes what you'd pay proportionately, so it wouldn't make a difference.

No, because, as you say:

The real problem is that they didn't tell you how much you're capable of paying.

I implied the same ("pay ALL I HAVE (and try to borrow some)"), if maybe not as succinctly.

Your money is only valuable if you survive. Think of it as them reducing your winnings. It doesn't matter if you don't win. In that case, you should be willing to have them reduce it by $333 in either case.

all your remaining money magically disappears when you die.

Right, I ignored this last condition, which breaks the assumption of "your life is worth $1000" if you have more than that in your bank account. However, in that case there is no way to limit your bet, and the problem becomes meaningless:

If your utility function works like this, you can just abandon the finite part. It's effectively impossible for it to come up, and it's not really worth thinking about.

It's not mine, it's theirs (you lose everything you own, no matter how much). Which supports my point that the problem is badly stated.

comment by orthonormal · 2012-01-19T22:41:05.233Z · LW(p) · GW(p)

If dying were an infinite loss to you, you'd never drive an extra mile just to save money on buying something (let alone all the other small risks you take).

Replies from: jmmcd, saturn, shminux
comment by jmmcd · 2012-01-19T22:57:45.973Z · LW(p) · GW(p)

I think this answers your comment:

The only time actually estimating cost comes into play is when the risk change is small enough to be close to the noise level. For example, deciding whether to pay more for a safer car, because the improved collision survival odds increase your life expectancy by 1 day (I made up the number, not sure what the real value is).

comment by saturn · 2012-01-19T23:25:57.098Z · LW(p) · GW(p)

He said avoiding a 33% chance of death is worth more money than he has, which doesn't necessarily imply that the amount is infinite.

comment by shminux · 2012-01-19T22:47:58.374Z · LW(p) · GW(p)

Not to me, to the author of the question. This is the contradiction I am pointing out: a failure to put a value on the 50% chance of death in Question C.

comment by falenas108 · 2012-01-19T21:14:23.549Z · LW(p) · GW(p)

In Case 2, it's guaranteed free money. I would pay 1 cent less than what I get for winning, which is clearly not what I would do in Case 1.

Replies from: DanielLC
comment by DanielLC · 2012-01-19T23:32:56.867Z · LW(p) · GW(p)

You're assuming that you can decide not to play.

comment by D_Alex · 2012-01-20T04:58:00.145Z · LW(p) · GW(p)

I think the argument is wrong. Proof by counterexample: say you have $10 and you value your life at $5. Note that according to the terms of the question, the value of the money is lost as well as the value of the life if you get shot. Then:

Case 1: expected value of playing the game with 4 bullets = -4/6 * ($10+$5) = -$10; with 3 bullets: -3/6 * ($10+$5) = -$7.5; delta = $2.5

Case 2: with 2 bullets: -2/6 * ($10+$5) = -$5; with 0 bullets: $0; delta = $5

So you should pay up to $2.5 to take the option in Case 1, but up to $5 in Case 2.

I'm pretty sure this generalises, and only in some cases is the expected amount the same.

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2012-01-20T19:43:54.067Z · LW(p) · GW(p)

Case 1: expected value of playing the game with 4 bullets = -4/6 * ($10+$5) = -$10; with 3 bullets: -3/6 * ($10+$5) = -$7.5; delta = $2.5

[...]

So you should pay up to $2.5 to take the option in Case 1

This does not follow. Rather, you should be willing to pay an amount $X such that -3/6 * ($10+$5) - 3/6 * $X > -$10. That means that you are willing to pay as much as $5 in Case 1.

Replies from: D_Alex
comment by D_Alex · 2012-01-23T06:59:41.617Z · LW(p) · GW(p)

I hate to admit this, but you seem to be right. My mistake was that I did not allow for the fact that when you pay, the stake you are risking gets reduced by the amount you have paid.

This leads to bizarre follow-ons... for example, you should pay only half as much to remove the last bullet (1/6 --> 0/6) as you would to remove 1 bullet from 4 (4/6 --> 3/6)... and you'd pay 6 times as much to remove the first bullet (6/6 --> 5/6) as you would to remove the last bullet (1/6 --> 0/6). Which incidentally contradicts the original version of the paradox as stated here:

http://catdir.loc.gov/catdir/samples/cam033/2002035199.pdf - page 11

That paper states that most people would pay MORE to remove the last bullet than the first, but that the right number should be the SAME in both cases. However, it does not account for the loss of utility when you pay the money, and the subsequent lowering of the stake if you lose. So the paradox is even more extreme than originally envisaged...!

Edit: and furthermore, under the assumptions made, you'd pay exactly the same amount to remove one bullet out of six (6/6 --> 5/6) as to remove all six bullets (6/6 --> 0/6), this amount being all the money you have plus debt to the point where life would hardly be worth living, so your utility is pretty much zero in any event... I guess this shows that weird assumptions lead to bizarre conclusions.
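
These follow-ons check out numerically. A sketch using the numbers from upthread ($10 wealth, $5 life) and the fixed-disutility-of-death accounting:

    W, V = 10.0, 5.0  # wealth and value of life, as in the example above

    def max_pay(p_before, p_after):
        # Utility of death is -(W+V) no matter what you paid;
        # pay X iff p_after*(-X) + (1-p_after)*(-(W+V)) >= (1-p_before)*(-(W+V)).
        return (W + V) * (p_after - p_before) / p_after

    print(max_pay(5/6, 6/6))  # last bullet (1/6 -> 0/6):   $2.50
    print(max_pay(2/6, 3/6))  # one of four (4/6 -> 3/6):   $5.00
    print(max_pay(0/6, 1/6))  # first bullet (6/6 -> 5/6):  $15.00
    print(max_pay(0/6, 6/6))  # all six bullets:            $15.00, more than you have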

Replies from: Tyrrell_McAllister
comment by Tyrrell_McAllister · 2012-01-23T22:22:06.320Z · LW(p) · GW(p)

Which incidentally contradicts the original version of the paradox as stated here:

http://catdir.loc.gov/catdir/samples/cam033/2002035199.pdf - page 11

That paper states that most people would pay MORE to remove the last bullet than the first, but the right number should be the SAME in both cases.

I don't think that there is any contradiction here. The scenario in Schick's book is different from the one in the OP. Schick is considering the case where the decision to pay ends up costing you the same amount whether or not you end up getting shot.

It's not clear to me that you were claiming otherwise, but I just wanted to emphasize that you weren't contradicting Schick in the sense that at least one of you had to be making a mistake.

comment by PECOS-9 · 2012-01-20T06:05:23.128Z · LW(p) · GW(p)

The argument is wrong: Questions B and C are not equivalent. It takes advantage of a bias called the pseudocertainty effect:

The pseudocertainty effect is a concept from prospect theory. It refers to people's tendency to perceive an outcome as certain while in fact it is uncertain (Kahneman & Tversky, 1986).[1] It is observed in multi-stage decisions, in which evaluation of outcomes in previous decision stage is discarded when making an option in subsequent stages.

It is easy to see that B and C are not equivalent: in B, you have a choice between a 1/3 chance of dying and no chance of dying (a difference of 1/3). In C, you have a choice between a 4/6 chance of dying and a 3/6 chance of dying (a difference of 1/6).