I'm confused. Could someone help?
post by CronoDAS · 2009-03-23T05:26:24.617Z · LW · GW · Legacy · 12 comments
Imagine that I'm offering a bet that costs 1 dollar to accept. The prize is X + 5 dollars, and the odds of winning are 1 in X. Accepting this bet, therefore, has an expected value of 5 dollars (a positive expected value), and offering it has an expected value of -5 dollars. It seems like a good idea to accept the bet, and a bad idea for me to offer it, for any reasonably sized value of X.
Does this still hold for unreasonably sized values of X? Specifically, what if I make X really, really, big? If X is big enough, I can reasonably assume that, basically, nobody's ever going to win. I could offer a bet with odds of 1 in 10^100 once every second until the Sun goes out, and still expect, with near certainty, that I'll never have to make good on my promise to pay. So I can offer the bet without caring about its negative expected value, and take free money from all the expected value maximizers out there.
What's wrong with this picture?
See also: Taleb Distribution, Nick Bostrom's version of Pascal's Mugging
(Now, in the real world, I obviously don't have 10^100 + 5 dollars to cover my end of the bet, but does that really matter?)
Edit: I should have actually done the math. :(
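For the record, a minimal sketch of that math, assuming the bet as stated (pay 1 dollar, win X + 5 dollars with probability 1/X); the particular values of X are only illustrative:

```python
# A sketch of the accept-the-bet expectation, assuming the bet as stated:
# pay $1, win X + 5 dollars with probability 1/X.  The net expected value
# works out to 5/X dollars, not 5 dollars.
from fractions import Fraction

for X in (100, 10**6, 10**100):
    ev = Fraction(1, X) * (X + 5) - 1   # exact net expectation for the taker
    print(X, float(ev))                 # 0.05, then 5e-06, then 5e-100
```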
comment by mattnewport · 2009-03-23T06:14:19.910Z · LW(p) · GW(p)
Treating money as a linear measure of value breaks down when the amounts get sufficiently large. For one thing, the marginal utility of $10,000,000 is not simply 10 x the marginal utility of $1,000,000 (for someone who is not already wealthy). Also, for really large amounts of money, amounts that represent a significant fraction of the total money supply, the linear relationship does not hold even ignoring marginal utility - owning all the money in the world is not simply 100 x more valuable than owning 1% of all the money in the world.
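A minimal sketch of that diminishing marginal utility, assuming (purely for illustration) a logarithmic utility of wealth and a modest starting wealth; neither assumption comes from the comment itself:

```python
# Toy illustration of diminishing marginal utility under an assumed
# log-utility model and an assumed starting wealth.
import math

def utility(wealth):
    """Toy log-utility of total wealth in dollars."""
    return math.log(wealth)

base = 50_000  # assumed starting wealth of someone "not already wealthy"

gain_1m = utility(base + 1_000_000) - utility(base)
gain_10m = utility(base + 10_000_000) - utility(base)

print(gain_10m / gain_1m)  # ~1.7, far less than 10
```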
Then of course there is the problem that nobody would take the bet with you since they would know you can't possibly pay if they were to win. Unless it's Goldman Sachs taking the bet and they know the government will print the money and bail you out if they win.
comment by Roko · 2009-03-23T06:16:45.526Z · LW(p) · GW(p)
The expected value calculation is wrong.
A 1 in a million chance of winning 1,000,005 gives an expectation of 1.000005, not 5.
You need to offer a prize of 5X to get an expectation of 5, and 6X to get a net gain of 5.
Using monetary prizes bigger than the value of all the wealth of the planet is the cause of the confusion. You cannot offer the bet because there just isn't that much money in the world.
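A quick check of those figures, assuming the bet as stated in the post (1 dollar to play, probability 1/X of winning); the X here is just illustrative:

```python
# Checking the expectations above with exact fractions.
from fractions import Fraction

X = 10**6
p = Fraction(1, X)

# The post's prize of X + 5 dollars:
print(float(p * (X + 5)))        # 1.000005 -- the gross expectation, not 5
# A prize of 5X gives a gross expectation of 5; 6X gives a net gain of 5:
print(float(p * 5 * X))          # 5.0
print(float(p * 6 * X - 1))      # 5.0
```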
Replies from: bentarm, Emile
↑ comment by bentarm · 2009-03-23T16:57:34.222Z · LW(p) · GW(p)
"Using monetary prizes bigger than the value of all the wealth of the planet is the cause of the confusion."
I don't think it is. The cause of the confusion is just that the sums are wrong (and the conclusion is wrong). Replace the opening statement with "I'm Bill Gates, I'm offering you a bet - the cost to take the bet is $1, the prize for winning is $58 billion. The odds of winning are 1 in 57.99999 billion".
Now we're no longer talking about unrealistic amounts of money, but it still isn't a good bet for Bill to offer, because its expected value is negative. You do need to invoke the fact that wealth is finite to explain why martingales don't work, but this "system" isn't nearly as complicated as a martingale.
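A quick sketch of the arithmetic on that Bill Gates version, using exact fractions since the margin is tiny:

```python
# Expected value of the hypothetical Bill Gates bet above, from each side.
from fractions import Fraction

cost = 1
prize = 58_000_000_000
p = Fraction(1, 57_999_990_000)   # "1 in 57.99999 billion"

taker_ev = p * prize - cost       # slightly positive for the taker
print(float(taker_ev))            # ~1.7e-7 dollars
print(float(-taker_ev))           # ~-1.7e-7 dollars: Bill's side of the bet
```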
↑ comment by Emile · 2009-03-23T10:10:43.862Z · LW(p) · GW(p)
Well, technically you can offer a bet of a million billion zillion dollars, it's just that anybody who calculates its expected utility as "chances of winning" x "one million billion zillion dollars" is a gullible fool.
If your premises violate common sense, don't be surprised if your conclusions violate common sense too.
Replies from: Roko
↑ comment by Roko · 2009-03-23T15:06:32.099Z · LW(p) · GW(p)
Money is just a promise to deliver certain things that exist in the real world: things like goods and services.
The GDP of the USA is of the order of 10^12 dollars per year. Therefore the total wealth of the entire world today can't be more than, say, 10^20 dollars.
10^100 dollars dwarfs this. What does this mean? It means that any dollar amount greater than 10^20 breaks our intuitive notions of what money is. 10^100 dollars is like a promise to make one plus one equal three.
Replies from: Annoyance, steven0461
↑ comment by Annoyance · 2009-03-23T15:11:03.935Z · LW(p) · GW(p)
"10^100 dollars is like a promise to make one plus one equal three."
It's more like a promise to dilute the value-to-money ratio by a factor of 10^80. Even if that much money could be printed, all that would be accomplished would be to put all the world's wealth in one person's hands and reduce everyone else to beggars.
The correct response to the question is, of course, to lynch the person threatening to print/mint that much excess money as a danger to the well-being of human civilization. Even if you aren't a fan of human civilization, such a procedure is quite likely to damage everything else on the planet in the process of humanity's destruction.
↑ comment by steven0461 · 2009-03-23T15:17:50.506Z · LW(p) · GW(p)
OK, so in the least convenient possible world, where Crono said 10^100, he meant 10^20. It seems to me the real issue here is that if you cannot (nearly) cover your end of the bet, negative utility is flat for very large negative dollar values, so you become risk-seeking.
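One way to picture that flatness, as a minimal sketch; the log-utility form and the wealth figure are assumptions for illustration only:

```python
# Sketch of why an offerer who cannot cover the payout becomes risk-seeking:
# losses are capped at "everything I have", so utility is flat for any
# nominal debt beyond that point.  Log utility and the wealth figure are
# illustrative assumptions.
import math

WEALTH = 100_000  # offerer's total wealth in dollars (assumed)

def utility(change):
    """Utility of a change in wealth, floored at total ruin."""
    remaining = max(WEALTH + change, 1)   # can't end up with less than ~$0
    return math.log(remaining)

# Owing 10**20 or 10**100 dollars feels exactly as bad as plain ruin:
print(utility(-10**20) == utility(-10**100))   # True

# So the offerer's expected utility change barely registers the downside:
p = 10.0**-100                      # chance of actually having to pay out
offer_eu = (1 - p) * utility(+1) + p * utility(-10**100) - utility(0)
print(offer_eu > 0)                 # True: offering looks strictly good
```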
comment by byrnema · 2009-03-24T04:24:15.352Z · LW(p) · GW(p)
I think I can help. You have set up a game of chance so that the expected value for the house (yourself) is negative. That means that on average you would have to pay out more than you would receive. However, while the payout is very big the chances of winning are very tiny so you wonder if this changes the game. In some sense, you are asking about the expected value of the game when you know the law of large numbers is not going to apply, because you are not going to play enough times for the ratio of wins to losses to average out.
This is a problem about sampling. The number of times you play the game will be much smaller than the number of games needed to yield the expected average. Suppose you conduct the game (only!) a million times. How reasonable is it to expect that you would collect a million dollars and not have to pay anything? In other words, we just need to calculate the probability of not having any "win" in the sample size of a million. The probability of a win in such a small sample size is tiny (epsilon) - so you wonder if you could consider it effectively zero and if it would be worthwhile to play the game.
The answer is that the chances are extremely high that you will not have to pay out anything (1-epsilon) so in almost every case it is lucrative to play the game. However, when you do lose, you lose so big that it (really does) cancel out the winnings you would be making in most cases. So the expected value still holds -- it's not profitable to play the game.
My brain -- and your brain too, probably -- keeps buzzing that it is profitable to play the game because in almost every conceivable scenario, we can expect to make a million dollars. Human beings can't correctly think intuitively about very small and very large numbers. Every time your brain buzzes on this problem, remind yourself it is because you're not really weighing the enormity of the pay-off you'd have to make. Your brain keeps saying the probability is small, but the product of the probability and the payout is a finite, non-zero number.
As several comments below have alluded to, perhaps the impracticality of such a pay-off is detracting from the abstract understanding of the problem. However, this is a fascinating question, and it should be addressed squarely. (I'm pretty certain you didn't mean that you would just claim bankruptcy if you lost. Then your game would really be a scam, though I suppose we could argue about whether it is a scam in a sample where no one wins.)
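A back-of-the-envelope sketch of that sampling point, assuming Roko's corrected prize of 6X so the house loses 5 dollars per game in expectation; the other numbers are illustrative:

```python
# Sampling intuition vs. expectation: a house that offers the game a million
# times almost certainly never pays out, yet its expected total is negative.
# The 6X prize follows Roko's correction above (house EV of -$5 per game).
import math
from fractions import Fraction

X = 10**100
p = Fraction(1, X)          # chance of a win on one game
prize = 6 * X               # payout if the player wins
n = 10**6                   # number of games the house runs

# Probability the house ever has to pay out: 1 - (1 - p)^n, evaluated in
# floating point via log1p/expm1 to avoid underflow.
p_any_win = -math.expm1(n * math.log1p(-1 / X))
print(p_any_win)                              # ~1e-94: essentially never

expected_take = n * 1                         # dollars collected in stakes
expected_payout = n * p * prize               # expected dollars paid out
print(expected_take - expected_payout)        # -5000000: a losing game anyway
```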
comment by Johnicholas · 2009-03-23T13:55:15.510Z · LW(p) · GW(p)
Imagine this post with the problems that other commenters have pointed out fixed. In effect you're saying: Suppose I multiply something that's REALLY small (probability of having to pay out) by something that's REALLY big (amount that you would have to pay out). Further, suppose that the product (the expected payout) is 5 dollars. Can I just claim that the small probability is "practically zero" and get a different answer for the payout (that is, 0 dollars)?
There's nothing in your problem to prefer "small is approximately zero" over "big is approximately infinite". By making the other approximation, it seems just as reasonable for someone to pay a small amount for a small but finite chance of an infinite payout.
This question reminds me of the "immovable force and unstoppable barrier" problem that some of us encountered in middle school. One's intuition is destroyed by the extremes involved, and you can easily get your thinking into a circular rut focused on one half of the problem without noticing that your debate partner is going in a symmetrical circle on the other side.
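A tiny sketch of that symmetry, assuming an illustrative prize of 5X so that the exact expected payout is 5 dollars:

```python
# Rounding "really small" to zero and rounding "really big" to infinity
# give opposite, equally wrong answers; the exact product stays $5.
# The 5X prize is an illustrative assumption.
from fractions import Fraction

X = 10**100
p = Fraction(1, X)
prize = 5 * X

print(p * prize)                    # 5 -- the exact expected payout
print(0 * prize)                    # 0 -- "small is approximately zero"
print(float("inf") * float(p))      # inf -- "big is approximately infinite"
```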
comment by SarahNibs (GuySrinivasan) · 2009-03-23T06:30:44.670Z · LW(p) · GW(p)
There are several problems. You're really looking to take free money from expected utility maximizers, not "expected value maximizers", and the equation from an expected utility maximizer's point of view is:
Expected change in utility given that I have N dollars = [U(N-1) P(no pay|X) + U(N+X+5-1) P(pay|X)] - U(N)
Key points here are the transformation of dollar winnings to utility (diminishing marginal utility of money), the fact that the expected value looks more like 5/X (not 5) dollars, and the fact that the expected utility maximizer cares about P(pay|X), not P(win the bet|X) - its estimation of your ability to pay cannot be swept under the rug, so p quickly becomes much smaller than 1/X when X is 10^100.
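A sketch of that calculation in code; the log utility, the starting wealth, and the ability-to-pay estimate are all illustrative assumptions:

```python
# Expected change in utility for a taker with N dollars, per the equation
# above.  U, N, and the ability-to-pay estimate are illustrative assumptions.
import math

def U(dollars):
    """Toy log utility of total wealth."""
    return math.log(dollars)

def expected_utility_change(N, X, p_can_pay):
    p_pay = (1 / X) * p_can_pay          # must win AND the offerer must pay
    p_no_pay = 1 - p_pay
    return (U(N - 1) * p_no_pay + U(N + X + 5 - 1) * p_pay) - U(N)

N = 10_000
print(expected_utility_change(N, X=100, p_can_pay=1.0))      # > 0: take it
print(expected_utility_change(N, X=10**100, p_can_pay=0.0))  # < 0: decline
```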
comment by thomblake · 2009-03-24T20:10:27.764Z · LW(p) · GW(p)
I think this is one intuitive leap that could help:
You're supposing it's reasonable to assume that you'll never have to pay out in your lifetime. Anyone taking your bet, then, can just as much assume that they'll never win in your lifetime.
So it's as balanced as one should expect - if you obviously should offer the bet, then anyone else just as obviously shouldn't take it.
comment by Vladimir_Nesov · 2009-03-23T17:48:33.292Z · LW(p) · GW(p)
One way to approach the derivation of expected utility is to say that any outcome is equivalent (in the preference order) to some combination of the worst possible and the best possible outcomes. So, you pick your outcome, like [eating a pie], and say that there exists a probability P such that, say, the lottery P [life in eutopia] + (1-P) [torture and death] is equally preferable. As it turns out, you can use that P as the utility of your event.
So, yes, if there are no problems with the ability to pay up on the promise, playing an incredibly risky lottery is the right thing to do. Just make sure that you calculate your odds and utility correctly, which becomes trickier and trickier as the values become extreme.
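A minimal sketch of that normalization; the outcomes are from the comment, but the raw utility numbers are invented for illustration:

```python
# Normalizing utilities so that an outcome's utility is the probability P
# that makes "P * best + (1 - P) * worst" exactly as preferable as the
# outcome itself.  All the raw utility numbers here are invented.
raw_utility = {
    "torture and death": -1000.0,   # worst possible outcome
    "eating a pie": 2.0,
    "life in eutopia": 5000.0,      # best possible outcome
}

worst = raw_utility["torture and death"]
best = raw_utility["life in eutopia"]

def normalized(outcome):
    """P in [0, 1] such that the best/worst lottery matches the outcome."""
    return (raw_utility[outcome] - worst) / (best - worst)

print(normalized("eating a pie"))   # ~0.167: the P for the pie-equivalent lottery
```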