Iterated Gambles and Expected Utility Theory
post by Sable · 2016-05-25T21:29:27.645Z · LW · GW · Legacy · 44 comments
The Setup
I'm about a third of the way through Stanovich's Decision Making and Rationality in the Modern World. Basically, I've gotten through some of the more basic axioms of decision theory (Dominance, Transitivity, etc).
As I went through the material, I noted that there were a lot of these:
Decision 5. Which of the following options do you prefer (choose one)?
A. A sure gain of $240
B. 25% chance to gain $1,000 and 75% chance to gain nothing
The text goes on to show how most people tend to make irrational choices when confronted with decisions like this; most striking was how often irrelevant context and framing affected people's decisions.
But I understand the decision theory bit; my question is a little more complicated.
When I was choosing these options myself, I did what I've been taught by the rationalist community to do in situations where I am given nice, concrete numbers: I shut up and I multiplied, and at each decision chose the option with the highest expected utility.
Granted, I equated dollars to utility, which Stanovich does mention that humans don't do well (see Prospect Theory).
The Problem
In the above decision, option B clearly has the higher expected utility, so I chose it. But there was still a nagging doubt in my mind, some part of me that thought, if I was really given this option, in real life, I'd choose A.
So I asked myself: why would I choose A? Is this an emotion that isn't well-calibrated? Am I being risk-averse for gains but risk-taking for losses?
What exactly is going on?
And then I remembered the Prisoner's Dilemma.
A Tangent That Led Me to an Idea
Now, I'll assume that anyone reading this has a basic understanding of the concept, so I'll get straight to the point.
In classical decision theory, the choice to defect (rat the other guy out) is strictly superior to the choice to cooperate (keep your mouth shut). No matter what your partner in crime does, you get a better deal if you defect.
Now, I haven't studied the higher branches of decision theory yet (I have a feeling that Eliezer, for example, would find a way to cooperate and make his partner in crime cooperate as well; after all, rationalists should win.)
Where I've seen the Prisoner's Dilemma resolved is, oddly enough, in Dawkins' The Selfish Gene, which is where I was first introduced to the idea of an Iterated Prisoner's Dilemma.
The interesting idea here is that, if you know you'll be in the Prisoner's Dilemma with the same person multiple times, certain kinds of strategies become available that weren't possible in a single instance of the Dilemma. Partners in crime can be punished for defecting by future defections of your own.
The key idea here is that I might have a different response to the gamble if I knew I could take it again.
The Math
Let's put on our probability hats and actually crunch the numbers:
Format: each option B cell lists probability: $amount for every possible cumulative outcome.

Assuming one picks A over and over again, or B over and over again:

| Iteration | Option A | Option B |
|-----------|----------|----------|
| 1 | $240 | 1/4: $1,000; 3/4: $0 |
| 2 | $480 | 1/16: $2,000; 6/16: $1,000; 9/16: $0 |
| 3 | $720 | 1/64: $3,000; 9/64: $2,000; 27/64: $1,000; 27/64: $0 |
| 4 | $960 | 1/256: $4,000; 12/256: $3,000; 54/256: $2,000; 108/256: $1,000; 81/256: $0 |
| 5 | $1,200 | 1/1024: $5,000; 15/1024: $4,000; 90/1024: $3,000; 270/1024: $2,000; 405/1024: $1,000; 243/1024: $0 |
And so on. (If I've made a mistake, please let me know.)
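Here's a minimal Python sketch (standard library only; the helper name is mine) that generates these distributions, for anyone who wants to check further iterations:

```python
from math import comb

def option_b_distribution(n, p=0.25, payout=1_000):
    """Return {total winnings: probability} after n independent gambles."""
    return {k * payout: comb(n, k) * p**k * (1 - p)**(n - k)
            for k in range(n + 1)}

for n in range(1, 6):
    print(n, option_b_distribution(n))
```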
The Analysis
It is certainly true that, in terms of expected money, option B outperforms option A no matter how many times one takes the gamble, but instead, let's think in terms of anticipated experience - what we actually expect to happen should we take each bet.
The first time we take option B, we note that there is a 75% chance that we walk away disappointed. That is, if one person chooses option A, and four people choose option B, on average three out of those four people will underperform the person who chose option A. And it probably won't come as much consolation to the three losers that the winner won significantly bigger than the person who chose A.
And since nothing unusual ever happens, we should think that, on average, having taken option B, we'd wind up underperforming option A.
Now let's look at further iterations. In the second iteration, having taken option B twice, we're more likely to have nothing (9/16) than to have anything.
In the third iteration, there's about a 57.8% chance that we'll have outperformed the person who chose option A the whole time, and a 42.2% chance that we'll have nothing.
In the fourth iteration, there's a 73.8% chance that we'll have matched or done worse than the person who chose option A four times (I'm rounding a bit; $1,000 isn't that much better than $960).
In the fifth iteration, the above percentage drops to 63.3%.
Now, without doing a longer analysis, I can tell that option B will eventually win. That was obvious from the beginning.
But for most of the first five iterations, there's still a better-than-even chance you'll wind up with less by picking option B than by picking option A (the third iteration, where a single $1,000 win already beats A's $720, is the exception).
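These figures are just cumulative binomial probabilities; a short sketch (again mine) to verify them:

```python
from math import comb

def prob_at_most(n, k_max, p=0.25):
    """P(at most k_max wins of $1,000 in n gambles)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_max + 1))

print(1 - prob_at_most(3, 0))  # ~0.578: B beats A's $720 after 3 rounds
print(prob_at_most(4, 1))      # ~0.738: B at or below A's $960 after 4 rounds
print(prob_at_most(5, 1))      # ~0.633: B below A's $1,200 after 5 rounds
```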
Conclusions
If we act to maximize expected utility, we should choose option B, at least so long as I hold that dollars=utility. And yet it seems that one would have to take option B a fair number of times before it becomes likely that any given person, taking the iterated gamble, will outperform a different person repeatedly taking option A.
In other words, suppose 1,025 people take the gamble five times each: one always picks option A, and the other 1,024 always pick option B. Then:
we expect 1 to walk away with $1,200 (from taking option A five times),
we expect 376 to walk away with more than $1,200, casting smug glances at the scaredy-cat who took option A the whole time,
and we expect 648 to walk away muttering to themselves about how the whole thing was rigged, casting dirty glances at the other 377 people.
After all the calculations, I still think that, if this gamble was really offered to me, I'd take option A, unless I knew for a fact that I could retake the gamble quite a few times. How do I interpret this in terms of expected utility?
Am I not really treating dollars as equal to utility, and discounting the marginal utility of the additional thousands of dollars that the 376 win?
What mistakes am I making?
Also, a quick trip to Google confirms my intuition that there is plenty of work on iterated decisions; does anyone know a good primer on them?
I'd like to leave you with this:
If you were actually offered this gamble in real life, which option would you take?
Comments (sorted by top scores)
comment by Viliam · 2016-05-26T10:21:55.920Z · LW(p) · GW(p)
Utility is approximately the logarithm of money. Pretend otherwise, and you will get results that go against the intuition, duh.
Utility is linear in money only when we take such a small part of the logarithmic curve that it is more or less linear over the given interval. But you cannot extrapolate this to situations where the relevant part of the logarithmic curve is significantly bent. Two examples of linearity:
1) You are a millionaire, so you more or less don't give a fuck about getting or not getting $1000. In that case you can treat small amounts of money as linear and choose B. If you are not a millionaire, imagine that it is about a certainty of 24¢ versus a 25% chance of $1.
2) You are an effective altruist and you want to donate all the money to a charity that saves human lives. If $1000 is very small compared with the charity budget, we can treat the number of human lives saved as a linear function of extra money given. (See: Circular Altruism.)
Replies from: Vaniver, HungryHobo
↑ comment by Vaniver · 2016-05-26T13:27:51.253Z · LW(p) · GW(p)
Utility is approximately the logarithm of money. Pretend otherwise, and you will get results that go against the intuition, duh.
To be clearer, utility is approximately the logarithm of your wealth, not of the change to your wealth. So there's a hidden number lurking in each of those questions--if you have $100k (5) of wealth, then option A brings it up to $100240 (5.00104) and option B brings it up to either $101000 (5.00432) with 25% probability and leaves it where it is with 75% probability, which works out to a weighted average log wealth of 5.00108, which is higher, so go with B.
But if your wealth is $1k (3), then option A brings you up to a weighted average of 3.09 and B brings you up to a weighted average of about 3.08. So go with A!
(The breakeven point for this particular option is a starting wealth of $8800.)
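A quick Python sketch reproducing these numbers (my addition, not Vaniver's; it uses the same base-10 logs as the comment):

```python
import math

def expected_log_wealth(wealth):
    """Expected log10 wealth for the sure $240 vs. the 25% shot at $1,000."""
    u_a = math.log10(wealth + 240)
    u_b = 0.25 * math.log10(wealth + 1_000) + 0.75 * math.log10(wealth)
    return u_a, u_b

for w in (100_000, 1_000, 8_800):
    u_a, u_b = expected_log_wealth(w)
    print(w, "choose A" if u_a > u_b else "choose B", round(u_a, 5), round(u_b, 5))
# $100k prefers B, $1k prefers A, and ~$8,800 sits right at the breakeven.
```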
Replies from: Pimgd
↑ comment by HungryHobo · 2016-05-26T12:41:07.354Z · LW(p) · GW(p)
Yep, for me $100 provides a nice chunk of utility. $10,000 does not provide 100 times as much utility, and anything more than a couple hundred K provides little utility at all.
In theory, that $1.6 billion Powerball lottery had a (barely) positive expected return (depending on how taxes work out), and thus rationalists should throw money at it; but in reality, a certainty of $1 is better than a one-in-a-billion chance of getting $1.6 billion. (I know these numbers aren't exact.)
Replies from: Lumifer, entirelyuseless
↑ comment by Lumifer · 2016-05-26T15:14:58.154Z · LW(p) · GW(p)
anything more than a couple hundred K provides little utility at all
How do you know?
Replies from: HungryHobo
↑ comment by HungryHobo · 2016-05-27T10:22:46.770Z · LW(p) · GW(p)
Because at that point I'm tapdancing on the top of Maslow's Hierarchy of Needs, extremely financially secure with lots of reserves.
It doesn't go to zero, but it's like the difference between the utility of an extra portion of truffle dessert when I'm already stuffed vs. the utility of a few bags of Plumpy'Nut when I have a starving child.
Replies from: Lumifer
↑ comment by Lumifer · 2016-05-27T14:25:16.667Z · LW(p) · GW(p)
extremely financially secure with lots of reserves
With a couple of hundred thousand dollars? That doesn't make you financially independent (defined as "don't have to work"); you can't even buy an apartment in SF or NYC, etc.
Replies from: HungryHobo
↑ comment by HungryHobo · 2016-05-27T16:30:56.057Z · LW(p) · GW(p)
Yeah, but I don't want to buy an apartment in New York.
Again, I didn't say utility goes to zero; it just drops off dramatically. The difference between $0 and $250k is far bigger in terms of utility than the difference between $250k and $500k. You still can't buy a New York apartment, and having $500k is better than having only $250k, but in terms of how it changes your life the first increment is far more significant.
Replies from: Lumifer
↑ comment by Lumifer · 2016-05-27T16:48:59.091Z · LW(p) · GW(p)
in terms of how it changes your life the first increment is far more significant
Yes, of course; no one is arguing against that. However, you made a couple of rather stronger statements, e.g.
anything more than a couple hundred K provides little utility at all
which don't strike me as true for most people.
There are large individual differences -- for a monk who has taken a vow of poverty even your "first increment" is pretty meaningless, while for someone with a life dream of flying her own plane around the world a few hundred $K aren't that much. But I would expect that a large majority of people would be able to extract significant utility out of a few hundred thousand dollars beyond the first couple.
Replies from: Good_Burning_Plastic, Good_Burning_Plastic
↑ comment by Good_Burning_Plastic · 2016-05-28T10:53:06.721Z · LW(p) · GW(p)
which don't strike me as true for most people.
You might want to read again the second and third word of HungryHobo's original comment.
Replies from: Lumifer
↑ comment by Lumifer · 2016-05-31T14:43:43.993Z · LW(p) · GW(p)
Yes, and my question was: how does he know? If he has never had that amount of money available to him, his guesstimate of how much utility he would be able to gain from it is subject to doubt. People do change, especially when their circumstances change.
Replies from: Good_Burning_Plastic, michaelsullivan
↑ comment by Good_Burning_Plastic · 2016-06-07T07:02:40.307Z · LW(p) · GW(p)
If you did realize you two were talking specifically about HungryHobo rather than/as well as about people in general, you might want to edit your comments to make it clearer.
↑ comment by michaelsullivan · 2016-06-06T14:29:35.350Z · LW(p) · GW(p)
Of course, but in relative terms he's still right; it's just easier to see when you are thinking from the point of view of the hungry hobo (or a peasant in the developing world).
From the point of view of a middle-class person in a rich country, looking at hypothetical bets where the potential loss is usually tiny relative to our large net worth plus human capital (easily $400-500k), of course we don't feel we can mostly dismiss utility over a few hundred thousand dollars, because we're already there.
Consider a bet with the following characteristics: You are a programmer making $60k-ish a year, a couple of years out of school. You have a 90% probability of winning. If you win, you will win $10 million in our existing world. If you lose (10%), you will be swapped into a parallel universe where your skills are completely worthless, you know no one, and you would essentially be in the position of the hungry hobo. You don't actually lose your brain, so you could potentially figure out how to make ends meet and even become wealthy in this new society, but you start with zero human capital: you don't know how to get along in it any better than someone raised in a Mumbai slum to typical poor parents does in this world.
So do you take that bet? I certainly wouldn't.
Is there any amount of money we could put in the win column that would mean you take the bet?
When you start considering bets where a loss actually puts you in the Hungry hobo position, it becomes clearer that utility of money over a few hundred thousand dollars is pretty small beer, compared to what's going on at the lower tiers of Maslow's hierarchy.
Which is another way of saying that pretty much everyone who can hold down a good job in the rich world has it really freaking good. The difference between $500k and $50 million (enough to live like an entertainer or big-time CEO without working) from the point of view of someone with very low human capital looks a lot like the famed academics having bitter arguments over who gets the slightly nicer office.
This also means that even log utility or log(log) utility isn't risk averse enough for most people when it comes to bets with a large probability mass of way over normal middle class net worth + human capital values, and any significant probability of dropping below rich-country above-poverty net worth+ human capital levels.
Fortunately, for most of the bets we are actually offered in real life, linear is a good enough approximation for small ones, and log or log-log utility is a plenty good enough approximation for even the largest swings (like starting a startup vs. a salaried position), as long as we attach some value to directing wealth we would not consume, and there is a negligible added probability of the kind of losses that would take us completely out of our privileged status.
In most real life cases any problems with the model are overwhelmed by our uncertainties in mapping the probability distribution.
Replies from: Lumifer
↑ comment by Lumifer · 2016-06-06T14:55:59.252Z · LW(p) · GW(p)
So do you take that bet? I certainly wouldn't.
Beware of the typical mind fallacy :-) I will take the bet.
Note that, say, a middle-class maker of camel harnesses who is forced to flee his country of Middlestan because of a civil war and who finds himself a refugee in the West is more or less in the position of your "hungry hobo".
This also means that even log utility or log(log) utility isn't risk averse enough for most people
This is true, but that's because log utility is not sufficient to explain risk aversion.
Fortunately, for most of the bets we are actually offered in real life, linear is a good enough approximation for small ones, and log or log-log utility is a plenty good enough approximation for even the largest swings
I disagree. Consider humans outside of middle and upper-middle classes in the sheltered West, that is, the most of humanity.
In most real life cases any problems with the model are overwhelmed by our uncertainties in mapping the probability distribution.
That is also true.
Replies from: gjm, gjm
↑ comment by gjm · 2016-06-06T15:07:36.139Z · LW(p) · GW(p)
log utility is not sufficient to explain risk aversion.
In fact it's pretty well established that typical levels of risk aversion cannot be explained by any halfway-credible utility function. A paper by Matthew Rabin shows, e.g., that if you decline a bet where you lose $100 or gain $110 with equal probability (which many people would) and this is merely because of the concavity of your utility function, then subject to rather modest assumptions you must also decline a bet where you lose $1000 or gain all the money in the world with equal probability.
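A sketch of the calibration step (my reconstruction of the argument, not from the thread): rejecting the 50/50 lose-$100/gain-$110 bet at wealth $w$ means $\tfrac12 u(w+110) + \tfrac12 u(w-100) \le u(w)$, i.e. $u(w+110) - u(w) \le u(w) - u(w-100)$. For a concave $u$,

$$u'(w+110) \;\le\; \frac{u(w+110)-u(w)}{110} \;\le\; \frac{u(w)-u(w-100)}{110} \;\le\; \frac{100}{110}\,u'(w-100),$$

so if the small bet is rejected at every wealth level, marginal utility must shrink by a factor of at least 10/11 over every $210 of wealth; summing the resulting geometric series shows that no gain, however large, can compensate for a modest loss.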
Replies from: gjm, Lumifer, Good_Burning_Plastic
↑ comment by gjm · 2016-06-06T15:23:53.638Z · LW(p) · GW(p)
There was some discussion of that paper and its ideas on LW in 2012. Vaniver suggests that the results may be more a matter of eliciting people's preferences in a lazy way that doesn't get at their real, hopefully better thought out, preferences. (But I fear people's actual behaviour matches that lazy preference-elicitation pretty well.) There are some other interesting comments there, too.
↑ comment by Good_Burning_Plastic · 2016-06-07T07:01:13.011Z · LW(p) · GW(p)
any halfway-credible utility function
Some of those conclusions are not as absurd as Rabin appears to believe; I think he's typical-minding. Most people will pick a 100% chance of $500 over a 15% chance of $1M.
with equal probability
Prior or posterior to the evidence provided by the other person's willingness to offer the bet? ;-)
rather modest assumptions
Such as assuming that that person would also decline the bet even if they had 10 times as much money to start with? That doesn't sound like a particularly modest assumption.
↑ comment by gjm · 2016-06-06T15:02:35.040Z · LW(p) · GW(p)
I'm pretty sure I would also take that bet.
I don't think I'd take an equivalent bet now, though. Compared with the hypothetical twentysomething earning $60k/year I'm older, hence less time to recover if I get unlucky, and richer, hence gaining $10M is a smaller improvement, and I have a family who would suffer if transported with me into the parallel world and whom I would miss if they weren't.
↑ comment by Good_Burning_Plastic · 2016-05-28T10:42:34.237Z · LW(p) · GW(p)
which don't strike me as true for most people.
You might want to read again the two words right before the ones you quoted in HungryHobo's comment.
↑ comment by entirelyuseless · 2016-05-26T13:32:49.601Z · LW(p) · GW(p)
I don't think this works out, if you think you are agreeing with Viliam. Suppose your net worth is $20,000. Then the utility increase represented by $100 is going to be [proportional to] 0.00498. On the other hand, the utility increase represented by $10,000 is going to be [proportional to] 0.40546. That is, $10,000 will be 81 times as valuable as $100.
In other words, it is less than 100 times as valuable. But not by that much, and certainly not by enough to explain the degree to which people prefer the certain $100.
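A quick check of these numbers (my sketch; natural logs, as the comment implies):

```python
import math

w = 20_000
du_100 = math.log(w + 100) - math.log(w)     # ~0.00499
du_10k = math.log(w + 10_000) - math.log(w)  # ~0.40546
print(du_100, du_10k, du_10k / du_100)       # ratio ~81
```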
Replies from: HungryHobo
↑ comment by HungryHobo · 2016-05-26T14:38:33.173Z · LW(p) · GW(p)
Using your net worth as part of the calculation doesn't feel right.
Even if my net worth is quite high, much of it may be inaccessible to me in the short term.
If I have $100,000 in liquid cash, then $100 has lower utility to me than if I have $100,000 in something non-liquid like a house, and no cash.
comment by Lumifer · 2016-05-26T15:13:52.920Z · LW(p) · GW(p)
You're ignoring risk aversion. Just maximising expected utility does not take it into account and humans do care about risk. Your consciousness just discovered that your gut cares :-)
Replies from: Good_Burning_Plastic
↑ comment by Good_Burning_Plastic · 2016-05-27T13:12:41.488Z · LW(p) · GW(p)
Just maximising expected utility does not take [risk aversion] into account
If the utility function whose expectations you take is concave enough it does.
Replies from: Lumifer
↑ comment by Lumifer · 2016-05-27T14:20:36.670Z · LW(p) · GW(p)
Why do you think so?
Replies from: Pfft
↑ comment by Pfft · 2016-06-01T21:19:21.978Z · LW(p) · GW(p)
It is very standard in economics, game theory, etc., to model risk aversion as a concave utility function. If you want some motivation for why, then e.g. the von Neumann–Morgenstern utility theorem shows that a suitably idealized agent will maximize expected utility. But in general, the proof is in the pudding: the theory works in many practical cases.
Of course, if you want to study exactly how humans make decisions, then at some point this will break down. E.g. the decision process predicted by Prospect Theory is different from maximizing utility. So in general, the exact flavour of risk aversion exhibited by humans seems different from what von Neumann–Morgenstern would predict.
But at that point, you have to start thinking whether the theory is wrong, or the humans are. :)
Replies from: Lumifer
↑ comment by Lumifer · 2016-06-02T14:31:33.226Z · LW(p) · GW(p)
But in general, the proof is in the pudding: the theory works in many practical cases.
Show me. We are talking about real life ("works", "practical"), right?
Note that in finance where miscalculating risk can be a really expensive mistake that you pay for with real money, no one treats risk as a trivial consequence of a concave utility function.
But at that point, you have to start thinking whether the theory is wrong, or the humans are.
You might. It should take you about half a second to decide that the theory is wrong. If it takes you longer, you need to fix your thinking :-P
Replies from: Pfft
↑ comment by Pfft · 2016-06-03T15:23:43.600Z · LW(p) · GW(p)
I'm not sure what you have in mind for the treatment of risk in finance. People will be concerned about risk in the sense that they compute a probability distribution of the possible future outcomes of their portfolio and try to optimize it to limit possible losses. Some institutional actors, like banks, have to compute a "value at risk" measure (the loss of value of the portfolio in the bottom 5th percentile) and have to put up collateral based on that.
But those are all things that happen before a utility computation; they are all consistent with valuing a portfolio based on the average of some utility function of its monetary value. Finance textbooks do not talk much about this; they just assume that investors have some preference about expected returns and variance in returns.
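For concreteness, here is a toy historical-simulation sketch of the "value at risk" measure described above (my own illustration; the function name is hypothetical):

```python
import random

def value_at_risk(returns, alpha=0.05):
    """Loss at the alpha-quantile of the return distribution (positive = loss)."""
    ordered = sorted(returns)
    return -ordered[int(alpha * len(ordered))]

# Toy example: 10,000 simulated daily portfolio returns.
random.seed(0)
daily_returns = [random.gauss(0.0005, 0.01) for _ in range(10_000)]
print(f"5% one-day VaR: {value_at_risk(daily_returns):.2%}")
```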
comment by michaelsullivan · 2016-06-06T14:55:25.293Z · LW(p) · GW(p)
So one of the major issues I've identified with why our gut feelings don't always match good expected utility models is that we don't live in a hypothetical universe. I typically use log utility of end-state wealth to judge bets where I am fairly confident of my probability distributions, as per Vaniver in another comment.
But there are reasons that even this doesn't really match with our gut.
Our "gut" has evolved to like truly sure things, and we have sayings like "a bird in the hand is worth two in the bush" partly because we are not very good at mapping probability distributions, and because we can't always trust everything we are told by outside parties.
When presented with a real-life game-show-style bet like this, except in very strange and arbitrary circumstances, we usually have reason to be more confident of our probability map for the sure bet than for the unsure one.
If someone has the $240 in cash in their hand, and says that if you take option A they will hand it to you right now, you can usually be pretty sure that if you take option A you will get the money -- there is no way they can deny you the money without it being obvious that they have plainly and simply lied to you and are completely untrustworthy.
OTOH, if you take the uncertain option -- how sure can you really be that the game is fair? How will the chance be determined? The person setting up the game understands this better than you, and may know tricks they are not telling you. If the real chance is much lower than promised, how will you be able to tell? If they have no intention of paying you for a "win", how could you tell?
The more uncertainty is promised, the more uncertainty we will and should have in our trust and other unknown considerations. That's a general rule of real life bets that's summed up more perfectly than I ever could have in this famous quote from Guys and Dolls:
"One of these days in your travels, a guy is going to show you a brand new deck of cards on which the seal is not yet broken. Then this guy is going to offer to bet you that he can make the jack of spades jump out of this brand new deck of cards and squirt cider in your ear. But, son, do not accept this bet, because as sure as you stand there, you’re going to wind up with an ear full of cider."
So for these reasons, in this gamble, where the difference in expected value is fairly small compared to the value of the sure win -- even though a log expected-utility curve says to take the risk at almost any reasonable level of rich-country wealth, unless you have a short-term liquidity crunch -- I'd probably take the $240. The only situations in which I would even consider taking the bet are ones where I was very confident in my estimate of the probability distribution (we're at a casino poker table and I have calculated the odds myself, for example), and where I either already have nearly complete trust or don't require significant trust in the other bettor/game master to make the numbers work.
In the hypothetical where we can assume complete trust and knowledge of the probability distribution, then yes I take the gamble. The reason my gut doesn't like this, is because we almost never have that level of trust and knowledge in real life except in artificial circumstances.
Replies from: Pimgd
↑ comment by Pimgd · 2016-06-06T15:05:46.881Z · LW(p) · GW(p)
I could pick the jack of spades out of a new deck of cards too; they tend to come pre-sorted. All it takes is studying another brand new deck of cards. (I'd have to do this studying before I would be able to pull this trick off, though.)
My guess is that you'd want to flip it backside-up and, from the then-bottom, pull 10 cards and flip the top one of those over. Then you'll have either a Jack (if it starts with an ace; I assume this to be very likely, 50%?), a 9 (if it starts with a placeholder card, like a rule card; I presume this to be probable... like, 20%?), a Queen (if the ace is last, a la Jack Queen King Ace; 20% for this one as well), an 8 (two jokers at the front of the deck? 8%?), a 7 (2 jokers AND a rule card?! 1%), or a 1% chance of me just being wrong entirely. It sounds like a weird distribution, but rule cards and jokers tend to be at the back, and an ace with fancy artwork tends to be at the front of the deck because it looks good.
A bit of googling reveals that a new deck usually starts with the Ace of Spades, so I'd guess that flipping the deck over, then drawing 10 cards from the bottom of the deck (what used to be the front) and then flipping the 10th card over will give you a Jack of Spades.
comment by ChristianKl · 2016-06-06T16:38:24.902Z · LW(p) · GW(p)
If you were actually offered this gamble in real life, which option would you take?
That reminds me of when I calculated the expected value of Ethereum around half a year ago and wrote on LW about it. I was pretty certain that there was a >50% chance of 10x returns. However, I was too lazy to actually go and buy Ethereum.
Was it because of risk aversion? I don't think so; it was pure laziness.
comment by DanArmak · 2016-05-26T18:39:07.541Z · LW(p) · GW(p)
Dollars are fungible. If you expect to sometimes trade off risk vs. expected value, then you can aggregate all such events and act as if choosing your strategy for many or all of them at once.
So even if you don't think you'll be offered this particular gamble, as long as you expect to encounter other vaguely similar decisions, you should choose more high-risk, high-reward options. Unless, as others have pointed out, you can't afford to take the financial risk in the short term.
Replies from: Lumifer
comment by Dagon · 2016-05-26T14:23:19.184Z · LW(p) · GW(p)
Iteration isn't the only thing you're changing - you're also increasing the top payout and smoothing the results to reduce the probability of getting nothing. This is going to trigger your biases differently.
As you note, money does not actually translate to utility very well. For a whole lot of people, the idea of $240 feels almost as good as the idea of $1000.
It would help to play with the problem and translate it to something more linear in utility. But such things are really hard to find; almost everything gets weighted weirdly by brains. In fact, if you phrase it as "certain death for 750 people vs. 75% chance of death for 1,000" you often get different answers than with the equivalent "certain saving of 250 people vs. 25% chance of saving 1,000".
IMO, part of instrumental rationality is recognizing that your intuitions of mapping outcomes to utility are wrong in a lot of cases. Money really is close to linear in value for small amounts, and you are living a suboptimal life if you don't override your instincts.
comment by ygrt · 2016-05-30T23:48:12.845Z · LW(p) · GW(p)
Option A could be justified if you take into account emotional utility: you might trade $10 of expected value to avoid the regret you would feel in the 75% of cases where option B pays nothing. This could hold even more true for larger sums.
By the way, I would choose option B, because I don't think this self-indulgent attitude is beneficial in the long run.
comment by Good_Burning_Plastic · 2016-05-28T10:51:11.020Z · LW(p) · GW(p)
Decision 5. Which of the following options do you prefer (choose one)? A. A sure gain of $240 B. 25% chance to gain $1,000 and 75% chance to gain nothing
This is mathematically (though not necessarily psychologically) equivalent to being given $240 for free and then being offered a bet that wins $760 with 25% chance and loses $240 with 75% chance. Let's leave the former aside. As for the latter, in many circumstances I would turn it down (even if I had several tens of thousand dollars to begin with) because I would take the other party's willingness to offer me that bet as evidence that there's something that they know and I don't.
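A quick check of the decomposition's arithmetic (mine, not the commenter's): the residual bet has expected value

$$0.25 \times \$760 - 0.75 \times \$240 = \$190 - \$180 = +\$10,$$

which is exactly option B's $250 expected value minus option A's sure $240.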
comment by TheAncientGeek · 2016-05-27T12:27:46.153Z · LW(p) · GW(p)
Utility isn't dollars, so people may not be equating dollars and utility well.
comment by OrphanWilde · 2016-05-26T20:29:28.564Z · LW(p) · GW(p)
Imagine a machine that created $100 every time you (and only you, you can't hire somebody to do it for you, or give it to somebody else) push a button; more, this is a magical $100 imbued with anti-munchkin charms (such that any investments purchased never gain value, any raw materials transformed or skills purchased remain at most the same value (so research is out), and any capital machines purchased with it provoke the same effect on any raw materials they themselves process, and so on and so forth; and no, burning the money or anything else for fuel doesn't subvert the charm, or even allow you to stop the heat death of the universe). Assuming you can push it three times per second, fifteen minutes of effort buys you a really nice car. An hour buys you a nice house. A solid workday buys you a nice mansion. A week, and you could have a decent private island. A few months and you might be able to have a house on the moon - but because of the anti-munchkin charms, it won't jump-start space exploration or the space industry or anything like that.
You have, in effect, as much personal utility as you want.
There are three things about this scenario:
First, you consume utility in this process, you don't create it. The anti-munchkin charms mean you are a parasite on society; the space travel you purchase is taking resources away from some other enterprise.
Second, even without the anti-munchkin charms with respect to capital goods, money is a barter token, not utility in and of itself. This machine cannot be used to make society better off directly; it merely allows you to redistribute resources in an inefficient manner. Insofar as you might be able to make society better off, that depends on you being able to distribute resources more efficiently than they already are -- and if you can recognize global market inefficiencies with sufficient clarity to correct them, there are more productive ways for you to spend your time than pushing a button that dispenses what is relatively chump change.
Third, the first thousand times you push the button will make a substantively larger difference in your life than the second thousand times. Each iterative $100 has diminishing returns over the previous $100, even without taking the inflationary effect this will have on society into account.
comment by Tommi_Pajala · 2016-05-26T07:49:29.571Z · LW(p) · GW(p)
Like GuySrinivasan said, it depends on the stakes. With the numbers you have, I'd take the risky bet based on the higher expected utility.
Why I should choose the risky option every time seems intuitive: even though I don't expect to face the exactly same problem very often, life in general consists of many gambles. And the sum of those decisions is going to be higher with the risky option chosen consistently every time.
comment by SarahSrinivasan (GuySrinivasan) · 2016-05-25T22:10:35.375Z · LW(p) · GW(p)
tl;dr: If you have less than ~$13k saved and have only enough income to meet expenses, picking B might legitimately make you sad even if it's correct.
I'd take B every time. But it depends on your financial situation. If the stakes are small relative to my reference wealth I maximize expected dollars, no regrets, and if you can't without regrets then maybe try playing poker for a while until you can be happy with good decisions that result in bad outcomes because of randomness. You may not make that exact decision again, but you make decisions like it plenty often.
If the stakes are large relative to my reference wealth then the situation changes for two reasons. One, I probably won't have the opportunity to take bets with stakes large relative to my wealth very often. Two, change in utility is no longer approximately proportional to change in dollars. Perhaps $240 is a non-trivial amount to you? For a hypothetical person living in an average American city with $50k saved and an annual income of $50k, an additional $240 is in no way life changing, so dU(+$240) ~= 0.24 dU(+$1000) and they should pick B. But with 1000x the stakes, it's entirely possible that dU(+$240,000) >> 0.25 dU(+$1,000,000).
Another way of looking at this is investing with Kelly Criterion (spherical cow but still useful), which says if you start with $50k and no other annual income and have the opportunity to periodically pay $24x for a 25% chance at $100x, you should start by betting ~$657 a pop for maximum growth rate, which is within shouting distance of the proposed $240 - and KC is well known to be too risky for individuals without massive wealth, as people actually have to spend money during low periods. This is proportional to wealth, so the breakeven wealth before you're sad that you have to bet so much at once ($240) is, under the too-risky KC, about $18k, which means actually it's probably like $10k-$15k.
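A sketch of that Kelly computation (my own; the bet nets $76 won per $24 staked, i.e. odds of 76/24 to 1):

```python
def kelly_fraction(p, b):
    """Optimal bankroll fraction for a bet paying b-to-1 with win probability p."""
    return (b * p - (1 - p)) / b

f = kelly_fraction(0.25, 76 / 24)
print(round(f, 5))        # ~0.01316 of bankroll per bet
print(round(50_000 * f))  # ~$658 bet from a $50k bankroll
print(round(240 / f))     # ~$18,240: the wealth at which $240 is the Kelly bet
```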
I have very little intuition for how this translates if, for example, you have heaps of student loan debt and are still trying to finish your education in hopes of obtaining a promised-but-who-knows well-paying job in a few years.
Replies from: Vaniver, Pimgd
↑ comment by Vaniver · 2016-05-26T13:30:04.904Z · LW(p) · GW(p)
If you have a log utility function (which the KC maximizes), you can calculate the breakeven starting wealth w by solving 0.25·log(w + 1000) + 0.75·log(w) = log(w + 240).
↑ comment by Pimgd · 2016-05-26T08:48:52.895Z · LW(p) · GW(p)
Pretty much this; if we adjust the numbers to "A: 20 cents or B: 25% chance for 100 cents" then I'd take the option B, but scale it up to "A: $200,000 or B: 25% chance for $1,000,000", and I'd take option A. Because $0 is 0 points, $1 million is something like 4 points, and $200,000 is about 2 points.
Human perception of scale for money is not linear (but not logarithmic either... not log base 10, anyway; maybe log something else). And since I'm running on this flawed hardware...
Some of it was pointed out already as "prospect theory" but that seems to be more about perception of probability rather than the perception of the actual reward.
Replies from: gjm
↑ comment by gjm · 2016-05-26T11:32:56.850Z · LW(p) · GW(p)
Log to base 10 and log to base anything_else differ only by a scale factor, so if you find log-to-base-10 unsatisfactory you won't like any other sort of log much better.
It sounds like you're suggesting that utility falls off slower than log(wealth) or log(income). I think there's quite good evidence that it doesn't -- but if you're looking at smallish changes in wealth or income then of course you can get that smaller falloff in change in utility. E.g., if you start off with $66,666 and gain $200k then your wealth has gone up by a factor of 4; if you gain $1M instead then your wealth has gone up by a factor of 16. If your logs are to base 2, you get exactly the numbers you described.
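A quick check of that arithmetic (my sketch):

```python
import math

wealth = 66_666
for gain in (200_000, 1_000_000):
    print(math.log2((wealth + gain) / wealth))
# ~2.0 and ~4.0, matching Pimgd's "2 points" and "4 points".
```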
The right figure to use for "wealth" there may well not be exactly your total net wealth; it should probably include some figure for "effective wealth" arising from your social context -- family and friends, government-funded safety net, etc. It seems like that should probably be at least a few tens of kilobucks in prosperous Western countries.