Risk aversion does not explain people's betting behaviours
post by Stuart_Armstrong · 2012-08-20T12:38:22.785Z · LW · GW · Legacy · 36 comments
Expected utility maximisation is an excellent prescriptive decision theory. It has all the nice properties that we want and need in a decision theory, and can be argued to be "the" ideal decision theory in some senses.
However, it is completely wrong as a descriptive theory of how humans behave. Those on this list are presumably aware of oddities like the Allais paradox. But we may retain the notion that expected utility still has some descriptive uses, such as modelling risk aversion. The story here is simple: each subsequent dollar gives less utility (the utility-of-money curve is concave), so people would need a premium to accept a deal where they have a 50-50 chance of gaining or losing $100.
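To put a rough number on this story, here is a minimal sketch (an illustration of mine, assuming a logarithmic utility of total wealth, not something from the original argument) of the premium such a curve demands on a 50-50 gain/lose $100 bet:

```python
import math

# A minimal illustration (assumed log utility; my numbers, not the post's):
# how big a premium does a concave curve demand on a 50-50 win/lose $100 bet?
wealth = 20_000
U = math.log

# Certainty equivalent of the gamble: the sure wealth with the same expected utility.
expected_utility = 0.5 * U(wealth + 100) + 0.5 * U(wealth - 100)
certainty_equivalent = math.exp(expected_utility)

print(wealth - certainty_equivalent)  # ~0.25: a premium of about 25 cents
```

The premium comes out to about a quarter - for small bets, any smooth concave curve is nearly linear, which already hints at the problem below.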
As a story or mental image, it's useful to have. As a formal model of human behaviour on small bets, it's spectacularly wrong. Matthew Rabin showed why. If people are consistently slightly risk averse on small bets and expected utility theory is approximately correct, then they have to be massively, stupidly risk averse on larger bets, in ways that are clearly unrealistic. Put simply, the small bets behaviour forces their utility to become far too concave.
For illustration, let's introduce Neville. Neville is risk averse. He will reject a single 50-50 deal where he gains $55 or loses $50. He might accept this deal if he were rich enough, and felt rich - say, if he had $20 000 in capital, he would accept it. I hope I'm not painting a completely unbelievable portrait of human behaviour here! And yet expected utility maximisation then predicts that if Neville had fifteen thousand dollars ($15 000) in capital, he would reject a 50-50 bet that either lost him fifteen hundred dollars ($1 500), or gained him a hundred and fifty thousand dollars ($150 000) - a ratio of a hundred to one between gains and losses!
To see this, first define the marginal utility at $X (MU($X)) as Neville's utility gain from one extra dollar (in other words, MU($X) = U($(X+1)) - U($X)). Since Neville is risk averse, MU($X) ≥ MU($Y) whenever Y > X. Then we get the following theorem:
- If Neville has $X and rejects a 50-50 deal where he gains $55 or loses $50, then MU($(X+55)) ≤ (10/11)*MU($(X-50)).
This theorem is a simple result of the fact that U($(X+55))-U($X) must be at least 55*MU($(X+55)) (each dollar from the Xth up to the (X+54)th has marginal utility at least MU($(X+55))), while U($X)-U($(X-50)) must be at most 50*MU($(X-50)) (each dollar from the (X-50)th up to the (X-1)th has marginal utility at most MU($(X-50))). Since Neville rejects the deal, U($X) ≥ 1/2(U($(X+55)) + U($(X-50))), hence U($(X+55))-U($X) ≤ U($X)-U($(X-50)), hence 55*MU($(X+55)) ≤ 50*MU($(X-50)) and the result follows.
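For readers who want to check this numerically, here is a small sketch (my own, not Rabin's proof) that builds an arbitrary concave utility over whole-dollar wealth levels and confirms the bound at every wealth level where the bet is rejected:

```python
import random

# A numeric sanity check of the theorem (a sketch, not part of the original post):
# build a random concave utility and verify that wherever the +$55/-$50 bet is
# rejected, MU(X+55) <= (10/11) * MU(X-50).
random.seed(0)
TOP = 25_000

mu = [1.0]
for _ in range(TOP):
    mu.append(mu[-1] * random.uniform(0.99, 1.0))  # non-increasing marginal utility

U = [0.0]
for m in mu:
    U.append(U[-1] + m)  # U[x] = utility of holding x dollars

def MU(x):
    return U[x + 1] - U[x]

def rejects_bet(x):
    """True if an expected-utility maximiser with $x turns down 50-50 +$55/-$50."""
    return U[x] >= 0.5 * (U[x + 55] + U[x - 50])

for x in range(50, TOP - 55):
    if rejects_bet(x):
        assert MU(x + 55) <= (10 / 11) * MU(x - 50) + 1e-12
print("bound holds at every wealth level where the bet is rejected")
```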
Hence if we scale Neville's utility so that MU($15000)=1, we know that MU($15105) ≤ 10/11, MU($15210) ≤ (10/11)^2, MU($15315) ≤ (10/11)^3, ... all the way up to MU($19935) = MU($(15000 + 47*105)) ≤ (10/11)^47. Summing the series of MU's from $15000 to $(15000+48*105) = $20040, we can see that
- U($20040) - U($15000) ≤ 105*(1+(10/11)+(10/11)^2+...+(10/11)^47) = 105*(1-(10/11)^48)/(1-(10/11)) ≈ 1143.
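For those who want to verify the figure, a one-line check of that geometric sum:

```python
# Reproducing the upper bound above (a check, not new content):
print(round(105 * sum((10 / 11) ** k for k in range(48))))  # 1143
```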
One immediate result of that is that Neville, on $15000, will reject a 50-50 chance of losing $1144 versus gaining $5000. But it gets much worse! Let's assume that the bet is a 50-50 bet which involves losing $1500 - how far up in the benefits do we need to go before Neville will accept this bet? Now the marginal utilities below $15000 are bounded below, just as those above $15000 are bounded above. So summing the series down to $(15000 - 14*105) = $13530, which is still above $(15000 - 1500) = $13500:
- U($15000) - U($13500) ≥ 105*(1+(11/10)+...+(11/10)^13) = 105*(1-(11/10)^14)/(1-(11/10)) ≈ 2937.
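And the corresponding check for this lower bound:

```python
# Reproducing the lower bound above:
print(round(105 * sum((11 / 10) ** k for k in range(14))))  # 2937
```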
So gaining $5040 from $15000 will net Neville (at most) 1143 utilons, while losing $1500 will lose him (at least) 2937. The marginal utility for dollars above the 20040th is at most (10/11)^47 < 0.012. So we need to add at least (2937-1143-1)/0.012 ≈ 149416 extra dollars before Neville would accept the bet. So, as was said,
- If Neville had fifteen thousand dollars ($15 000), he would reject a 50-50 bet that either lost him fifteen hundred dollars ($1 500), or gained him a hundred and fifty thousand dollars ($150 000).
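To make the arithmetic of the last few paragraphs easy to check, here is a short recap sketch (using the rounded bound of 0.012 from above):

```python
# Reproducing the arithmetic of the last few paragraphs:
gain_bound = 1143      # most utility Neville gains from winning $5,040
loss_bound = 2937      # least utility he loses from losing $1,500
mu_cap = 0.012         # upper bound on MU beyond $20,040, since (10/11)**47 < 0.012

extra = (loss_bound - gain_bound - 1) / mu_cap
print(extra)                    # ~149,416.7 extra dollars needed, as above
print(5040 + extra > 150_000)   # True: even +$150,000 against -$1,500 is refused
```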
These bounds are not sharp - the real situation is worse than that. So expected utility maximisation is not a flawed model of human risk aversion on small bets - it's a completely ridiculous model of human risk aversion on small bets. Other variants such as prospect theory do a better job at the descriptive task, though as usual in the social sciences, they are flawed as well.
36 comments
Comments sorted by top scores.
comment by roystgnr · 2012-08-20T17:52:51.502Z · LW(p) · GW(p)
What always gets me in experiments offering gambles is the implicitly unquestioned assumption that it's rational for a subject to assume that the claimed odds of a bet are in fact the actual odds of the bet. That would certainly make the analysis much simpler, but a tempting simplification isn't necessarily an accurate one.
Just because we focus on the likely irrationality of Neville refusing to bet $50 at purportedly even odds against $55, and ignore the similar irrationality of an experimenter offering to bet $55 at even odds against $50, that doesn't mean Neville isn't updating his beliefs based on the experimenter's behavior. If Neville then stubbornly assigns expected probabilities other than .5 and .5 to the bet outcomes, must he be an irrational person who is doomed to forgo a bounty of cash from generous economics researchers, or might he be a rational person who is merely inducting properly from his prior observations of three-card monte tables and extended warranty offers?
Replies from: Kindly, army1987
↑ comment by Kindly · 2012-08-20T18:06:47.531Z · LW(p) · GW(p)
When I read "Neville [...] will reject a single 50-50 deal where he gains $55 or loses $50" the first thing I do is ask myself: "Can I imagine myself rejecting a similar 50-50 deal?" Because if I can't imagine that, then the thought experiment can't possibly apply to the way I think about money.
In this case, though, I have no trouble imagining this. I have some reservations about $20000 being the cutoff, but I'm willing to accept that for now to see the math; also, I believe in geometric progressions, so I suspect the cutoff doesn't matter too much.
If hypothetical Neville refused the bet because he suspects it's rigged, that doesn't affect me. When I checked Neville's refusal against my own intuitions, I accepted the 50/50 odds as given. I suppose it's possible that I'm subconsciously being suspicious of the odds, and that is leading me to be risk averse. Is that what you're suggesting?
Replies from: roystgnr, Stuart_Armstrong
↑ comment by roystgnr · 2012-08-21T19:30:59.155Z · LW(p) · GW(p)
Subconscious suspicion is one possibility; evolution only cares about your behavior, not so much about how much introspection you did to get there.
It's certainly not the only possibility, though. Another example: Reduce the bet to 55 cents vs 50 cents and I'd imagine refusing it myself, for the obvious reason that the expected gain is grossly less than the transaction costs of stopping to think about the bet, look for possible "catches", flip the coin, and collect any winnings. There's probably other rational reasons to be "bet averse" that I haven't thought of, too.
↑ comment by Stuart_Armstrong · 2012-08-21T09:38:17.114Z · LW(p) · GW(p)
I have some reservations about $20000 being the cutoff, but I'm willing to accept that for now to see the math; also, I believe in geometric progressions, so I suspect the cutoff doesn't matter too much.
If you remove the cutoff, then Neville will not accept 50-50 odds of losing $1500 or winning any amount of money.
Replies from: Kindly
↑ comment by A1987dM (army1987) · 2012-08-21T09:53:05.931Z · LW(p) · GW(p)
What always gets me in experiments offering gambles is the implicitly unquestioned assumption that it's rational for a subject to assume that the claimed odds of a bet are in fact the actual odds of the bet. That would certainly make the analysis much simpler, but a tempting simplification isn't necessarily an accurate one.
Yes. I think I already mentioned that the real reason why I wouldn't take a 50% chance of winning $110 and 50% chance of losing $100 is that if someone is willing to offer such a bet to me, then they most likely know something about the coin to be flipped that I don't. If I was offered such a bet in a way extremely hard to cheat at (say, using random.org), I would happily take it -- but I don't expect anyone doing that, anyway.
comment by gjm · 2012-08-20T21:26:08.420Z · LW(p) · GW(p)
The following position seems fairly plausible to me. (1) Diminishing marginal utility is the only good strong reason for risk aversion; that is, the only thing that can justify a large difference between the value of $X and half the value of $2X. (2) But there are some good but weak reasons, which can't produce a large difference -- but if X is small then they may justify a fairly substantial relative difference. (3) Something a bit like actual human risk-averse behaviour can be justified by the combination of (1) when large sums are at stake and (2) when small sums are at stake.
(This is very strongly reminiscent of what happens with payday loan companies, which make small short-term loans with charges that are not very large in absolute terms but translate to absolutely horrifying numbers if you convert them to APRs; this isn't only because payday lenders are evil sharks (though they might be) and payday loans are really high risk (though they might be) but also because some of the cost of lending money is more or less independent of the size and duration of the loan. If I lend you $100 for a day and charge $1 for the effort of keeping track of what I'm owed by whom and when, that's an APR of over 3000%, but it's not obviously unreasonable even so: most of that $1 isn't really interest as such.)
comment by Irgy · 2012-08-21T02:19:46.411Z · LW(p) · GW(p)
So in other words, people's actual behaviour does not fit a (particular) simple mathematical rational model? Why is this surprising to anyone? Non-linear utility is a rationalisation and broad justification of risk aversion, but are there really people who think it's an accurate descriptive model of actual human behaviour? The whole concept of trying to fit a rational model to human behaviour seems pretty optimistic to me.
I also take issue with this quote: "[expected utility maximisation is] a completely ridiculous model of human risk aversion on small bets" You've shown that only by extrapolating behaviour on small bets to behaviour on large bets. To me that's similar to saying "Newtonian mechanics is a completely ridiculous model of ordinary scale physics" by extrapolating its behaviour to relativistic scales. Whether it's a good model on small bets is a function of its behaviour on small bets, not its extrapolated behaviour on large bets. I know I at least would use a completely different mindset on large bets than I would on small anyway, and would make no claim of the two being consistent under any single model.
That said, I agree with the conclusion if not the method. I would if anything be more risk averse on large bets not less. Risk aversion on small bets seems irrational to me in the first place. Utility should be approximately linear at small scales. Then again, I would take the 50-50 chance at $55 over $50 in the first place so maybe I'm not the sort of person you're talking about.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2012-08-21T09:51:16.909Z · LW(p) · GW(p)
So in other words, people's actual behaviour does not fit a (particular) simple mathematical rational model? Why is this surprising to anyone?
You haven't met many economists, have you? :-)
comment by Unnamed · 2012-08-20T19:30:32.192Z · LW(p) · GW(p)
The key assumption that leads to problems in trying to descriptively model people's decisions is just that people have a single consistent utility function, which is defined in terms of the amount of money that they have.
If someone starts with $18,000 and then gets $40, the assumption is that the benefit can be expressed as U(18,040)-U(18,000). Or, in words, the person thinks: getting $40 brought me from a world where I have $18,000 to a world where I have $18,040. I value a world where I have $18,000 at this amount, and I value a world where I have $18,040 at that amount, and the benefit of getting the $40 is just the difference between those two.
In this model, there is a single curve that you can plot of how much you value a world where you had $X. Think about what that curve would look like, with X ranging from 10,000 to 100,000. In nearly every plausible case, that curve will be close to linear on a small scale (in the range of 10s or 100s) almost everywhere. There may be some curvature to it (perhaps your curve resembles f(x)=log(x)), but if you zoom in then that will mostly go away and a straight line will give you a good fit (e.g., if you are looking at changes in x that are 2 orders of magnitude smaller than x, then a log function will look pretty much linear). In a few special cases, there may be a large sudden jump in the curve, if there is some specific thing that you really want to buy and there is a large benefit to suddenly being able to afford it, but those cases are rare. For the most part, U(x) will be relatively smooth, and it will be growing perceptibly over the whole range (even if your utility function is bounded, it's not like it will be almost at that bound before you even have $100,000).
And if your curve is approximately linear over small scales, then expected utility theory basically reduces to expected value theory when the stakes are small (e.g., a 50% chance of gaining $40 has an EV of $20). If U(x) is close to linear from x=18,000 to x=18,040, then U(18,020) must be about halfway in between U(18,000) and U(18,040). If you have $18,000, and are basing your decisions on a consistent utility function U(x), then for pretty much any plausible U(x) you'll prefer a 51% chance of gaining $40 to a 100% chance of gaining $20 (unless you just happen to have one of those rare big jumps in U(x) between $18,000 and $18,020 - perhaps you really really want something that costs $18,010?). The expected value is 2% higher ($20.40 vs. $20), and it's not plausible that your U(x) would be so sharply curved that you'd be willing to give up 2% EV over such a narrow range of x (it's just a 0.2% increase in x from $18,000 to $18,040).
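A quick hypothetical check of this, assuming U(x) = log(x) (a concave curve not taken from the comment itself):

```python
import math

# A quick check of the claim above, assuming a log utility of total wealth:
w = 18_000
U = math.log

eu_gamble = 0.51 * U(w + 40) + 0.49 * U(w)   # 51% chance of gaining $40
eu_sure = U(w + 20)                           # 100% chance of gaining $20

print(eu_gamble > eu_sure)  # True: the curvature at this scale is far too mild to matter
```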
Probably the most important feature of prospect theory is that it does away with this assumption of a single consistent utility function, and says that people value gambles based on the change from the status quo (or occasionally some other reference point, but we'll ignore that wrinkle here). So people think about the value of gaining $40 as U(+40) - it's whatever I have now plus forty dollars. The gamble in the previous paragraph now involves comparing U(+0), U(+20), and U(+40), rather than U(18,000), U(18,020), and U(18,040). It is no longer true that the scale of the change is small relative to the total amount, because the scale of the change sets the scale. So if there is any nonlinear curvature in your utility function, we can't get rid of it by zooming in to the point where we can use linear approximations, because no matter what we'll be looking at the function from U(+0) to U(+x). The utility function is at its curviest (least linear) near zero (think about log(x), or even sqrt(x)), and every change is defined relative to the status quo U(+0), so the curviest part of the curve is influencing every decision.
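For concreteness, here is a minimal sketch of that reference-dependent valuation, using commonly cited Tversky-Kahneman value-function parameters (illustrative values, not taken from this comment) and ignoring probability weighting:

```python
# A minimal sketch of reference-dependent valuation (assumed parameters):
ALPHA, LAMBDA = 0.88, 2.25

def v(change):
    """Value of a gain or loss measured from the status quo."""
    return change ** ALPHA if change >= 0 else -LAMBDA * (-change) ** ALPHA

# The 50-50 gain $55 / lose $50 bet, evaluated as changes rather than wealth levels:
print(0.5 * v(55) + 0.5 * v(-50))  # negative, so the bet is refused at any wealth
```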
Replies from: Kindly
↑ comment by Kindly · 2012-08-20T19:38:48.154Z · LW(p) · GW(p)
An assumption here that needs to be abandoned in order to have an accurate descriptive model of human decision making is that people have a single consistent utility function, which is defined in terms of the amount of money that they have.
That wasn't an assumption to be abandoned, that was the beginning of a proof by contradiction.
Replies from: Unnamed
comment by Vaniver · 2012-08-21T19:21:45.074Z · LW(p) · GW(p)
That paper has shown up here before, and I still don't like it.
Basically, the way that he presents his 'aversion' criterion may sound innocuous but it has really pernicious implications. Rabin thinks the pernicious implications mean he's poked a hole in risk aversion - but instead he's just identified an incredibly terrible way to elicit aversion parameters. If any decision analyst was told by their client that they'd turn down a -100/+105 bet with $345,000 in the bank, they'd start a long talk designed to make the client comfortable with the mathematics of decision-making under uncertainty, not take that as a reflectively endorsed preference.
Put another way, I don't think that Neville as stated actually exists (or is sane if he does exist). He might express those preferences under the framing of {.5 -50; .5 +55}, but I don't think he would reflectively endorse them under the framing of {.5 14950, .5 15055}, and real bets may be difficult to separate from emotional or status effects that invalidate the idea of preferences only being a function of wealth level rather than wealth history (which is a very different sort of aversion than utility functions that are concave in money).
Like you say in the conclusion, prospect theory is a better attempt to understand descriptive decision-making, but concave utility functions are a useful prescriptive tool.
comment by prase · 2012-08-20T18:21:22.313Z · LW(p) · GW(p)
This theorem is a simple result of the fact that U($(X+55))-U($X) must be greater than 55*MU($(X+54)) (each dollar up from the Xth up to the (X+54)th must have marginal utility at least MU($(X+55))), while U($X)-U($(X-50)) must be less than 50*MU($(X-100)) (each dollar from the (X-50)th up to (X-1)th must have marginal utility at most MU($(X-50))).
Are you sure you didn't intend to have (X+55) in the formula just after "greater than" instead of (X+54) and (X-50) in the last formula before the final parenthetical instead of (X-100)?
Also I think the formulas would be more readable if you omitted the dollar sign.
Replies from: army1987, Stuart_Armstrong
↑ comment by A1987dM (army1987) · 2012-08-21T10:00:05.557Z · LW(p) · GW(p)
Also I think the formulas would be more readable if you omitted the dollar sign.
The best idea IMO is having it only with numbers, e.g. U(X + $54).
↑ comment by Stuart_Armstrong · 2012-08-20T22:39:06.356Z · LW(p) · GW(p)
Good catch and corrected!
comment by A1987dM (army1987) · 2012-08-21T22:34:58.433Z · LW(p) · GW(p)
If people are consistently slightly risk averse on small bets and expected utility theory is approximately correct, then they have to be massively, stupidly risk averse on larger bets, in ways that are clearly unrealistic.
Clearly unrealistic? EY (IIRC) once mentioned a study where a largish fraction of respondents explicitly preferred 100% probability of getting $500 to 15% probability of getting $1,000,000.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2012-08-21T22:51:20.486Z · LW(p) · GW(p)
For that strong claim, I think you'll need to give a reference.
Replies from: pengvado
↑ comment by pengvado · 2012-08-22T03:31:48.072Z · LW(p) · GW(p)
Shane Frederick, 2005, "Cognitive Reflection and Decision Making", Journal of Economic Perspectives.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2012-08-22T09:18:35.425Z · LW(p) · GW(p)
Very interesting (I wish they'd gone into more detail about that particular choice!). Though it doesn't change the fact that diminishing marginal utility doesn't explain betting behaviour for most people; I know enough people who'd reject the 50-50 gamble on +55, -50, but accept the higher gamble.
comment by CronoDAS · 2012-08-20T22:49:30.042Z · LW(p) · GW(p)
Aren't humans not so much risk-averse as loss-averse?
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-08-21T00:13:15.929Z · LW(p) · GW(p)
Is there a difference, given that there are rather few win-averse people?
Replies from: Stuart_Armstrong, CronoDAS
↑ comment by Stuart_Armstrong · 2012-08-21T09:53:13.679Z · LW(p) · GW(p)
You can distinguish the two by offering people choices between a sure $50 and a 50-50 bet paying $0 or $100, and see if their behaviour differs from bets with losses.
↑ comment by CronoDAS · 2012-08-22T05:27:59.040Z · LW(p) · GW(p)
Given a choice between losing $50 or a 50% chance of losing $100, a risk averse person loses the $50 and the loss-averse person takes the 50% chance of losing $100.
Given a choice between gaining $50 or a 50% chance of gaining $100, a risk averse person chooses to gain the $50 and the loss-averse person doesn't care which option he gets.
comment by Kindly · 2012-08-20T18:00:31.573Z · LW(p) · GW(p)
Thank you! I've heard this argument vaguely alluded to before, so I'm very happy to see a post about it. I'm still not sure what I think about it, though, because decreasing marginal utility felt like it was the only good reason to be risk averse. So how am I supposed to model myself now?
Replies from: gjm
↑ comment by gjm · 2012-08-20T21:06:52.498Z · LW(p) · GW(p)
If decreasing marginal utility is the only good reason to be risk averse but you're more risk averse than it can justify, then you should (1) model yourself in some empirical way that gives a reasonable description of your behaviours (you probably have such a model already, albeit implicit) and (2) try to be less risk averse.
Replies from: entirelyuseless
↑ comment by entirelyuseless · 2016-06-06T22:35:29.234Z · LW(p) · GW(p)
Real people are risk averse not only because of decreasing marginal utility, but also because they see "I had the choice to refuse a bet but did not take that choice, and consequently I lost money through my own choice," as an additional bad thing distinct from losing money.
comment by Viliam_Bur · 2012-08-21T07:25:18.866Z · LW(p) · GW(p)
Shorter version:
Joe has just as much food as he needs, though he would enjoy having some extra. Professor Mephistopheles offers him a bet, where Joe can flip a coin, and if he wins, he will have 3 times as much food as he needs, but if he loses, he will die from hunger. Joe refuses the bet.
Professor Mephistopheles publishes a scientific article about Joe's irrationality: he seems to care mostly about his food, and yet he refuses a bet which increases his expected amount of food. A new scientific controversy is born...
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2012-08-21T09:48:57.961Z · LW(p) · GW(p)
No, what you are presenting there is the traditional theory. Joe's behaviour is perfectly explained by him being an expected utility maximiser - you've shown that his utility function is concave in food. People haven't thought that Joe's behaviour is irrational, ever since Bernoulli.
What I'm saying is that that traditional theory is insufficient to explain people's betting behaviour. i.e. the monetary variants of "dying from hunger" and "having 3 times more than he needs" do not explain human behaviour.
Replies from: Viliam_Bur
↑ comment by Viliam_Bur · 2012-08-21T12:50:13.432Z · LW(p) · GW(p)
I am probably missing something, but what exactly is the difference between money and food, if Joe in my example uses money to buy food?
For an average Joe, the utility function is also concave in money, isn't it? A smart person could use one dollar to buy themselves food, and another dollar to start a new software company, making them a billionaire in a few years... but for the average Joe, the difference between $1 and $2 is probably a difference between basic food, and basic food plus a cookie.
Replies from: prase, Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2012-08-21T15:57:42.451Z · LW(p) · GW(p)
What I'm saying is that you've grasped how risk aversion is traditionally modelled: though the gain in terms of units is the same as the loss, the consequences of losing are much worse than the consequences of winning (ie being homeless versus owning a second home).
My point is that if you use that model to explain why people turn down small bets (50-50 on winning $55, losing $50, when the people are reasonably well off), then the model predicts stupidly risk-averse behaviour for larger bets, which doesn't correspond to what people do in practice.
comment by timtyler · 2012-08-21T00:03:49.040Z · LW(p) · GW(p)
Contrary to this post, utility-based models of humans work fine. As explained here, you can construct utility-based models to represent any computable agent.
Of course you may have to account for humans caring about things other than money.
Replies from: Stuart_Armstrong, None
↑ comment by Stuart_Armstrong · 2012-08-21T09:43:13.009Z · LW(p) · GW(p)
You can also construct deontological models for any utility-based agent (the right action is always that which maximises utility). Virtue ethics is a bit hazier, but you can certainly have ethics where maximising utility is virtuous.
And when people purport to explain human behaviour on small bets through risk aversion on a utility function, they do not say "here is a two billion line utility function that encodes behaviour" but "people have utility functions concave in money".
↑ comment by [deleted] · 2012-08-21T00:52:44.271Z · LW(p) · GW(p)
Just because you can construct a utility-based model to represent an agent doesn't mean that the model so constructed is at all useful or at all informative about what is actually going on.
To get the utility function for an arbitrary agent, as described in the paper your link links to, you have to know what the agent would do in any possible situation. At which point, there's nothing left that the utility function can tell you.
Replies from: timtyler
↑ comment by timtyler · 2012-08-21T09:49:21.230Z · LW(p) · GW(p)
So, to recap, the claim in this post was that "expected utility maximization" is "completely wrong as a descriptive theory of how humans behave".
It seems like an ungrounded claim to me. It is not the application of expected utility maximization to humans that is wrong, but the application of it to humans in this post.
I don't agree with the claim that general-purpose utility-based models are not "useful" or "informative". One point of them is that they allow comparison of the goals of arbitrary agents within a common framework. If you don't yet see how that might be useful, you should probably think about the issue some more.
In this example, the utility-based model shows that humans are doing something other than maximizing their future wealth. What that is is not immediately obvious - but they may, for example, be treating small transactions as a means of signalling to others about their behaviour when stakes are larger. Or they may be more interested in how many times they gain. What it doesn't mean is that they are not acting as expected utility maximizers.