Prospect Theory: A Framework for Understanding Cognitive Biases
post by Scott Alexander (Yvain) · 2011-07-10T05:20:26.365Z · LW · GW · Legacy · 50 comments
Related to: Shane Legg on Prospect Theory and Computational Finance
This post is on prospect theory partly because it fits the theme of replacing simple utility functions with complicated reward functions, but mostly because somehow Less Wrong doesn't have any posts on prospect theory yet and that needs to change.
Kahneman and Tversky, the first researchers to identify and rigorously study cognitive biases, proved that a simple version of expected utility theory did not accurately describe human behavior. Their response was to develop prospect theory, a model of how people really make decisions. Although the math is less elegant than that of expected utility, and the shapes of the curves have to be experimentally derived, it is worth a look because it successfully predicts many of the standard biases.
[Figure: the prospect theory value function (left-hand graph) and probability weighting function (right-hand graph). Source: Wikipedia]
A prospect theory agent tasked with a decision first sets it within a frame with a convenient zero point, allowing em to classify the results of the decision as either losses or gains. Ey then computes a subjective expected utility, where the subjective expected utility equals the subjective value times the subjective probability. The subjective value is calculated from the real value using a value function similar to the one on the left-hand graph, and the subjective probability is calculated from the real probability using a weighting function similar to the one on the right-hand graph.
Clear as mud? Let's fill some numbers into the functions - the exact assignments don't really matter as long as we capture the spirit of where things change steeply versus slowly - and run through an example.
Imagine a prospect theory agent - let's call him Prospero - trying to decide whether or not to buy a hurricane insurance policy costing $5000/year. Prospero owns assets worth $10,000, and estimates a 50%/year chance of a hurricane destroying his assets; to make things simple, he will be moving in one year and so need not consider the future. Under expected utility theory, he should feel neutral about the policy.
Under prospect theory, he first sets a frame in which to consider the decision; his current state is a natural frame, so we'll go with that.
We see on the left-hand graph that an objective $10,000 loss feels like a $5,000 loss, and an objective $5000 loss feels like a $4000 loss. And we see on the right-hand graph that a 50% probability feels like a 40% probability.
Now Prospero's choice is a certain $4000 loss if he buys the insurance, versus a 40% chance of a $5000 loss if he doesn't. Buying has a subjective expected utility of -$4000; not buying has a subjective expected utility of -$2000. So Prospero decisively rejects the insurance.
But suppose Prospero is fatalistic; he views his assets as already having been blown away. Here he might choose a different frame: the frame in which he starts with zero assets, and anything beyond that is viewed as a gain.
Since the gain half of the value function levels off more quickly than the loss half, $5000 is now subjectively worth $3000, and $10000 is now subjectively worth $3500.
Here he must choose between a certain gain of $5000 and a 50% chance of gaining $10000. Expected utility gives the same result as before, obviously. In prospect theory, he chooses between a certain subjective gain of $3000 and a 40% chance of gaining $3500. The insurance gives him subjective expected utility of $3000, and rejecting it gives him subjective expected utility of $1400.
All of a sudden Prospero wants the insurance.
We notice the opposite effect if there is only a 1% chance of a hurricane. The insurance salesman lowers his price to $100, so that the insurance option stays neutral in expected value.
But subjective probability rises very quickly near zero, so a 1% objective chance may feel like a subjective 10% chance. In the first frame, Prospero must now choose between a certain objective loss of $100 - which feels like a $300 loss, since the value function is steepest near zero - and a 1% objective chance (subjectively 10%) of losing $10,000. The subjective expected utilities are -$300 if he buys and -$500 if he rejects, so he buys the insurance. When we change the risk of hurricane from 50% to 1%, then even though we reduce the price of the insurance by an exactly corresponding amount, Prospero's decision switches from not buying to buying.
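To make the bookkeeping explicit, here is a minimal sketch in Python that hard-codes the subjective values and probabilities stipulated above - illustrative numbers only, not fitted prospect theory parameters - and reproduces all three of Prospero's decisions:

```python
# Minimal sketch of Prospero's decisions. The mappings below are the
# illustrative numbers stipulated in the text, not fitted parameters.

subjective_value = {            # objective dollars -> subjective dollars
    -10_000: -5_000, -5_000: -4_000, -100: -300,
    5_000: 3_000, 10_000: 3_500,
}
subjective_prob = {1.0: 1.0, 0.5: 0.4, 0.01: 0.10}   # objective -> subjective

def seu(prospects):
    """Subjective expected utility of (objective probability, outcome) pairs."""
    return sum(subjective_prob[p] * subjective_value[x] for p, x in prospects)

# Frame 1 (zero point = current assets), 50% hurricane, $5000 premium:
print(seu([(1.0, -5_000)]), seu([(0.5, -10_000)]))   # -4000 vs -2000: reject

# Frame 2 (fatalistic zero point: everything is a gain; the $0 outcome
# contributes nothing either way):
print(seu([(1.0, 5_000)]), seu([(0.5, 10_000)]))     # 3000 vs 1400: buy

# Frame 1 again, 1% hurricane, $100 premium:
print(seu([(1.0, -100)]), seu([(0.01, -10_000)]))    # -300 vs -500: buy
```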
Let's see how many previously discussed biases we can fit into this model.
Prospero's change from rejecting the insurance when framed as gains, to buying it when framed as losses, directly mirrors the change in preferred survival strategies mentioned in Circular Altruism.
The necessity of frame-shifting between different perceptions of losses also produces the Sunk Cost Fallacy.
The greater steepness of the value function with losses as opposed to gains is not even an explanation for, but merely a mathematical representation of, loss aversion.
The leveling off of the value function that turned the huge objective difference between +$5000 and +$10000 into the teensy little subjective difference between +$3000 and +$3500 mirrors the scope insensitivity under which people show about the same level of interest in proposals to save endangered birds whether a thousand, ten thousand, or a hundred thousand birds are involved.
It may not be an official bias, but the "but there's still a chance, right" outlook looks a lot like the sharply rising curve of the subjective probability function near zero.
And although it is not immediately obvious from the theory, some people want to link the idea of a frame to priming and anchoring-adjustment, on the grounds that when a suitable reference frame doesn't exist any primed stimulus can help establish one.
And now, the twist: prospect theory probably isn't exactly true. Although it holds up well in experiments where subjects are asked to make hypothetical choices, it may fare less well in the rare experiments where researchers can afford to offer subjects choices for real money (this isn't the best paper out there, but it's one I could find freely available).
Nevertheless, prospect theory seems fundamentally closer to the mark than simple expected utility theory, and if any model is ever created that can explain both hypothetical and real choices, I would be very surprised if at least part of it did not involve something looking a lot like Kahneman and Tversky's model.
50 comments
Comments sorted by top scores.
comment by Kaj_Sotala · 2011-07-09T10:12:08.975Z · LW(p) · GW(p)
A prospect theory agent tasked with a decision first sets it within a frame with a convenient zero point, allowing em to classify the results of the decision as either losses or gains. Ey then computes a subjective expected utility, where the subjective expected utility equals the subjective value times the subjective probability.
If a person objects to singular they, I'm having a hard time seeing them not objecting to this. So why not just use singular they? It'd make this a lot more readable.
↑ comment by syllogism · 2011-07-10T06:37:09.227Z · LW(p) · GW(p)
I've been meaning to make a post about this small procedural note. Singular they has a long history in English as a gender-neutral third person singular pronoun. Languages tend to resist the introduction of new pronouns, as they're "closed class" --- part of the language's grammar. It's especially problematic that nobody can even agree on which invented pronoun to get behind!
Can't we all just use singular they? It's much nicer.
↑ comment by Scott Alexander (Yvain) · 2011-07-10T06:58:44.859Z · LW(p) · GW(p)
Okay, okay, I'll use singular they if you all promise that the first time someone pompously chides me for using "they" in the singular, you'll give them at least as much trouble as you're giving me for using gender-neutral third person pronouns.
↑ comment by Will_Newsome · 2011-07-10T10:04:59.064Z · LW(p) · GW(p)
Indeed I shall so chide. It's not so much that "ey" and the like bother me, it's mostly that Less Wrong might become one of the first communities where people can use singular they without flinching due to vague anticipation of undue contempt. Such trivial inconveniences add up very quickly for a certain kind of mind, like mine.
↑ comment by A1987dM (army1987) · 2012-02-15T19:36:50.179Z · LW(p) · GW(p)
Well, I don't anticipate undue contempt when using the singular they on Language Log, either. :-)
↑ comment by syllogism · 2011-07-10T08:34:06.691Z · LW(p) · GW(p)
Deal. I'll even pull rank with my formal qualifications on English grammar, should they care about that.
↑ comment by Scott Alexander (Yvain) · 2011-07-12T20:45:47.476Z · LW(p) · GW(p)
Question for the formally qualified grammarian: When using singular "they", which is correct?
"When a person is biased, they make mistakes"
"When a person is based, they makes mistakes"
The second sounds absolutely horrible, but if singular "they" is really being used as a singular in the same sense as "he" or "she", it sounds like it ought to be correct.
↑ comment by Morendil · 2011-07-12T20:58:23.093Z · LW(p) · GW(p)
Have a gander at Language Log, where the "singular they" has been extensively discussed - mostly, apparently, because it's something of a litmus test to determine whether someone is a descriptivist or a prescriptivist grammarian; the LL crowd falls squarely in the descriptivist camp.
The short answer is that it's grammatically plural; it's a "plural of indeterminacy of number" primarily, and has taken on under social pressure an aspect of "plural of indeterminacy of gender". Number one is correct.
ETA: background info.
comment by Jonathan_Graehl · 2011-07-09T00:02:20.252Z · LW(p) · GW(p)
I agree that known biases can be explained by curves like those, plus the choice of a "frame". But how do we know we're not overfitting?
In other words: does prospect theory pay rent?
I'd want to at least see that we're identifying some real differences between people when we fit their curves from a bunch of measurements of their behavior - I'd expect their personally fit model to describe their (held-out from fitting) future actions better than one fit over the whole population, etc.
It seems like the additional degree of freedom "well, it depends on how they chose their frame in this instance" needs to be nailed down as part of testing the model's fit on future actions.
↑ comment by Scott Alexander (Yvain) · 2011-07-10T06:22:18.797Z · LW(p) · GW(p)
I am not entirely qualified to answer this objection, and I hope that one day someone who is more mathematical will make a post on the exact math involved.
Until then, I would say that the important part of prospect theory is not fitting numbers to the curves or determining the exact curve for each different person, but the discovery that the curves have the same basic shape in everyone. For example, that the slope of the losses curve is always greater than the slope of the gains curve; that the slope of both curves is steepest near zero but eventually levels out; that gains are always concave and losses are always convex. That subjective probability is steepest near zero, and also steep near one, but flatter in the middle. That decisions depend on frames, which can be changed and scaled depending on presentation.
I'm describing these visually because that's how I think; in the paper I linked to on top, Kahneman and Tversky describe the same information in the terms of mathematical equations which expected utility follows. None of these are intuitively predictable without having done the experiment, and all of them are pretty constant across different decisions.
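For concreteness: the functional forms Tversky and Kahneman fit in their 1992 cumulative version of the theory are a common starting point. A sketch using their median parameter estimates (the exact numbers vary across datasets and subjects):

```python
# The functional forms from Tversky & Kahneman (1992), with their median
# fitted parameters; estimates differ somewhat across studies and subjects.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """S-shaped value function: concave for gains, convex and steeper for losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** beta

def weight(p, gamma=0.61):
    """Inverse-S weighting (gamma=0.61 was fit for gains, 0.69 for losses):
    overweights small probabilities, underweights large ones."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

print(value(100), value(-100))      # ~57.5 vs ~-129.4: losses loom larger
print(weight(0.01), weight(0.99))   # ~0.055 vs ~0.91: the ends are distorted most
```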
I'm not sure what the status of research on applied prospect theory - figuring out the exact equations you can plug a frame and an amount of money into and predict the decision - is, but it must have had some success to win a Nobel Prize.
↑ comment by Jonathan_Graehl · 2011-07-10T20:10:03.872Z · LW(p) · GW(p)
We already knew that losses weigh roughly 2-3x (I forget which) as heavily as gains.
It's interesting but not surprising that people can re-orient losses and gains by framing.
It does make sense that the subjective value of monetary gains and losses should be more steeply sloped around 0, to the extent that emotional pain/reward needs to be strong enough in order to guide decisions even for small amounts of money (as in everyday transactions), but the dynamic range of the physical systems that register these feelings is limited. So we expect the magnitude of the slope to decrease as the quantities grow larger.
I wonder what happens to people who invest and manage to reframe their losses and gains as being percentage-of-total-wealth? We shouldn't accept that the only allowed frames are those that shift the origin.
It is interesting to point out that people act by weighting outcomes with a subjective probability that consistently differs from the actual information available to them. I'd like to understand the evidence for that better, but it's plausible - I can imagine it following from some fact about our brain architecture.
I'd be more impressed with the theory if it could really identify a characteristic of a person, even in just the domain of monetary loss/gain, such that it will predict future decisions even when that person is substantially poorer or richer than when the parameters were fit to them.
↑ comment by jimmy · 2011-07-10T18:17:53.232Z · LW(p) · GW(p)
Well, in two pictures it sums up loss aversion, scope insensitivity, overestimation of high probabilities, underestimation of low probabilities, and the framing effect. There's no information on there that corresponds to non-testable predictions, and the framing effect is a very real thing - you can often pick the frame for people.
It doesn't seem to simplify anything either, since the curves have to be justified by experiment instead of some simple theory, but it is a conveniently compact way of quantitatively representing what we know. How would you make quantitative statements about how loss aversion works without something equivalent to prospect theory?
↑ comment by Jonathan_Graehl · 2011-07-10T20:17:32.088Z · LW(p) · GW(p)
I agree that the left curve (subjective value of monetary loss/gain) shows loss aversion and maybe scope insensitivity (there's only so much pain/reinforcement our brain can physically represent, and most of that dynamic range is reserved for routine quantities, not extreme ones), at least for money.
I'm not sure how the right curve, which I presume is used to explain the (objectively wrong under expected utility maximization) decisions/preferences people actually take when given actual probabilities, shows over- or under- estimation of probabilities. If you asked them to estimate the probability, maybe they'd report accurately - I presumed that's what the x axis was. If I use another interpretation, the graph may show under-estimation of low probabilities, but ALSO shows under-estimation of high probabilities (not over-estimation). Could you explain your interpretation?
Otherwise, I agree. These curves take these shapes because they're fit to real data.
I'm curious if the curves derived for an objective value like money, are actually predictive for other types of values (which may be difficult to test, if the mapping from circumstance to value is as personally idiosyncratic as utility).
↑ comment by nazgulnarsil · 2011-07-10T05:47:18.877Z · LW(p) · GW(p)
strongly agree. this feels like post hoc descriptions along the lines of psycho-analysis.
comment by knysna · 2011-07-10T20:01:50.978Z · LW(p) · GW(p)
I have a paper in press at the Journal of Applied Psychology that used both hypothetical scenarios and real money in prospect theory experiments. We looked at whether people shifted their reference points post hoc-- after they had learned the outcomes of their decisions. Our results showed that people shifted their reference points to either maintain positive moods or repair negative moods.
If you are interested, you can see the paper here:
http://faculty.washington.edu/mdj3/Johnson,%20Ilies,%20&%20Boles%20%28in%20press%29.pdf
comment by Unnamed · 2011-07-11T03:29:49.857Z · LW(p) · GW(p)
It can be interesting to look at prospect theory curves that are based on experimental data. Here are the best fit curves for 10 subjects in one study, Gonzalez & Wu (1999), for the value function for gains (v) and the probability weighting function (w). Each subject in the study made 165 (hypothetical) decisions about gambles with various possible outcomes and probabilities, in the domain of gains only (no losses).
comment by Academian · 2011-07-16T15:25:37.163Z · LW(p) · GW(p)
This post misuses the term "utility". Expected utility theory does not treat utility as linear in money, as you suggest.
... trying to decide whether or not to buy a hurricane insurance policy costing $5000/year. Prospero owns assets worth $10,000, and estimates a 50%/year chance of a hurricane destroying his assets; to make things simple, he will be moving in one year and so need not consider the future. Under expected utility theory, he should feel neutral about the policy.
See http://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem, or perhaps also
http://lesswrong.com/lw/244/vnm_expected_utility_theory_uses_abuses_and/
The main descriptive difference between prospect theory and EU theory is that for monetary decisions, EU theory uses one curve (utility function), whereas prospect theory uses two curves (a value function and weight function) as well as a framing variable... it's about three times as suspect for overfitting, so I think I'll wait until it pays a little more rent :)
↑ comment by Unnamed · 2011-07-16T19:40:21.318Z · LW(p) · GW(p)
The other big difference is that the prospect theory value function is defined relative to a reference point (which typically represents the status quo) while the EU theory utility function is defined based on total wealth. So (as jimmy said) the nonlinearity of the prospect theory curve has a big effect on pretty much any decision (since any change from the current state is taking you through the curviest part of the curve), but the nonlinearity of EU theory curve is relatively minor unless the stakes are large relative to your total wealth. Under those conditions, EU theory (based on the utility of total wealth) is essentially equivalent to expected value.
Let's say that you have $30,000 in total wealth and you're given a choice of getting $10 for sure or getting $21 with p=.5. On the EU curve, the relationship between U($30,000), U($30,010), and U($30,021) should be nearly linear, so with any reasonable curve EU theory predicts that you prefer the 50% chance at $21 (indeed, you'd even prefer a 50% chance at $20.01 to $10 for sure as long as your curve is something like the square root function or even the natural log function). But on the prospect theory curve, V($0), V($10), and V($21) are very nonlinear, so even if we just treat probabilities as probabilities (rather than using the probability weighting function) prospect theory predicts that you'll prefer the certain $10 (at least, it will if the V(x) curve is the square root function, or x^.88 as is commonly used).
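A quick numeric check of this comparison, under exactly the stated assumptions - square-root utility over total wealth for EU theory, and v(x) = x^.88 over changes (probabilities left untransformed) for prospect theory:

```python
import math

# Checking the $10-for-sure vs. 50% * $21 comparison under the stated
# assumptions: sqrt utility over total wealth for EU theory, and
# v(x) = x^0.88 over changes (probabilities untransformed) for prospect theory.

w = 30_000
eu_sure   = math.sqrt(w + 10)
eu_gamble = 0.5 * math.sqrt(w + 21) + 0.5 * math.sqrt(w)
print(eu_gamble > eu_sure)          # True: EU theory takes the gamble

pt_sure   = 10 ** 0.88
pt_gamble = 0.5 * 21 ** 0.88
print(pt_sure > pt_gamble)          # True: prospect theory takes the sure $10
```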
When people are actually given choices like $10 for sure vs. $21 w. p=.5, they tend to choose $10 for sure just as prospect theory predicts (and EU theory does not). That's paying rent in anticipated experiences. Prospect theory was developed by asking people a bunch of questions like that one, seeing what they did, and fitting curves to the data so that predictions about hundreds of similar decisions could be made based on a model with only a few parameters. That research produced a lot of data which was inconsistent with expected value (which, for these types of gambles, implies that it was also inconsistent with EU theory based on utility-of-wealth) and so Kahneman & Tversky developed a relatively simple model that did fit the empirical data, prospect theory.
↑ comment by Academian · 2011-07-23T09:22:46.545Z · LW(p) · GW(p)
The main descriptive difference between prospect theory and EU theory is that for monetary decisions, EU theory uses one curve (utility function), whereas prospect theory uses two curves (a value function and weight function) as well as a framing variable
The other big difference is that the prospect theory value function is defined relative to a reference point
That's what Yvain and I are calling framing.
When people are actually given choices like $10 for sure vs. $21 w. p=.5, they tend to choose $10 for sure just as prospect theory predicts (and EU theory does not).
What you're calling EU theory is a very restricted version of EU theory, where you require utility to be a function of total monetary wealth, or total material wealth. You might call it "Expected Utility of Wealth" theory. EU theory is actually much more general, and assigns utility to outcomes rather than amounts of money or even lists of possessions. This is all discussed in
http://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem , and
http://lesswrong.com/lw/244/vnm_expected_utility_theory_uses_abuses_and/
But for predictive purposes, EU theory is so ridiculously general (there are so many situational parameters) that, as far as anyone knows, it has almost no predictive power. So for the purposes of prediction, I think you're justified in talking about "EUW" theory, because without a highly restrictive assumption like utility being a function of wealth, EU theory has no chance of making predictions.
Nonetheless, I want to encourage you, and anyone else, to make explicit the assumption "utility is a function of wealth" when you're making it. My reason is that, in toy decision-theory problems, EU theory is usually part of the framework, and it's a reasonable framework provided we don't impose the restrictions that make it predictively meaningful and false.
↑ comment by DanielLC · 2012-02-15T18:23:05.578Z · LW(p) · GW(p)
Utility is generally accepted to be differentiable in money, which means that it's approximately linear over amounts that are insignificant relative to your lifetime earnings. If you use a non-linear utility to explain risk aversion for a small amount of money, and extend this until you get large amounts of money, it results in absurdly huge utility falloff. I remember someone posted an article on this. I can't seem to find it at the moment.
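A small numeric illustration of the local-linearity point (in the literature this line of argument is associated with Rabin's calibration theorem), assuming square-root utility over total wealth and an arbitrary $30,000 wealth level:

```python
import math

# With a smooth utility like sqrt, the risk premium on a small 50/50 gamble
# is negligible - curvature alone can't explain risk aversion at small stakes
# without implying absurd aversion at large ones. Wealth level is arbitrary.

w = 30_000
for stake in (10, 100, 10_000):
    eu = 0.5 * math.sqrt(w + stake) + 0.5 * math.sqrt(w - stake)
    certainty_equivalent = eu ** 2 - w    # sure dollar change with equal utility
    print(stake, round(certainty_equivalent, 4))
# 10 -> ~-0.0008, 100 -> ~-0.08, 10000 -> ~-858: essentially linear until
# the stakes approach total wealth.
```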
↑ comment by roystgnr · 2012-02-15T19:15:59.722Z · LW(p) · GW(p)
Unless you have a good estimate of your future earnings and can borrow up to that at low interest rates, I think "amounts that are insignificant compared to your current liquidity" might be a slightly more rational metric. Note also that any explanation of human risk aversion (as opposed to rational risk aversion) is trying to explain behaviors that evolved during a time when "borrowing at low interest rates" wasn't really an option. If a failed risk means you starve to death next year, it doesn't matter how copious a quantity of food you otherwise would have acquired in subsequent years.
comment by Unnamed · 2011-07-11T02:48:28.879Z · LW(p) · GW(p)
It may not be an official bias, but the "but there's still a chance, right" outlook looks a lot like the sharply rising curve of the subjective probability function near zero.
Retracted: This is an official bias, known as the certainty effect.
EDIT (thanks, Vaniver): This is closely related to the certainty effect, which describes the sharp change in weighting near p=1 when an outcome switches from a sure thing to merely a likely possibility. The sharp change in weighting near p=0 is similar, as an outcome switches from an impossibility to merely an unlikely possibility, but I don't think it has a handy name.
↑ comment by Vaniver · 2011-07-11T04:48:16.073Z · LW(p) · GW(p)
That looks like something else, actually - that's the sharply falling weight near 1, as uncertain things aren't as valuable as certain things. Yvain is discussing when people model a tiny chance of winning as much larger - as vividly displayed by the lottery, for example.
comment by Unnamed · 2011-07-11T02:40:24.997Z · LW(p) · GW(p)
I recommend Kahneman & Tversky's 1984 paper Choices, Values, and Frames (republished as chp 1 in their book of the same name) as a more readable (and shorter) introduction to prospect theory than their 1979 paper which Yvain has linked. It contains several examples demonstrating consequences of the shape of the functions and exploring the possibilities for framing effects.
comment by DITTO · 2011-07-10T14:01:44.104Z · LW(p) · GW(p)
I'm learning about utility theory just now, but I hadn't heard about prospect theory before. Thanks for posting it.
I know the main point of the post was to introduce prospect theory, but I wanted to add a comment about standard utility theory. In the text you write that standard utility theory predicts Prospero should be indifferent between a certain $5,000 and a 50-50 chance of either $0 or $10,000. This isn't quite right: maximising expected utility isn't the same as maximising expected wealth.
In standard utility theory you have a utility function U(W), so Prospero has the choice between U(5,000) and a 50-50 chance of U(0) or U(10,000). The expected utility need not be the same for both cases. In fact, most investors are assumed to have a utility function such that each additional dollar adds less utility than the previous one (diminishing marginal utility of wealth). E.g. $10 adds less utility to a millionaire than it would to the same person if he were broke and homeless. An investor with diminishing marginal utility of wealth would always take the insurance since, taking the certain $5,000 as the base case, the 50% chance of losing that $5,000 would cost more utility than the 50% chance of gaining an extra $5,000 would add.
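A quick check of that claim, taking u = sqrt as a stand-in for any concave utility function:

```python
import math

# Quick check with u = sqrt standing in for any concave utility function:
insure = math.sqrt(5_000)                            # certain $5,000
gamble = 0.5 * math.sqrt(0) + 0.5 * math.sqrt(10_000)
print(insure, gamble)   # ~70.7 vs 50.0: the concave agent takes the insurance
```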
In this case, what is the difference between standard theory and prospect theory? Taking the first graph, you could regard this as a plot of a standard utility function with wealth on the x axis and utility on the y axis. The differences seem to be:
- in the second plot, it is shown that a prospect theory agent seems to behave as if small probabilities are larger than they actually are, and as if large probabilities are smaller than they actually are;
- the fact that Prospero's utility function is different depending on how the question is framed;
- the shape of the utility function has the form shown in the first graph, wheras in standard utility theory it can take a wider variety of possible shapes.
↑ comment by jimmy · 2011-07-11T18:36:59.775Z · LW(p) · GW(p)
One difference is that in standard utility theory, while utility doesn't have to be linear in money, if you 'zoom in' enough it is very close.
In prospect theory the shape doesn't change. It generally makes sense to be risk averse when you're risking amounts near your total wealth, but prospect theory says that you'll be risk averse at the $1 level too.
comment by lukeprog · 2011-07-10T01:15:16.265Z · LW(p) · GW(p)
An excellent introduction, and I love how you've tied it in with LW discussion on cognitive biases.
Also check out temporal motivation theory (2006), which tries to integrate (cumulative) prospect theory with other theories of human behavior.
↑ comment by Will_Newsome · 2011-07-10T10:16:11.483Z · LW(p) · GW(p)
Construal level theory is on that fringe, for example. Or as it's more commonly known, Near/Far. Unfortunately I didn't find anything in that area to be particularly compelling, but it's probably fertile ground for using Bayes to go where science can't. I vaguely remember using those tools to cast an interesting light on some aspects of moral psychology, even if the papers themselves were meh. That said I could easily have missed the best papers or best insights.
comment by AlexMennen · 2011-07-12T02:58:54.409Z · LW(p) · GW(p)
One thing I'm a bit confused about: How would weighted probabilities work when there are more than two possible outcomes? "sum Probability(x) = 1" does not imply "sum Weighted Probability(x) = 1", and furthermore, you can get a different weighted probability distribution by grouping similar outcomes and applying the weighting stepwise, first to groups of similar outcomes, and then to specific outcomes within the groups.
↑ comment by k3nt · 2011-07-12T05:09:38.275Z · LW(p) · GW(p)
I think there's probably an interesting point in there but I can't quite parse the text. Can you give an example?
↑ comment by AlexMennen · 2011-07-12T18:55:46.906Z · LW(p) · GW(p)
Suppose there is a 90% chance of maintaining what the prospect theory agent perceives as the status quo, which means a 10% probability of something different happening, which looks like it might correspond to a weighted probability of around 25% according to the graph. But now suppose that there are 10 equally likely (1%) possible outcomes other than status quo. Each of the 10 possibilities considered in isolation will have a weighted probability of 10% according to the graph, even though the weighted probability of anything other than the status quo happening is only 25%
↑ comment by Unnamed · 2011-07-13T03:14:05.903Z · LW(p) · GW(p)
You're getting into advanced questions; prospect theory was initially formulated to only deal with gambles with 2 (or fewer) possible outcomes so that it didn't have to deal with this sort of stuff. Eventually Tversky & Kahneman (1992) came out with a more complicated version of the theory, Cumulative Prospect Theory, which addressed this problem by being rank-dependent. Looking at the graph of w(p), basically what you do is rank the outcomes in order of their value, line them up along the probability axis in order giving each one a width equal to its probability, and weight each one by the change in w(p) over its width. So if the 10 outcomes each with probability .01 are all losses, then the largest loss gets the weight w(.01), the next-largest loss gets the weight w(.02)-w(.01), the next gets the weight w(.03)-w(.02), ... and the last one gets w(.10)-w(.09). So the total weight given to the 10 outcomes is still only w(.10), just as it would be if they were all combined into one outcome.
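A minimal sketch of that rank-dependent scheme for a set of losses (gains are handled symmetrically from the other end of the ranking), using the Tversky & Kahneman (1992) weighting form as an example:

```python
# Sketch of rank-dependent (cumulative) weighting for losses only; gains
# are weighted symmetrically from the other end of the ranking.

def w(p, gamma=0.69):   # Tversky & Kahneman (1992) form, gamma as fit for losses
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def decision_weights(probs):
    """probs: probabilities of losses, ordered from worst to mildest.
    Returns weights w(p1+...+pk) - w(p1+...+p(k-1)) for each outcome."""
    weights, cum = [], 0.0
    for p in probs:
        weights.append(w(cum + p) - w(cum))
        cum += p
    return weights

dw = decision_weights([0.01] * 10)     # ten equally likely losses
print(sum(dw), w(0.10))                # equal (up to rounding): just w(0.10)
```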
For more of the nitty gritty (like separating gains & losses), you can see the Tversky & Kahneman (1992) paper, or I found the explanation in this Fennema & Wakker (1997) paper easier to understand.
Tversky, A. & Kahneman, D. (1992). Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and Uncertainty 5: 297–323.
comment by AlexMennen · 2011-07-12T02:51:03.095Z · LW(p) · GW(p)
Prospero's change from rejecting the insurance when framed as gains, to buying it when framed as losses, directly mirrors the change in preferred survival strategies mentioned in Circular Altruism.
Did you mean:
Prospero's change from rejecting the insurance when framed as losses, to buying it when framed as gains, is directly opposite to the change in preferred survival strategies mentioned in Circular Altruism. Apparently we use radically different decision theories when thinking about our own well-being than we do when thinking about the well-being of abstract others.
comment by sixes_and_sevens · 2011-07-09T13:01:20.953Z · LW(p) · GW(p)
I'd upvote this twice if I could.
The weighted probability curve reminded me of some other research I first heard of a couple of years ago, to do with human choices being made by comparing them to their neighbouring choices, rather than on an absolute scale of utility. The result of this being that people find it hard to appraise things on more than five levels of gradation ("worse", "this", "better", and intervals between them). This provides a plausible explanation for why we rank so many things out of five.
I looked for the research in question, and found Decision by Sampling. Having now had a look at the actual paper, it actually references prospect theory twice. I really should follow these things up more.
↑ comment by Will_Sawin · 2011-07-09T21:05:57.628Z · LW(p) · GW(p)
I've seen some psych research using 7 options - does anyone know if there's a reason for that? Do they know what they're doing more than the people who rank things using 5?
comment by Kaj_Sotala · 2011-07-09T10:21:03.782Z · LW(p) · GW(p)
Fascinating. I'm amazed that nobody has brought this up here before - this is something I should have read about years ago.
comment by Roger Dearnaley · 2023-05-14T20:47:03.989Z · LW(p) · GW(p)
The observation that gains saturate has a fairly simple explanation from evolutionary theory: the increased evolutionary fitness advantage from large material gains saturates (especially in a hunter-gatherer environment). Successfully hunting a rabbit will keep me from starving for a day; but if I successfully hunt a mammoth, I can't keep the meat for long enough for it to feed me for years. The best I can do is feed everyone in the village for a few days, hoping they remember this later when my hunting is less successful, and do a bunch of extra work to make some jerky with the rest. The evolutionary advantage is sub-linear in the kilograms of raw meat. In more recent agricultural societies, rich and powerful men like Ramses II who had O(100) children needed a lot more than 50 times the average resources of men in their society to achieve that outcome (and of course that evolutionary strategy isn't possible for women). Even today, if I'm unlucky enough to get pancreatic cancer it doesn't matter how rich I am: all that money isn't going to save me, even if I'm as rich as Steve Jobs.
Similarly, on the downside, from a personal evolutionary fitness point of view, saturation also makes sense, since there is a limit to how bad things can get: once I, my family, and everyone else in the tribe related to me are all dead, it's game over, and my personal evolutionary fitness doesn't really care whether everyone else in the region also died, or not.
So it seems to me that at least the first diagram above of prospect theory may be an example of humans being aligned with evolution's utility function.
I don't have a good evolutionary explanation for the second diagram, unless it's a mechanism to compensate for some psychological or statistical bias in how hunter-gatherers obtain information about and estimate risks, and/or how that compares to modern mathematical risk measures like probabilities and percentages.
comment by Khaled · 2011-07-09T18:34:29.030Z · LW(p) · GW(p)
I think the distinction between decisions (as an end result) and other brain processes can be useful in fields like behavioral economics in the short term, as it reaches results quite fast. But the complexity of decisions makes me think of the unifications in physics. Perhaps if all decisions (not only the final output) are treated as the same phenomenon, aspects like framing can be understood as altering sub-decisions via their constant value functions, leading to a different decision later in time (which just happens to be the output decision). The idea is that understanding the building blocks of decisions (on a level smaller than final outputs and bigger than single neuron firings) might provide a better model for decision making.