The Affect Heuristic

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-27T07:58:44.000Z · LW · GW · Legacy · 70 comments

The affect heuristic is when subjective impressions of goodness/badness act as a heuristic—a source of fast, perceptual judgments. Pleasant and unpleasant feelings are central to human reasoning, and the affect heuristic comes with lovely biases—some of my favorites.

Let’s start with one of the relatively less crazy biases. You’re about to move to a new city, and you have to ship an antique grandfather clock. In the first case, the grandfather clock was a gift from your grandparents on your fifth birthday. In the second case, the clock was a gift from a remote relative and you have no special feelings for it. How much would you pay for an insurance policy that paid out $100 if the clock were lost in shipping? According to Hsee and Kunreuther, subjects stated willingness to pay more than twice as much in the first condition.1 This may sound rational—why not pay more to protect the more valuable object?—until you realize that the insurance doesn’t protect the clock, it just pays if the clock is lost, and pays exactly the same amount for either clock. (And yes, it was stated that the insurance was with an outside company, so it gives no special motive to the movers.)

All right, but that doesn’t sound too insane. Maybe you could get away with claiming the subjects were insuring affective outcomes, not financial outcomes—purchase of consolation.

Then how about this? Yamagishi showed that subjects judged a disease as more dangerous when it was described as killing 1,286 people out of every 10,000, versus a disease that was 24.14% likely to be fatal.2 Apparently the mental image of a thousand dead bodies is much more alarming, compared to a single person who’s more likely to survive than not.
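
(As a quick back-of-the-envelope check, not part of Yamagishi's materials: the frequency framing above actually describes the less lethal disease.)

```python
# Compare the two framings of lethality from Yamagishi's experiment.
deaths_per_10_000 = 1286
frequency_framing = deaths_per_10_000 / 10_000   # 0.1286, i.e. 12.86%
percentage_framing = 0.2414                      # 24.14%

print(f"{frequency_framing:.2%} vs {percentage_framing:.2%}")
print("Frequency-framed disease is deadlier?", frequency_framing > percentage_framing)  # False
```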

But wait, it gets worse.

Suppose an airport must decide whether to spend money to purchase some new equipment, while critics argue that the money should be spent on other aspects of airport safety. Slovic et al. presented two groups of subjects with the arguments for and against purchasing the equipment, with a response scale ranging from 0 (would not support at all) to 20 (very strong support).3 One group saw the measure described as saving 150 lives. The other group saw the measure described as saving 98% of 150 lives. The hypothesis motivating the experiment was that saving 150 lives sounds vaguely good—is that a lot? a little?—while saving 98% of something is clearly very good because 98% is so close to the upper bound of the percentage scale. Lo and behold, saving 150 lives had mean support of 10.4, while saving 98% of 150 lives had mean support of 13.6.
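
(For concreteness, a small sketch of the arithmetic; the script is mine, not Slovic's, and simply restates the figures above.)

```python
# The option framed as "98% of 150 lives" saves fewer lives, yet drew more support.
lives_full = 150
lives_partial = 0.98 * 150          # 147.0

mean_support_full = 10.4            # reported mean support for "saves 150 lives"
mean_support_partial = 13.6         # reported mean support for "saves 98% of 150 lives"

print(f"saves {lives_full} lives        -> mean support {mean_support_full}")
print(f"saves {lives_partial:.0f} lives (98% of 150) -> mean support {mean_support_partial}")
```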

Or consider the report of Denes-Raj and Epstein: subjects who were offered an opportunity to win $1 each time they randomly drew a red jelly bean from a bowl often preferred to draw from a bowl with more red beans and a smaller proportion of red beans.4 E.g., 7 in 100 was preferred to 1 in 10.

According to Denes-Raj and Epstein, these subjects reported afterward that even though they knew the probabilities were against them, they felt they had a better chance when there were more red beans. This may sound crazy to you, oh Statistically Sophisticated Reader, but if you think more carefully you’ll realize that it makes perfect sense. A 7% probability versus 10% probability may be bad news, but it’s more than made up for by the increased number of red beans. It’s a worse probability, yes, but you’re still more likely to win, you see. You should meditate upon this thought until you attain enlightenment as to how the rest of the planet thinks about probability.
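
(A minimal sketch of the single-draw arithmetic, assuming one draw per trial and a $1 payout for a red bean; the simulation is mine, not part of the study.)

```python
import random

def expected_winnings(red, total, payout=1.0):
    """Expected payout of one random draw: P(red) * payout."""
    return (red / total) * payout

print(expected_winnings(1, 10))    # 0.10 -- the 1-in-10 bowl
print(expected_winnings(7, 100))   # 0.07 -- the "more red beans" bowl

def simulate(red, total, trials=100_000):
    """Monte Carlo estimate of the chance of drawing a red bean in one draw."""
    return sum(random.randrange(total) < red for _ in range(trials)) / trials

print(round(simulate(1, 10), 3), round(simulate(7, 100), 3))   # ~0.10 vs ~0.07
```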

As I discussed in “The Scales of Justice, the Notebook of Rationality,” Finucane et al. found that for nuclear reactors, natural gas, and food preservatives, presenting information about high benefits made people perceive lower risks; presenting information about higher risks made people perceive lower benefits; and so on across the quadrants.5 People conflate their judgments about particular good/bad aspects of something into an overall good or bad feeling about that thing.

Finucane et al. also found that time pressure greatly increased the inverse relationship between perceived risk and perceived benefit, consistent with the general finding that time pressure, poor information, or distraction all increase the dominance of perceptual heuristics over analytic deliberation.

Ganzach found the same effect in the realm of finance.6 According to ordinary economic theory, return and risk should correlate positively—or to put it another way, people pay a premium price for safe investments, which lowers the return; stocks deliver higher returns than bonds, but have correspondingly greater risk. When judging familiar stocks, analysts’ judgments of risks and returns were positively correlated, as conventionally predicted. But when judging unfamiliar stocks, analysts tended to judge the stocks as if they were generally good or generally bad—low risk and high returns, or high risk and low returns.

For further reading I recommend Slovic’s fine summary article, “Rational Actors or Rational Fools: Implications of the Affect Heuristic for Behavioral Economics.”

1Christopher K. Hsee and Howard C. Kunreuther, “The Affection Effect in Insurance Decisions,” Journal of Risk and Uncertainty 20, no. 2 (2000): 141–159.

2Kimihiko Yamagishi, “When a 12.86% Mortality Is More Dangerous than 24.14%: Implications for Risk Communication,” Applied Cognitive Psychology 11, no. 6 (1997): 461–554.

3Paul Slovic et al., “Rational Actors or Rational Fools: Implications of the Affect Heuristic for Behavioral Economics,” Journal of Socio-Economics 31, no. 4 (2002): 329–342.

4Veronika Denes-Raj and Seymour Epstein, “Conflict between Intuitive and Rational Processing: When People Behave against Their Better Judgment,” Journal of Personality and Social Psychology 66, no. 5 (1994): 819–829.

5Finucane et al., “The Affect Heuristic in Judgments of Risks and Benefits.”

6Yoav Ganzach, “Judging Risk and Return of Financial Assets,” Organizational Behavior and Human Decision Processes 83, no. 2 (2000): 353–370.

70 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Topo · 2007-11-27T12:11:41.000Z · LW(p) · GW(p)

No, honestly, this only shows that most people are almost unbelievably stupid and uneducated in even the most basic logic. There is nothing else to say, except call for radical changes in our educational system, along the lines of what Steven Pinker suggested not long ago.

Replies from: army1987
comment by A1987dM (army1987) · 2012-04-19T17:22:41.994Z · LW(p) · GW(p)

Why the hell was this downvoted that much?

Replies from: None, son0fhobs
comment by [deleted] · 2012-04-19T17:31:57.175Z · LW(p) · GW(p)

It's misanthropic and false?

Replies from: TimS
comment by TimS · 2012-04-19T18:59:12.243Z · LW(p) · GW(p)

I don't understand this assertion. The comment in question asserts that massive innumeracy explains the experimental results from the post. I don't believe that completely explains the data, but it seems like a plausible explanation, so why do you call it "misanthropic"?

Replies from: None
comment by [deleted] · 2012-04-19T19:36:53.304Z · LW(p) · GW(p)

The comment in question asserts that massive innumeracy explains the experimental results from the post.

Because it doesn't seem to me like an explanation at all. What is 'stupid' supposed to mean? That's a thing my eight year old says to people when he wants to be mean. And stupid compared to what? There's no mention in the article about variation in responses according to intelligence. Does it show that people are uneducated? I dunno, the article doesn't mention anything about variation in responses according to education.

And these studies are conducted by giving people tricky hypotheticals. If people do badly, can we infer from this that they are stupid, or even that they make similar mistakes when they reason through problems that they care about? Maybe. The article doesn't say. Heck, none of us even know how we'd do in these studies: it's very easy to call the people participating stupid when we're sitting here looking at the answer sheet.

'False' may have been the wrong word, because as an explanation it doesn't rise to the level of being true or false. 'Misanthropic' because that's what it does manage to express: a hatred of human beings.

Replies from: TimS
comment by TimS · 2012-04-19T19:41:03.636Z · LW(p) · GW(p)

Because it doesn't seem to me like an explanation at all.

Doing badly on written word problems can be explained by illiteracy. Why can't doing badly on math problems be explained by innumeracy (i.e. failure to comprehend mathematical concepts)?

When regression toward the mean is considered a moderate to advanced mathematical concept, I don't think innumeracy is an unreasonable label for the average person. Certainly it is a reasonable critique of the education system that it does not reliably generate numerate graduates (parallel to the critique that some specific education systems do not reliably generate graduates who can read).

Replies from: None, Pastafarianist
comment by [deleted] · 2012-04-19T19:45:54.166Z · LW(p) · GW(p)

When regression toward the mean is considered a moderate to advanced mathematical concept, I don't think innumeracy is an unreasonable label of the average person.

I don't think 'illiteracy' or 'innumeracy' is much of an explanation in either case, since in both cases the content of those terms is or is close to 'does badly in these studies'. (ETA: I mean that we'd need further evidence to show that failure to do well on these tests is a failure to comprehend mathematical concepts.) But I don't have any reason to argue with what you've said. I was responding to Topo (ETA: ugh, 5 years after the fact. Nevermind.)

comment by Pastafarianist · 2014-07-23T16:54:54.546Z · LW(p) · GW(p)

Doing badly on written word problems can be explained by failure to comprehend linguistic concepts. Doing badly on math problems can be explained by failure to comprehend mathematical concepts.

You see, this explanation makes perfect sense.

comment by son0fhobs · 2013-04-12T04:09:59.367Z · LW(p) · GW(p)

I have so many issues with this article. Well, sections anyway. Please do correct me if I'm missing things, but maybe these are a few issues others had, thus why the comment was down voted.

(I apologize for my lack of eloquence and grammar, it's way too late and I need to head for bed).

  1. The insurance on the clock illustration doesn't take into account emotion. Yes, emotion has value (unless people are indifferent to depression v. motivated v. pumped).

A. One facet of that: If I lost a clock that had little value, my need for solace or repayment would be little, as I lost little. If I valued the clock a lot, I lost something of greater value, thus I'd have a greater desire to have some sort of recompense.

B. Insurance provides peace of mind. The more value the clock has to me, the more value that peace of mind is.

  2. The disease issue. It's ignoring other variables playing into the issues - namely how widespread it is. 24.14% doesn't say how widespread the disease is. Anthrax has a death rate of 20% with antibiotics (1% without). Yet so few encounter it. Given diseases with that death rate are usually pretty well contained, it's probably assumed that it's not widespread. Yet when considering the disease has killed thousands already, it's clearly widespread and with greater impact.

  3. The play on visualizing numbers and percents is so true. Important point that's been reiterated in countless ways in countless scenarios. I concur.

  4. Beans - making a good point, but it didn't clarify the number of times the person could draw a bean. If unlimited, they'd eventually get all the beans, thus percent wouldn't matter. However, I'm pretty sure that's not true, but still, giving a more complete picture would be nice. (All the links were broken! I couldn't even double check it myself!)

... I don't have time to go through the rest, but most of my beef was with the first few.

I find one of the most common issues I have is that emotions, because they often can lead to illogical answers, get devalued altogether. They lead to improper conclusions only because people suck at understanding, reading, and interpreting them. Logic leads to just as bad decisions when mishandled. When you study emotions as much as you do cognitive logic, emotions can aid in logical reasoning and provide insights that the cognitive mind will miss.

comment by Topo · 2007-11-27T12:22:22.000Z · LW(p) · GW(p)

A 7% probability versus 10% probability may be bad news, but it's more than made up for by the increased number of red beans. It's a worse probability, yes, but you're still more likely to win, you see.

I don't understand. Do you mean you are more likely to win with 7 red beans rather than one but also proportionately more likely to lose with 93 non red beans rather than 9? You wink and suggest there is some great wisdom there. I simply don't even know what the hell you are talking about.

Replies from: wmcleod, CG_Morton
comment by wmcleod · 2009-12-01T18:20:10.489Z · LW(p) · GW(p)

No, he's being sarcastic, kid...

comment by CG_Morton · 2011-08-15T23:30:36.651Z · LW(p) · GW(p)

In the 1 red/10 beans scenario, you can only win once, no matter how hard you try. With 7 red/100 beans, you simply play the game 100 times, draw 7 red beans, and end up with 7x more money.

Unless the beans are replaced, in which case yeah, what the hell were they thinking?

Replies from: jubydoo
comment by jubydoo · 2011-09-09T15:17:42.325Z · LW(p) · GW(p)

I think the idea of the game was you get one chance to pick a bean. After all, if you can just keep picking beans until you've picked all the reds, there's not really much point to the so-called game anymore, is there?

Replies from: tlhonmey
comment by tlhonmey · 2021-01-12T21:14:45.448Z · LW(p) · GW(p)

Even if you can't keep picking until you've gotten all the reds, there may be some number of draws where the probability of drawing more than one from the jar with more reds exceeds the loss of probability from them being a smaller portion of the total.

But it depends on exactly what the rules are.
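
(One way to make this concrete, assuming the rules are some number of draws without replacement from a single bowl, which is an assumption the study doesn't state, is to compare the expected number of red beans; a rough sketch:)

```python
def expected_reds(draws, red, total):
    """Expected red beans from `draws` draws without replacement (capped at the bowl size)."""
    return min(draws, total) * red / total   # linearity of expectation

for k in (1, 5, 10, 15, 20):
    print(k, expected_reds(k, 1, 10), expected_reds(k, 7, 100))
# Per draw the 1-in-10 bowl is better, but it holds only one red bean;
# from about 15 draws on, the 7-in-100 bowl yields more reds in expectation.
```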

In terms of our reflexes...  The lower levels of consciousness often aren't particularly good at math, so they probably just use a rough count.

comment by Tom3 · 2007-11-27T12:31:36.000Z · LW(p) · GW(p)

Topo, it's a simple unprobabilistic phase inversion topography manifold calculation, I can hardly see how you could fail to understand it.

Replies from: None, DJRRYAN
comment by [deleted] · 2012-04-18T23:48:49.323Z · LW(p) · GW(p)

?

Replies from: deadheadtb
comment by deadheadtb · 2012-11-20T05:39:12.481Z · LW(p) · GW(p)

He's replying to the person above him... Probably didn't realize he could directly reply to it with the REPLY button

Replies from: Nornagest
comment by Nornagest · 2012-11-20T05:55:13.966Z · LW(p) · GW(p)

A lot of the older comments are ported from Overcoming Bias, which doesn't have LW's thread structure. So if you see posts from 2008 or earlier that look poorly threaded, that's probably what caused it.

comment by DJRRYAN · 2013-08-04T03:37:07.342Z · LW(p) · GW(p)

If a flux capacitor with the proper dynamic range is used the phase inversion topography can be expanded. Fairly obvious but I just thought I'd add to the interesting dialogue.

comment by Chris · 2007-11-27T13:02:14.000Z · LW(p) · GW(p)

Ha, Spock vs McCoy. I think Kirk's position was that it's the affect heuristic that makes us warm, cuddly, and human data processors, even if it can be faulted in some artificial situations. This ties in with the other thread about how far we look down possible chains of results in deciding on an action. We're wired to look to proximal results with high affect, and I'm all for it.

comment by Eric_Blincow · 2007-11-27T13:29:03.000Z · LW(p) · GW(p)

The three parts of that paper that I found most interesting were:

  1. Concentrated affect beats diffuse affect. Everybody knows what "obnoxious" means but "intelligent" could mean a lot of different things, therefore obnoxious wins and carries a higher weight in the averaging of the descriptions. "More precise affective impressions reflect more precise meanings and carry more weight in impression formation, judgment, and decision making."

  2. The fact that more people chose to accept a gamble when a small loss was involved, because the small size of the loss (5 cents) qualified the size of the gain (9 dollars). "In commenting on the fact that the carriers of value are changes in wealth or welfare, rather than final states, Kahneman and Tversky observe that “Our perceptual apparatus is attuned to the evaluation of changes or differences rather than to the evaluation of absolute magnitudes” (p. 277)."

  3. The conclusion of the Damasio 1990 paper which showed that disruption in brain centers linked to affective states disrupted personality to the point of making people sociopathic. From that paper: "An investigation of this theory in patients with frontal damage reveals that their autonomic responses to socially meaningful stimuli are indeed abnormal, suggesting that such stimuli fail to activate somatic states at the most basic level."

I don't understand the meaning of "somatic" in this context, can anyone help me out?

My understanding of the Damasio paper's implication is that affect is central to being able to function socially.

That's a whole lot of insights crammed into 40 pages!

comment by Chris · 2007-11-27T13:35:05.000Z · LW(p) · GW(p)

BTW, significant data was withheld in the examples given: a) how many dips do you get at the jellybeans? Do the red ones taste better? What is their market value with the current weak dollar? b) 10,000 people overall or 10,000 infected people? Degree of infectiousness of the disease? But that's what the affect heuristic is for: taking decisions in situations of incomplete data. 150 people is a single bounded set, 98% of x people sounds as though it just might be a replicable set. Go for it.

comment by Different_Jeff2 · 2007-11-27T13:40:05.000Z · LW(p) · GW(p)

One of the things I found interesting in Eliezer's chapter on biases from his site was the repeated cautions about always being aware that these biases can affect us as well, even when we're aware of them. I certainly wouldn't trust the judgement of someone who chalks them up to the belief "most people are almost unbelievably stupid."

That chapter was a great read, btw.

comment by Caledonian2 · 2007-11-27T14:10:19.000Z · LW(p) · GW(p)

All people are unbelievably stupid most of the time. Some people just manage to stop now and then.

comment by Roko · 2007-11-27T14:24:17.000Z · LW(p) · GW(p)

"It's a worse probability, yes, but you're still more likely to win, you see. You should meditate upon this thought until you attain enlightenment as to how the rest of the planet thinks about probability."

rest of planet = retards

comment by Nominull3 · 2007-11-27T15:17:01.000Z · LW(p) · GW(p)

The first terrifying shock comes when you realize that the rest of the world is just so incredibly stupid.

The second terrifying shock comes when you realize that they're not the only ones.

Replies from: thomblake
comment by thomblake · 2012-04-19T00:19:20.810Z · LW(p) · GW(p)

This one should be on a list of quotes from Less Wrong comments.

comment by Doug_S. · 2007-11-27T16:08:38.000Z · LW(p) · GW(p)

Or consider the report of Denes-Raj and Epstein (1994): Subjects offered an opportunity to win $1 each time they randomly drew a red jelly bean from a bowl, often preferred to draw from a bowl with more red beans and a smaller proportion of red beans. E.g., 7 in 100 was preferred to 1 in 10.

How many times do I get to draw, and is it with or without replacement? If I get to draw every bean in the bowl, the number of non-red beans doesn't matter. ;)

"I proudly include myself in the idiot category... no matter how smart you are, you spend much of your day being an idiot." - Scott Adams, wise man

comment by Drake · 2007-11-27T16:38:56.000Z · LW(p) · GW(p)

"[I]t's a simple unprobabilistic phase inversion topography manifold calculation..."

Tosh. This ignores the salience of the linear data elicitation projected over dichotomous variables with a fully specified joint distribution.

comment by Eric_Blincow2 · 2007-11-27T16:53:13.000Z · LW(p) · GW(p)

So now five people have made the same comment, all with the same length (1 to 3 sentences), all with a relatively similar, bland style of expression. Caledonian incidentally also made the same comment. Hmmm...

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-27T17:09:27.000Z · LW(p) · GW(p)

I wasn't trying to say the rest of the planet is stupid. I'm saying that "probability" is a more difficult concept than it seems. E.g. Mr. Spock predicts a 98% chance of the Enterprise being destroyed, and he does this twenty times and it never happens once. That's the scriptwriter's concept of what the word "probability" means, and it's very closely related to the jellybean problem.

comment by Nominull3 · 2007-11-27T17:15:06.000Z · LW(p) · GW(p)

Probability is a "more difficult concept than it seems", you say, but in what sense is it difficult? It does not require a vast and complex formalism to avoid the sort of error we see in the jellybean problem, so clearly it is not an inherently difficult error to avoid. If it is a "difficult concept", then, it's difficult because our brains are fundamentally not wired to deal with it appropriately, which is a failure of the brain, or colloquially a "stupidity".

comment by Doug_S. · 2007-11-27T17:41:50.000Z · LW(p) · GW(p)

See also: Straw Vulcan, MillionToOneChance

Spock is half right; the reason the Enterprise isn't destroyed is the MillionToOneChance effect that, in fiction, makes what would otherwise be objectively improbable outcomes more likely because they make for a better story. Spock's just not smart enough to realize that the reason that the Enterprise never does get destroyed is that he's a character in a TV show. ;)

On the other hand, maybe he's just afraid of the consequences of breaking the fourth wall...

comment by mobile · 2007-11-27T17:58:06.000Z · LW(p) · GW(p)

In fairness to analysts, if you are judging stocks that nobody is familiar with, or even worse, that nobody except for people who are complete morons is familiar with, then the risk-return relationship will break down. In general, judging whether an investment is fairly priced depends on your confidence in the judgement of the informed traders (which may include you, if the investment is familiar). The ordinary economic theory you cite does not apply when the market may become inefficient.

comment by Chris · 2007-11-27T18:55:38.000Z · LW(p) · GW(p)

Statistics is actually fun, as the notion of probability is so non-intuitive. There's a 1 in 6 chance of throwing a deuce. What does that mean in the real world? Well, if I throw the die 6 times, it should come up once? Er, no... Well, if I throw 100 sequences of 6 throws I can predict the number of times the deuce will show up? Er, no... Well, if I throw 1000 runs of 100 sequences of 6 throws... sorry, you still don't know one damn thing about what the result will be. So what does probability mean? It's great! One of life's rich prizes is to watch someone making a prediction on a particular instance based on statistical reasoning.

comment by Cyan2 · 2007-11-27T19:20:37.000Z · LW(p) · GW(p)

I ran across a curious misunderstanding of probability in the SF novel Diamond Mask. In the murder mystery plotline of the book, the protagonist had collected and analyzed data on an (implicitly mutually exclusive and exhaustive) list of eight or nine suspects. The author used probabilities of lower than 20% as a shorthand for not too likely, probabilities of between 20% and 50% as moderately likely, and probabilities above 50% as indicating prime suspects. Unfortunately, there was ~300% total probability in the list. The author could have gotten away with it if she'd just used the word "likelihood" instead of "probability".

comment by Elliot · 2007-11-27T21:04:20.000Z · LW(p) · GW(p)

I don't think these people are quite as silly as is made out. Let's look at the mortality rate example. When you give a mortality rate instead of casualty figures, you haven't necessarily communicated what that means for a community, or what it means on a large scale. That information is implied, but you haven't handed it to people on a silver platter. A wise person would create that knowledge himself -- he'd realize that if 20% die, and 5k people are infected, that's 1k dead. He'd think of lots of things like that. He'd figure out what it means in a variety of contexts. And he wouldn't pass judgment until he really understood the situation.

What is alleged about people seems to be that they have very bad judgment, or they are irrational. But if my analysis is correct, that need not be the case. We can explain the data simply in terms of widespread ignorance of how to draw consequences out of percentage figures, ignorance of how to create understanding of the implications of a technical fact.

If that's the case, we could approach the problem by thinking about how to communicate more useful information to people, and also how to educate people on how to think well. That is a hopeful and approachable conclusion.

comment by Zubon · 2007-11-27T21:42:41.000Z · LW(p) · GW(p)

Elliot, I suspect something is missing from your comments. The technocratic knowledge you are describing is multiplication. It sounds like you are calling for greater education in basic arithmetic, or perhaps telling people "and use it." Knowing that 20% of 5,000 is 1,000 is not the mark of an exceptionally wise person; it is the mark of a competent elementary school student. There is perhaps a reason why we can support a game show called "Are You Smarter Than a 5th Grader?"

I do not have immediate access to the Yamagishi article. Were people actually presented with 1,286/10,000 versus 24.14%, or just asked about one (and people tended to react more strongly to absolute numbers than percentages)? The former is really bad. I suppose there is a story to be told about thinking that maybe few people get the 24.14% disease, or that the 98% of 150 measure is applied repeatedly while the 150 measure works just once, or you get many draws without replacement from the bean bowls. I don't know that those are plausible stories.

Don't we expect people to react differently to the same numbers in different contexts? Eliezer has already hit Anchoring and Adjustment. Is it a similar bias, innumeracy, or something else that causes people to react differently to "17,520 times per year" versus "twice an hour"?
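
(For the record, those two framings describe the same rate, assuming a 365-day year; a one-line check:)

```python
# "twice an hour", expressed as a yearly figure
print(2 * 24 * 365)   # 17520
```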

comment by Elliot · 2007-11-27T22:08:01.000Z · LW(p) · GW(p)

The issue is not multiplication.

Suppose we "put things in perspective" by comparing the figures 1286 and 10000 to quantities people understand better. In my case, we might note my hometown had a bit over 10k people, and the high school had a bit under 1286. That could give me a less abstract understanding of what that kind of casualty rate means. With that understanding, I might be able to make a better judgment about the situation, especially if, like many people, I dislike math and numbers. (Which is perfectly reasonable given how they were subjected to unpleasant math classes for years.)

What about that 24% figure? Well, it contains within itself fewer hints of what to apply it to in order to understand it. We aren't handed numbers we already know how to relate to our experience. It may be harder to get started.

In other words, thinking of a new perspective provides new knowledge about the situation, that was not contained in the information communicated to the study participants. It was implied, but so were infinitely many other things. There is much skill in knowing what implications find and follow. So, this contextualizing knowledge must be created, and many people don't know to do so, or do so poorly. The study questions which are more helpful to people in creating this kind of knowledge may understandably and reasonably result in people making better judgments, because they present more useful information.

comment by Elliot · 2007-11-27T22:10:26.000Z · LW(p) · GW(p)

implications to find and follow (missing word)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-11-27T22:31:15.000Z · LW(p) · GW(p)

Zubon, knowing when to use multiplication, how to use multiplication, why to use multiplication, and doing so reflexively and without outside prompting, is a bit more technocratic than you might think. Have you ever tried to teach math to someone who is not good at math?

comment by Eric_Blincow2 · 2007-11-27T23:00:10.000Z · LW(p) · GW(p)

Elliot wrote: "I don't think these people are quite as silly as is made out. " "What is alleged about people seems to be that they have very bad judgment, or they are irrational."

Clearly human beings have a brain relatively well suited to their world which is, nevertheless, far from infallible. Hence stock market crashes, wars, and all manner of other phenomena which demonstrate the imperfect judging ability of the human mind. The human mind commits errors. One needn't condemn the human mind, or the average capacity of humanity, in order to point out these errors and speculate as to their causes—as this seems to be a fruitful endeavor for learning more about the function of the mind, which I think the above-linked chapter demonstrates very well. One need not pass any sort of value judgement relating to decision makers—in fact, it's far better if one doesn't, because that is only distracting and ultimately polemical. We want to measure the precision with which the human mind models reality, and what its sources of error are. So it is in the end completely irrelevant in the context of this discussion if this or that group of decision makers is labelled as "super smart", "idiotic", "irrational", "silly", etc. The point is to investigate the underlying processes.

comment by Caledonian2 · 2007-11-27T23:35:39.000Z · LW(p) · GW(p)

So now five people have made the same comment, all with the same length (1 to 3 sentences), all with a relatively similar, bland style of expression.

Great minds think alike. And fools seldom differ.

comment by Zubon · 2007-11-28T14:28:00.000Z · LW(p) · GW(p)

Eliezer, we could spend a long time commiserating on that one. I used to think the problem was that people never learned algebra properly, but I have begun to wonder how many have a firm grasp on applying second grade math. The hard part seems to be knowing what to divide or multiply by what (teaching Bayes' Theorem is fun for this). Real life is all story problems.

Recent adventures in math include baffling a room with the insight that 12*5/12=5 and explaining how to figure out what percent 300 is of 1200. Perhaps I should be more worried about the technocratic difficulties of addition; Division of Labour has an occasional series of "The Diff."

comment by Barkley_Rosser · 2007-11-28T18:24:03.000Z · LW(p) · GW(p)

Eliezer is correct that lots of people are very bad at calculating probabilities, and there are all kinds of well known biases in calculating when affect gets involved, especially small sample biases when one is personally aware of an outlier example, especially a bad one.

However, the opening example is perfectly fine. Eliezer even has it: the higher insurance is to cover the real emotional pain of losing the more personally valued grandfather clock. How much we subjectively value something most certainly depends on the circumstances of how we obtained it. There is nothing irrational about this whatsoever. Rationality above all involves following that old advice of Polonius: know thyself.

comment by Nathan_Myers · 2007-11-28T21:34:00.000Z · LW(p) · GW(p)

With 7 beans in a hundred, I can just keep drawing beans until I get $14 worth, where with 1 in ten, the most I can get is $2. Not only that, I get to eat a hundred free jelly beans. This doesn't seem too mysterious to me.

comment by Benquo · 2007-11-28T22:59:00.000Z · LW(p) · GW(p)

Barkley Rosser,

The monetary payout isn't higher for the more emotionally valuable object -- it's $100 in both cases. If you missed that, that could explain why people paid more for it; they ignored the dollar figure and assumed that the more valuable item was insured for more.

But if you didn't miss that... Are you suggesting that the $100 is more valuable when it coincides with a greater misfortune?

comment by Barkley_Rosser · 2007-11-29T04:24:07.000Z · LW(p) · GW(p)

Benquo,

You are right. I misread it. The first case is one of irrationality.

comment by Chris3 · 2008-03-30T16:53:06.000Z · LW(p) · GW(p)

"A 7% probability versus 10% probability may be bad news, but it's more than made up for by the increased number of red beans. It's a worse probability, yes, but you're still more likely to win, you see. You should meditate upon this thought until you attain enlightenment as to how the rest of the planet thinks about probability."

I think this says less about probability and more about people's need to keep an optimistic outlook on life. You emphasize the positive fact that there's an "increased number of red beans", while ignoring the equally true fact that there's also a far greater increase in the number of non-red beans. This tends to support the cliched wisdom that people tend to filter out bad news, and hear only what they want to hear. It's a pretty good reflection of human nature.

comment by Chris3 · 2008-03-30T16:58:36.000Z · LW(p) · GW(p)

P.S. There's something screwy with the comments on this page. My first comment didn't show up at all after I posted, so I reposted, and now it's showing up as "Posted by: Barkley Rosser"...

comment by simplicio · 2010-03-13T17:27:36.534Z · LW(p) · GW(p)

"This may sound crazy to you, oh Statistically Sophisticated Reader, but if you think more carefully you'll realize that it makes perfect sense. A 7% probability versus 10% probability may be bad news, but it's more than made up for by the increased number of red beans. It's a worse probability, yes, but you're still more likely to win, you see. You should meditate upon this thought until you attain enlightenment as to how the rest of the planet thinks about probability."

I snorted rather loudly upon reading this, and sent the quote to a friend of mine. He told me that he was once playing WoW (get thee behind me, Satan!) and there was some sort of task in which you had to try to get an item, and each try was known to have a 1% chance of success. His companion in WoW was convinced that what that meant was: try 100 times, and on the hundredth try you get the item. Why? Because 100x1%=100%. Obviously.

Replies from: wnoise, DSimon
comment by wnoise · 2010-03-13T18:43:40.031Z · LW(p) · GW(p)

The conclusion, while ridiculous, is not too misleading in practice. 100 is the right order of magnitude that one would expect to have to play to get one -- the expected time of the first reward is 100, and it does have an expected return of 1 after 100 times. Of course, it only has a 63.4% chance of getting at least one after 100 times.
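
(A quick check of those figures, assuming independent attempts with a 1% success chance each:)

```python
# 1% success chance per attempt, 100 independent attempts
p, n = 0.01, 100
print(n * p)               # expected successes after 100 tries: 1.0
print(1 / p)               # expected number of tries until the first success: 100
print(1 - (1 - p) ** n)    # chance of at least one success in 100 tries: ~0.634
```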

Replies from: simplicio
comment by simplicio · 2010-03-13T19:22:05.456Z · LW(p) · GW(p)

Oh, certainly. I wouldn't expect somebody to calculate that in their head; what amused me is not that he thought that he ought to get it within 100 tries, but that he expected to get it on the 100th try.

comment by DSimon · 2012-04-18T23:56:55.641Z · LW(p) · GW(p)

get thee behind me, Satan!

Why do people want Satan to go behind them? Isn't that just asking to be stabbed in the back with a pitchfork? If I thought it would have any effect I'd tend to go for something like "Get the frak away from me, Satan!".

Replies from: wedrifid
comment by wedrifid · 2012-04-19T00:12:02.786Z · LW(p) · GW(p)

If I thought it would have any effect I'd tend to go for something like "Get the frak away from me, Satan!".

Not 'go to hell'?

comment by buybuydandavis · 2011-10-27T08:33:55.602Z · LW(p) · GW(p)

A 7% probability versus 10% probability may be bad news, but it's more than made up for by the increased number of red beans.

The comedic timing was awesome! It just broke me into the giggles. They keep sneaking out. I can't stop. I'll be laughing for weeks about this.

Replies from: pedanterrific
comment by pedanterrific · 2011-10-27T08:42:26.488Z · LW(p) · GW(p)

I'm pretty sure giggling for a week is a perfectly normal side effect of attaining enlightenment as to how the rest of the planet thinks about probability. Be sure to apply the topical anti-existentialism cream once a day.

comment by gRR · 2012-02-10T18:28:04.786Z · LW(p) · GW(p)

One group saw the measure described as saving 150 lives. The other group saw the measure described as saving 98% of 150 lives. The hypothesis motivating the experiment was that saving 150 lives sounds vaguely good - is that a lot? a little? - while saving 98% of something is clearly very good because 98% is so close to the upper bound of the percentage scale. Lo and behold, saving 150 lives had mean support of 10.4, while saving 98% of 150 lives had mean support of 13.6.

Pragmatics of normal language usage prescribes that any explicitly supplied information will be relevant to the hearer. Assuming that "98%" is relevant, and no other useful information, it is rational to support a measure with such a high level of efficiency, and to support it more than one for which no efficiency figure is provided.

comment by AstraSequi · 2012-03-16T01:52:00.052Z · LW(p) · GW(p)

"Yamagishi (1997) showed that subjects judged a disease as more dangerous when it was described as killing 1,286 people out of every 10,000, versus a disease that was 24.14% likely to be fatal. Apparently the mental image of a thousand dead bodies is much more alarming, compared to a single person who's more likely to survive than not."

I'm not sure this is necessarily due to the mental image. My initial thoughts on reading this were that "1,286 people out of every 10,000" carries connotations implying that at least 10,000 people have been affected, since it would be strange to say that otherwise (you would say "out of 1,000" or a different convenient denominator). The 24.14% figure does not contain this information.

It's still not valid reasoning, since they used diseases that affect far more than 10,000 people. I'm just saying that I think the underlying basis in this example may be different.

comment by Paulovsk · 2012-06-15T12:49:38.092Z · LW(p) · GW(p)

The link at the end of the text is broken. I've found another one; would you update it?

Check whether it is the same pdf before posting. I believe it is.

comment by Holograph · 2012-07-17T17:25:35.660Z · LW(p) · GW(p)

I think some of these experiment results are better explained by a bunch of different quirks in human thinking, not only the affect heuristic. Maybe I'm overconfident in my knowledge here, but still I'm going to go through them in order:

The thing about the clock is obviously the affect heuristic at work and there doesn't seem to be much more to it. The disease example I take issue with, however. It seems to me that it's more about framing than about the affect heuristic. Though people's emotions about a deadly disease are at play too, the crucial difference is between the way people get to hear about either a bunch of people dying (relative frequency) or an abstract percentage. They get to imagine the people. Percentages are harder to think about, as that is not what our minds are naturally designed to do, and that might be why relative frequency gets through to us better. (People also react more strongly to hearing "out of every 1000 people who take this medication 1 gets side effects" (that person could be me!) than to hearing "the chance of side effects from this medication is 0.1%" (that sounds like a low percentage...))

The two examples that follow further drive this point. Although there's a positive association with 98% as being close to the upper bound, the other example about the marbles is just an example of how we're bad at learning to intuitively and automatically apply our knowledge of statistics. 7 marbles might feel like more because we somehow imagine "7 red, the rest white" (vs "1 red, the rest white") but there's no need to involve our emotions about these marbles to explain it.

The Finucane experiment is what I most associate directly with the affect heuristic. The subjects were using their feelings towards one aspect as a heuristic for judging the other aspects. (a bit like the halo effect really)

The last experiment I don't feel I understand well enough to speak about.

(sorry if my english is bad, it's not my first language. Also, this is my first comment here, I recently discovered this site. Hi! :D )

comment by Colombi · 2014-02-20T05:40:41.102Z · LW(p) · GW(p)

Is this a derivative of the charity question, about saving the 20,000 birds? Seems very similar. And I love the bias you describe people having even when the stats themselves are not biased at all. This is like a mini psychology lesson! I love it!

comment by pyrovortex · 2014-08-20T04:55:35.624Z · LW(p) · GW(p)

Many of the examples given here suffer from what look to be deliberate ambiguities that leave the exact meaning of one of the compared elements wide open to interpretation. Note that I have not examined the source materials for consistency with your summary results, so perhaps this is an issue with the phrasing of your summary rather than the original research. For example:

My mind interprets "a disease [that kills] 1,286 people out of every 10,000" as: "for any given person, there is a 12.86% chance of dying of disease (A)". Since the statement made no qualification on people, I must infer that it is not a constrained sample, especially since this form of phrasing is often used to describe the death statistics for disasters, epidemics, or mass killings, which will often affect an entire population.

Conversely, a disease "that is 24.14% likely to be fatal" reads as "for a given person who already has disease (B), there is a 24.14% chance of dying from it". Without information about the infection rate, it stands to reason that disease (A), which has a fatality rate of somewhere between 12.86% and 100% and is known to infect somewhere between 12.86% and 100% of the total population (inversely to the fatality rate), is likely more dangerous. Thus I would argue that the issue in this case is more about inferring particular meanings in the presence of ambiguity than a fault of mathematics or logic.

For another example: A measure that saves [exactly] 150 lives is objectively worse than a measure that saves 98% of [a sample of] 150 lives, for any group of individuals larger than 150/0.98 = 153.06 people.

comment by Brotherzed · 2014-11-21T18:14:13.788Z · LW(p) · GW(p)

The way you're summarizing the "disease" study mangles what was described in the abstract, even though the abstract makes your own point. I haven't checked the rest. I went digging for the abstract:

Participants assessed the riskiness of 11 well-known causes of death. Each participant was presented with an estimation of the number of deaths in the population due to that particular cause. The estimates were obtained from a previous study of naive participants' intuitive estimations. For instance, based on the result of the previous study, the number of deaths due to cancer was presented as: ‘2,414 out of 10,000’, ‘1,286 out of 10,000’, ‘24.14 out of 100’ and ‘12.86 out of 100’. The estimates of deaths were presented in analogous ways for the remaining ten causes of death. It was hypothesized that the judged degree of riskiness is affected by the number of deaths, irrespective of the total possible number (such as 10,000 or 100). Results from Experiment 1 were consistent with this prediction. Participants rated cancer as riskier when it was described as ‘kills 1,286 out of 10,000 people’ than as ‘kills 24.14 out of 100 people’, and similar results were observed regarding the remaining 10 causes of death. Experiment 2 replicated this trend. Implications for risk communications are discussed. © 1997 John Wiley & Sons, Ltd.

The way you described it --

Then how about this? Yamagishi (1997) showed that subjects judged a disease as more dangerous when it was described as killing 1,286 people out of every 10,000, versus a disease that was 24.14% likely to be fatal. Apparently the mental image of a thousand dead bodies is much more alarming, compared to a single person who's more likely to survive than not.

Crucially, your verbiage as-is provides Group A with a 12% total population mortality rate, and Group B with a 24% case fatality rate, and those are incommensurable. I'm assuming you meant to say the information was presented to two separate groups, maybe too generously there too. The original study very explicitly specifies mortality rate for both figures. I.e., 24.14 out of 100 to be fatal for the whole population (for a cancer, and not expressed as a % - different priming effects on some).

If you got that past all of us, I think it shows there are chinks in our armor as well. I wouldn't deny that the affect heuristic is real, but the way you present the information doesn't pass my smell test.

comment by EngineerofScience · 2015-08-04T14:00:59.541Z · LW(p) · GW(p)

Suppose an airport must decide whether to spend money to purchase some new equipment, while critics argue that the money should be spent on other aspects of airport safety. Slovic et. al. (2002) presented two groups of subjects with the arguments for and against purchasing the equipment, with a response scale ranging from 0 (would not support at all) to 20 (very strong support). One group saw the measure described as saving 150 lives. The other group saw the measure described as saving 98% of 150 lives. The hypothesis motivating the experiment was that saving 150 lives sounds vaguely good—is that a lot? a little?—while saving 98% of something is clearly very good because 98% is so close to the upper bound of the percentage scale. Lo and behold, saving 150 lives had mean support of 10.4, while saving 98% of 150 lives had mean support of 13.6.

I find this very interesting, but I think if people would both work hard enough to multiply 98% by 150 and would be educated about biases of the human brain then people would support the right things more often.

This is another problem with school. School (at lower levels especially) teaches things that are unimportant but doesn't teach useful stuff. For example, states and capitals of the U.S. can be googled in less than thirty seconds, but Bayes' theorem and overcoming biases can't.

Replies from: ChristianKl
comment by ChristianKl · 2015-08-04T21:04:36.433Z · LW(p) · GW(p)

I find this very interesting, but I think if people would both work hard enough to multiply 98% by 150 and would be educated about biases of the human brain then people would support the right things more often.

Research on biases in general comes to the conclusion that being educated about biases does very little in cases like that.

comment by Kate Gladstone (kate-gladstone) · 2019-07-12T23:00:20.362Z · LW(p) · GW(p)

The Denes-Raj/Epstein study makes me wonder whether the subjects would still have picked the jar with 100 beans (7 red) if, say, the other jar had been announced to contain 6 beans (5 red). Is there any “tipping point” (any specific number or percentage of red beans versus other beans) at which the subjects finally choose to follow the probabilities instead of going with “more reds”? What if the other jar had been stated to contain only 5, 4, 3, 2, or 1 bean — but with ALL beans in that jar stated to be red? Would some subjects still go for the jar with 7 red beans in 100 (because 7 is more than five)? Has anyone tested the possibility that some subjects would actually say: “Yes, I know that I’m guaranteed to win if I pick from a jar that contains only one red bean and no other beans — but I’m still picking from the jar that has 7 red beans and 93 that aren’t red, because 7 is so much more than 1”?!

comment by Dio · 2019-07-30T16:28:51.039Z · LW(p) · GW(p)

I think on the disease example that the so-called heuristic interpretation is not necessarily irrational but depends on the ambiguous significance of each description of evidence. The statement that the disease kills 1,286 people out of every 10,000 can be interpreted as the report of a killing and is equivalent to "the disease is definitely fatal to at least some people". However the statement that the disease is 24.14% likely to be fatal can be interpreted as merely a speculation of the disease's potential fatality.

comment by Caperu_Wesperizzon · 2022-09-08T17:42:56.300Z · LW(p) · GW(p)

This may sound rational—why not pay more to protect the more valuable object?—until you realize that the insurance doesn’t protect the clock, it just pays if the clock is lost, and pays exactly the same amount for either clock. (And yes, it was stated that the insurance was with an outside company, so it gives no special motive to the movers.)

There's always the hope that, if enough customers pay the outside company enough, it'll be zealous and make the movers an offer they can't refuse.