Absurdity Heuristic, Absurdity Bias

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-09-05T03:20:06.000Z · LW · GW · Legacy · 10 comments

Followup to: Stranger Than History; Robin's post What Evidence Ease of Imagination?

I've been pondering lately the notion of "absurdity" - wondering what exactly goes on in people's minds when they utter the adjective "absurd" or the objection "Absurd!"

If there is an absurdity heuristic, it would seem, at first glance, to be the mirror image of the well-known representativeness heuristic.  The less X resembles Y, or the more X violates typicality assumptions of Y, the less probable it seems that X is the product, explanation, or outcome of Y.  A sequence of events is less probable when it involves an egg unscrambling itself, water flowing upward, machines thinking, or dead people coming back to life.  Since human psychology is not a pure structure of quantitative probabilities, it is easy to imagine that the absurdity heuristic is separate from the representativeness heuristic - implemented by separate absurdity-detecting brainware.

I suspect people may also be more sensitive to "absurdity" that invalidates a plan or indicates cheating.  Consider the difference between "I saw a little blue man yesterday, walking down the street" versus "I'm going to jump off this cliff and a little blue man will catch me on the way down" or "If you give me your wallet, a little blue man will bring you a pot of gold."  (I'm thinking, in particular, about how projections of future technology are often met by the objection, "That's absurd!", and how the objection seems more violent than usual in this case.)

As Robin observed, a heuristic is not necessarily a bias.  The vast majority of objects do not fall upward.  And yet helium balloons are an exception.  When are exceptions predictable?

I can think of three major circumstances where the absurdity heuristic gives rise to an absurdity bias:

The first case is when we have information about underlying laws which should override surface reasoning.  If you know why most objects fall, and you can calculate how fast they fall, then your calculation that a helium balloon should rise at such-and-such a rate ought to strictly override the absurdity of an object falling upward.  If you can do deep calculations, you have no need for qualitative surface reasoning.  But we may find it hard to attend to mere calculations in the face of surface absurdity, until we see the balloon rise.
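
A minimal sketch of that sort of calculation, with assumed round numbers for the balloon's size and envelope mass:  Archimedes' principle says the balloon rises whenever the weight of displaced air exceeds the weight of the helium plus the envelope.

```python
# A minimal sketch of the "deep calculation" that overrides surface
# absurdity: Archimedes' principle applied to a helium party balloon.
# Densities are standard sea-level values; the balloon radius and
# envelope mass are illustrative assumptions, not measurements.

import math

RHO_AIR = 1.225        # kg/m^3, air at sea level
RHO_HE = 0.166         # kg/m^3, helium at sea level
G = 9.81               # m/s^2

radius = 0.15          # m, ~30 cm party balloon (assumed)
envelope_mass = 0.002  # kg, rubber envelope (assumed)

volume = (4 / 3) * math.pi * radius**3
# Buoyant force minus the weight of the helium and the envelope:
net_force = (RHO_AIR - RHO_HE) * volume * G - envelope_mass * G

print(f"volume: {volume:.4f} m^3")
print(f"net upward force: {net_force:.3f} N")  # positive => it rises
```

The surface rule "objects fall" never enters the calculation; the underlying law settles the question by itself.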

(In 1913, Lee de Forest was accused of fraud for selling stock in an impossible endeavor, the Radio Telephone Company:  "De Forest has said in many newspapers and over his signature that it would be possible to transmit human voice across the Atlantic before many years. Based on these absurd and deliberately misleading statements, the misguided public...has been persuaded to purchase stock in his company...")

The second case is a generalization of the first - attending to surface absurdity in the face of abstract information that ought to override it.  If people cannot accept that studies show that marginal spending on medicine has zero net effect, because it seems absurd - violating the surface rule that "medicine cures" - then I would call this "absurdity bias".  There are many reasons that people may fail to attend to abstract information or integrate it incorrectly.  I think it worth distinguishing cases where the failure arises from absurdity detectors going off.

The third case is when the absurdity heuristic simply doesn't work - the process is not stable in its surface properties over the range of extrapolation - and yet people use it anyway.  The future is usually "absurd" - it is unstable in its surface rules over fifty-year intervals.

This doesn't mean that anything can happen.  Of all the events in the 20th century that would have been "absurd" by the standards of the 19th century, not a single one - to the best of our knowledge - violated the law of conservation of energy, which was known in 1850.  Reality is not up for grabs; it works by rules even more precise than the ones we believe in instinctively.

The point is not that you can say anything you like about the future and no one can contradict you; but, rather, that the particular practice of crying "Absurd!" has historically been an extremely poor heuristic for predicting the future.  Over the last few centuries, the absurdity heuristic has done worse than maximum entropy - ruled out the actual outcomes as being far too absurd to be considered.  You would have been better off saying "I don't know".
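
To make "worse than maximum entropy" concrete:  on a binary question, maximum entropy means assigning 1/2 to each outcome.  Here is a toy comparison, under an assumed "Absurd!" probability of 0.001, scored on an outcome that actually occurred.

```python
# A toy illustration (not from the post itself) of why crying "Absurd!"
# can score worse than maximum entropy. One binary question; the
# "absurd" outcome actually happens. Probabilities assumed for
# illustration.

import math

def log_score(p_assigned_to_outcome):
    """Log score for the outcome that actually occurred (higher is better)."""
    return math.log(p_assigned_to_outcome)

p_maxent = 0.5    # "I don't know": uniform over the two outcomes
p_absurd = 0.001  # "Absurd!": near-certain the outcome won't happen

print(f"max entropy: {log_score(p_maxent):.2f}")  # ~ -0.69
print(f"'Absurd!':   {log_score(p_absurd):.2f}")  # ~ -6.91
# When the "absurd" outcome occurs, the confident forecaster takes a
# penalty of log(500) ~ 6.2 nats relative to just saying "I don't know".
```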

10 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Robin_Hanson2 · 2007-09-05T10:23:41.000Z · LW(p) · GW(p)

Part of what may be going on is a distrust of abstract reasoning. The absurdity heuristic seems to us to be a relatively direct and reliable indicator, while indirect abstract reasoning seems more susceptible to reasoning errors and overconfidence. And of course we all prefer the indicators that we are better at - those who are bad at abstract reasoning will therefore downgrade it.

comment by Hopefully_Anonymous · 2007-09-05T13:10:52.000Z · LW(p) · GW(p)

Eliezer, great post, and I appreciate the writing (which I think is clearer, more direct, and makes optimized use of parables & examples).

Also, good comment from Robin.

comment by Doug_S. · 2007-09-05T17:35:47.000Z · LW(p) · GW(p)

Of all the events in the 20th century that would have been "absurd" by the standards of the 19th century, not a single one - to the best of our knowledge - violated the law of conservation of energy, which was known in 1850.

::nitpick:: Nuclear reactions violate the 1850s version of the law of conservation of energy. We get around that today by redefining mass as a kind of energy via Einstein's famous equation E=mc^2.

Transmutation of chemical elements? Absurd! ::end nitpick::
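
For scale, a quick back-of-the-envelope check of that redefinition, using an assumed one gram of mass:

```python
# A quick numeric check (illustrative, not from the comment) of the
# mass-energy equivalence cited above: the energy "hidden" in one
# gram of mass, via E = m * c**2.

c = 299_792_458  # m/s, speed of light (exact)
m = 0.001        # kg, one gram (assumed for illustration)

energy = m * c ** 2
print(f"E = {energy:.3e} J")  # ~9.0e13 J, roughly 21 kilotons of TNT
```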

comment by Michael_Sullivan · 2007-09-05T20:08:11.000Z · LW(p) · GW(p)

Over the last few centuries, the absurdity heuristic has done worse than maximum entropy - ruled out the actual outcomes as being far too absurd to be considered. You would have been better off saying "I don't know".

Really? I doubt it.

On the set of things that looked absurd 100 years ago, but have actually happened, I'm quite sure you're correct. But of course, that's a highly self-selected sample.

On the set of all possible predictions about the future that were made in 1900? Probably not.

I recall reading, not long ago, a list of predictions written in 1900 about technological and social changes expected during the 20th century. Might have been linked from a previous discussion on this blog, in fact. The surprising thing to me was not how many predictions were way off (quite a few), but how many were dead on, or about as close as they could have been presented in the language and concepts known in 1900 (maybe half).

I'm not going to claim that anti-absurdity is a good heuristic, but I don't think you're judging it quite fairly here. I think it's a fair bit better than maximum entropy.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2011-09-10T23:09:36.214Z · LW(p) · GW(p)

The problem is that by declaring something "Absurd" you're making a very strong bet against it. You're going to lose a fair number of these bets.

Suppose calling something absurd merely means it's 1% probable. If you're right about that 90% of the time, each one you get wrong costs you a factor of 10 on your accuracy, far more than you gain from ascribing the extra 9% probability to the other nine cases where you happened to be right. And 1% is high enough that few would call it truly absurd.

Calling something absurd is asking to be smacked hard (in terms of accuracy) if you're wrong - and feeling safe about it.
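
To check that arithmetic, here is a sketch with an assumed ten predictions, one of which comes true:

```python
# A quick check (a sketch of the arithmetic above, with assumed
# numbers): ten "Absurd!" calls at 1% each, of which one event
# actually happens, versus a calibrated forecaster who assigns 10%
# to each.

def joint_likelihood(p_event, n_total=10, n_happened=1):
    """Probability assigned to the observed sequence of outcomes."""
    n_missed = n_total - n_happened
    return (p_event ** n_happened) * ((1 - p_event) ** n_missed)

absurd = joint_likelihood(0.01)      # 0.01 * 0.99**9 ~ 0.0091
calibrated = joint_likelihood(0.10)  # 0.10 * 0.90**9 ~ 0.0387

print(f"'Absurd!' (1%):   {absurd:.4f}")
print(f"calibrated (10%): {calibrated:.4f}")
print(f"ratio: {calibrated / absurd:.1f}x")  # ~4.2x for calibration
```

The factor of 10 lost on the one miss swamps the roughly 2.4x gained across the nine hits, so the calibrated forecaster ends up about 4x more accurate overall.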

Replies from: Michael_Sullivan
comment by Michael_Sullivan · 2011-10-07T02:38:16.685Z · LW(p) · GW(p)

Bringing myself back to what I was thinking in 2007 -- I think we have some semantic confusion around two different senses of absurdity. One is the heuristic Eliezer discusses -- the determination of whether a claim/prediction has surface plausibility. If not, we file it under "absurd". An absurdity heuristic would be some heuristic which considers surface plausibility or lack thereof as evidence for or against a claim.

On the other hand, we have the sense of "Absurd!" as a very strong negative claim about something's probability of truth. So "Absurd!" stands in for "less than .01/.001/whatever", instead of a term such as "unlikely", which might mean "less than .15".

I was talking only about the first sense. It seemed to me that Eliezer was making a very strong claim that the absurdity heuristic (in the first sense) does no better than maximum entropy. That's equivalent to saying that surface plausibility or lack thereof amounts to zero evidence -- that allowing yourself to modify probabilities downward due to "absurdity" even a small amount would be an error.

I strongly doubt that this is the case.

I agree completely that a claim of "Absurd!" in the second sense about a long-dated future prediction cannot ever be justified merely by absurdity in the first sense.

comment by Hamilton-Lovecraft · 2008-03-07T21:51:55.000Z · LW(p) · GW(p)

Sagan said "They laughed at Columbus, they laughed at Fulton, they laughed at the Wright Brothers. But they also laughed at Bozo the Clown."

comment by TheOtherDave · 2010-10-26T16:26:04.755Z · LW(p) · GW(p)

Yes.

Perhaps only peripherally related to the point of your post: I have a pet peeve about people using "X is absurd!" to mean they feel really strongly that X is false.

What I try to mean when I call a proposition P1 absurd in a context C1 is that P1 contradicts some fundamental organizing principle of C1, such that if I accept P1 the entire system of thought comes under attack.

I think this is what the absurdists (in a literary sense) are getting at: absurd statements, if taken seriously, challenge the ways we interpret events and leave us unable to trust the metaphorical -- and perhaps the literal -- ground under our feet. They challenge axioms, if you prefer.

That doesn't necessarily mean they're false, though it would of course be nice to believe that any system of thought I actually implement doesn't allow for true absurd statements. It does mean that, if true, they are important.

If P1 is genuinely absurd, two things follow:

  1. Any evidence that supports P1 (that shifts its probability up, if you prefer) is worth considering very carefully and explicitly, because the emotional drive to simply dismiss it will be strong.
  2. If there is evidence supporting it, I should tread carefully around the implications of that, because it's quite plausible that my normal habits of thought won't work quite right for them.

(Yes, of course, careful and explicit and rigorous thought is always a good thing. But most of the time, its benefits aren't all that immediate.)

Replies from: Michael_Sullivan
comment by Michael_Sullivan · 2011-10-07T02:45:16.044Z · LW(p) · GW(p)

I think of this as "heresy", and agree that it is a very useful concept.

comment by niplav · 2022-08-06T13:33:37.253Z · LW(p) · GW(p)

The future is usually "absurd" - it is unstable in its surface rules over fifty-year intervals.

I believe this is glossing over a very interesting and not at all obvious question while already providing an answer: What aspects of reality are stable in their surface rules for which time intervals?

There's been some research into that question, but not enough to warrant such a strong statement.