Rationalist Judo, or Using the Availability Heuristic to Win

post by jschulter · 2011-07-15T08:39:35.029Z · LW · GW · Legacy · 28 comments

During the sessions at the 2011 rationality minicamp, we learned that some of our biases can be used constructively, rather than just tolerated and avoided.

For example, in an excellent article discussing intuitions and the way they are formed, psychologist Robin Hogarth recommends that "if people want to shape their intuitions, [they should] make conscious efforts to inhabit environments that expose them to the experiences and information that form the intuitions that they want."

Another example: Carl Shulman remarked that due to the availability heuristic we anticipate car crashes with frequencies determined by how many people we know of or have heard about who have gotten into one. So if you don't fear car crashes but you want to acquire a more accurate level of concern about driving, you could seek out news or footage of car crashes. Video footage may work best, because experiential data unconsciously inform our intuitions more effectively than, say, written data.

This fact may lie behind many effective strategies for getting your brain to do what you want it to do:

In The Mystery of the Haunted Rationalist we see someone whose stated beliefs don't match their anticipations. Now we can actually use the brain's machinery to get it to do what we want: alieve that ghosts aren't real or dangerous. One method would be for our ghost-stricken friend to get people to tell her detailed stories about pleasant nights they spent in haunted houses (complete with spooky details) where nothing bad happened. Alternatively, she could read books or watch videos with similar content. Best of all would be for her to spend a month living in a 'haunted' house, perhaps after doing some of the other things to soothe her nerves. Many will attest that eventually one 'gets used to' the scary noises and frightening atmosphere of an old house, and ceases to be scared when sleeping in similar houses.

I attribute the effectiveness of these tactics mostly to successful persuasion of the non-conscious brain using experiential data.

So, it seems we have a (potentially very powerful) new technique to add to our rationalist arsenal. To summarize:

  1. Find something you want to alieve.
  2. Determine what experiences that alief should cause you to anticipate.
  3. Have those experiences, by proxy if necessary, whether artificial or not.
  4. Test whether you now anticipate what you want to.
  5. If the test reveals progress, but not enough, repeat.


It can be annoying when our unconsciously moderated aliefs don't match our rationality-influenced beliefs, but luckily our aliefs can be trained.


1 Thanks to Hugh Ristik for talking about this at minicamp.

2 Credit for this example goes to Brandon Reinhart.

Special thanks to Luke for all the help.


Comments sorted by top scores.

comment by NancyLebovitz · 2011-07-15T17:52:54.317Z · LW(p) · GW(p)

The other side of this is to try to be aware if people are trying to load up your mind with fake experiences to influence your intuition.

Replies from: jschulter
comment by jschulter · 2011-07-18T16:01:01.898Z · LW(p) · GW(p)

This can also happen unintentionally. One thing that might have caused the original haunted-rationalist problem is watching or reading too much horror fiction: if most of the experiences you've seen involving an old house end with people tortured and dead, then even if you knew they were all fictitious, you will still anticipate, at least weakly, bad things happening in old houses. This also makes me wary that my anticipations about the future are likely highly influenced by all the science fiction I read, so I know to watch my aliefs in that regard very closely.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-07-18T20:01:24.575Z · LW(p) · GW(p)

I'm not sure my aliefs have been affected that strongly, but I've gotten annoyed by stories which consist of a cool idea followed by disaster. It's lazy plotting.

comment by Michelle_Z · 2011-07-15T16:39:15.255Z · LW(p) · GW(p)

I tried a similar technique a couple of years ago. I had an irrational fear of pool drains (the ones at the bottom) after seeing on the news that a woman drowned when her hair was caught in one. After panicking in a public pool once when I realized I was swimming above one, I decided to stand on top of the drain in the shallow end of the pool at my house for a while. It took time, but it worked: the fear diminished a good amount. I still feel a compulsion to avoid them, but I don't panic or feel too anxious around them.

I'm glad this was posted here. This is a good habit to pick up.

comment by wedrifid · 2011-07-17T08:08:38.171Z · LW(p) · GW(p)

Want to alieve snakes are generally not dangerous?

No! Those things can kill you! Perhaps I am safe here in Berkeley for the next month or so, but back home I expect most of the snakes I encounter to be capable of killing me if they bite me. They aren't particularly likely to bite me unless I touch them, corner them, or stand on them - that's where the fear comes in handy. It makes me uncomfortable when walking through long grass, particularly in light footwear. That way I at least pay attention to movements and sounds, and so give the snake a chance to move out of the way before I step on him.

Replies from: jschulter
comment by jschulter · 2011-07-18T15:56:37.173Z · LW(p) · GW(p)

This example was intended as a possible alief you might want to hold, whether or not it matches your beliefs. There are some people who can reasonably expect never to encounter a dangerous snake in the wild but who are nonetheless very afraid of them (and of all other snakes as well); while respect and fear for dangerous and potentially venomous animals is worthwhile for some, for others it is a handicap.

I should also mention (though I took this part out of the article) that there are some situations where one might want to alieve things entirely counter to one's beliefs. The technique allows for cultivating these types of aliefs as well, and not fearing snakes might be one of them. Other examples could be the alief that cake is not delicious, or that drinking and being drunk are boring and often painful. Note that I do not personally advocate lying to oneself in an overly convincing manner, as that way lies darkness.

comment by pjeby · 2011-07-15T20:53:26.655Z · LW(p) · GW(p)

Establishing 'pull motivation' works best with strong visualization, and is reinforced upon experiencing the completion of the task.

To be clear, are you making a general statement here, or describing experimental results from the conference? And if this is from experimental results, could you elaborate on the specific evidence that led to these conclusions?

That is, what specifically do you mean by "strong visualization" and "reinforced", not to mention "experiencing the completion"? Thanks!

Replies from: jschulter
comment by jschulter · 2011-07-18T16:20:43.920Z · LW(p) · GW(p)

The statement about strong visualization (essentially simulating experiences as closely as possible) is taken from the video and from personal (and anecdotal) experience with the method. The reinforcement from actual completion refers to how, once you've completed the task you were motivating yourself to do, you should get the feeling of reward you were imagining in order to motivate yourself. Actually experiencing the reward makes it easier to simulate if you need to become motivated again later. Additionally, the mental connection you'll make between completing the task and the reward makes it less likely that you'll need to repeat the exercise for that task, unless it has an extremely high activation cost: the next time you go to do the task, one of the first things that comes to mind will likely be the reward you felt the last time(s) you performed it.

Replies from: pjeby
comment by pjeby · 2011-07-18T23:06:00.364Z · LW(p) · GW(p)

Perhaps I wasn't clear; I wasn't asking for your conclusions (which were already stated) or your hypothesized mechanisms for those conclusions, but rather, I was asking for evidence and definitions. Would you be willing to share the evidence that led you to formulate the above hypotheses?

I am particularly concerned because some of what you have said sounds like the sort of thing that one might anticipate about the process, but which is not actually the case at all. For example, I have seen no evidence of a reinforcement process such as you describe. (Quite the opposite in fact.) So, if you have actually measured or demonstrated such a reinforcement effect, I would be most curious to know how.

There are other things you're saying that also appear to me to be contrary to actual fact (as opposed to one's intuitive expectations that are easily confirmation-biased into appearing real), so I would really like to find out what specific evidence you have and what contrary explanations you've tested, because I don't wish the efficacy of the technique to be overstated. (Thereby presenting others with something to criticize, never mind that I wasn't the one who made the overstated claim(s).)


Replies from: jschulter
comment by jschulter · 2011-07-21T18:16:26.219Z · LW(p) · GW(p)

Okay, thanks for clarifying the question. I've essentially already stated all the "evidence" I'm using for the claim: it's almost entirely anecdotal, and there are certainly no actual studies that I've used to support this particular bullet point. So there is a good chance I stated things in a way that seems overconfident, and I may in fact be overconfident regarding this particular claim, especially considering that I've not tested alternate explanations for the efficacy I've had. I'd be more than willing to have a detailed discussion of both of our experiences and intuitions with the method, but I feel this probably isn't the place (I've already messaged you), though I'd be happy to update the wording of the article afterwards if necessary.

Replies from: pjeby
comment by pjeby · 2011-07-21T21:28:02.892Z · LW(p) · GW(p)

Okay, thanks for clarifying the question. I've essentially already stated all the "evidence" I'm using for the claim: it's almost entirely anecdotal, and there are certainly no actual studies that I've used to support this particular bullet point.

I don't take issue with anecdotal evidence; it's the complete lack of any specifics whatsoever that's a problem. Even well-run studies are routinely misunderstood, misinterpreted and miscommunicated due to lack of relevant detail.

I'd be more than willing to have a detailed discussion regarding both of our experiences/intuitions with the method

I'm curious about the experiences that led you to the claims that you're making. I really don't want the intuitions or the reasoning behind your interpretations, because I don't want to contribute to erasing the information I really want from your brain. i.e., I'm trying to avoid witness tampering, although it may already be too late for that. ;-)

For the same reason, I'm not interested in a "discussion". I just want facts, or at least a reasonably-specific anecdote about them. ;-)

Anyway, if you'd be willing to share the specific experiences that led you to your conclusions -- and only the experiences, not the reasoning or conclusions -- please do so, whether publicly or privately.


comment by Armok_GoB · 2011-07-15T13:02:55.983Z · LW(p) · GW(p)

Hmm. My brain seems to do something very similar automatically, and I can't think of any clear problems I have of this type (at the moment; that doesn't necessarily mean there aren't any). There is the possibility that some other, less positive factor causes my abnormally high apparent alief-belief correlation, though. Still, figuring out what I did to acquire this habit might be useful to others.

Replies from: jschulter
comment by jschulter · 2011-07-18T16:05:13.789Z · LW(p) · GW(p)

Do you read or watch a lot of fiction? I personally end up selecting for fiction which matches my beliefs somewhat closely, and in retrospect that has likely strongly reinforced the connection. This seems like a reasonable candidate for an automatic yet unnoticed process with those results.

Replies from: Armok_GoB
comment by Armok_GoB · 2011-07-18T16:58:44.323Z · LW(p) · GW(p)

With certain kinds of beliefs, yes, but generally using fictional evidence even for something like this has disadvantages, as does limiting yourself to fiction that reaffirms your beliefs in general.

comment by MixedNuts · 2011-07-15T08:46:58.425Z · LW(p) · GW(p)

You mean alieve, not believe. This is a technique to alieve what you already believe.

Replies from: Jordan, Vaniver, jschulter
comment by Jordan · 2011-07-15T17:29:07.618Z · LW(p) · GW(p)

It's difficult for my brain to parse a sentence with 'alieve'. I guess I've watched too many commercials, and my brain associates 'Aleve' with 'relieve', which has an approximately opposite meaning. I have to mentally substitute 'alieve' with something like 'actually believe' in order to comfortably read the sentence.

comment by Vaniver · 2011-07-16T18:02:59.978Z · LW(p) · GW(p)

I cannot find alieve in any of the dictionaries I have checked. Is it a Lesswrongism? If so, I strongly recommend dropping it, as suggested by Jordan below, as English-speakers parse it as meaning the opposite of how you appear to be using it, and there is no reference for them to turn to.

EDIT: Ah, now it links to alief, which is better. I'm still leery of the word, though, since googling it produces nothing, Wiktionary produces nothing, Wikipedia links to a town in Texas, and it sounds like its opposite.

Replies from: FiftyTwo, MixedNuts
comment by FiftyTwo · 2011-07-19T01:09:26.604Z · LW(p) · GW(p)

It might be more readable if 'alieve' were replaced with something like 'subconsciously believe' or 'have an emotional reaction that...'

comment by MixedNuts · 2011-07-16T18:07:29.504Z · LW(p) · GW(p)

It's a rather recent neologism (not from LW, though). I'm not fanatically attached to the word, but it's pretty important to distinguish beliefs held by System 1 from those held by System 2. Trying to change your beliefs (in the sense of analytic evidence-processing with explicit correction) using the methods in the post would be awful doublethink. Trying to change your beliefs (in the sense of gut feelings you feel impulses to act on, i.e. aliefs) to match the former kind of belief is just dandy.

comment by jschulter · 2011-07-15T08:54:48.343Z · LW(p) · GW(p)


Replies from: jsalvatier
comment by jsalvatier · 2011-07-15T16:19:38.205Z · LW(p) · GW(p)

I think the link to aliefs should go on the first mention. You might also want to remove the extra title at the top and eliminate the extra spacing between paragraphs. (I've had trouble with this; the post is not updated right away when you make a change to the source. I think you have to wait a few minutes for it to change.)

comment by jsalvatier · 2011-07-15T16:21:27.006Z · LW(p) · GW(p)

I like this framing a lot.

comment by FiftyTwo · 2011-07-19T01:06:40.263Z · LW(p) · GW(p)

As looking-glass self theory states,[1] we are shaped by how others see us. This is largely due to the experience of having people react to us in certain ways.

Could you elaborate on how to use this in a positive way? Presumably in getting people to act toward you in a certain way you can motivate your behaviour to change, but it would seem practically very difficult to do so in normal settings.

Edit: spelling & elaboration

comment by lukeprog · 2011-07-17T02:39:04.233Z · LW(p) · GW(p)

You don't need to re-state the title in the body of the post.

comment by Vaniver · 2011-07-15T21:05:53.652Z · LW(p) · GW(p)

Want to alieve that boxing is dangerous? Watch some footage of boxers being punched painfully in the face, and ask a good boxer to win a fight against you in a painful but non-damaging manner. Now are you reluctant to box someone you have a good chance of beating?

I don't avoid boxing because pain is unpleasant. I avoid boxing because it's not worth the brain damage.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2011-07-15T23:49:06.768Z · LW(p) · GW(p)

Sure. But you can easily believe that a rapidly accelerated or rotated skull leads to brain damage, and there are plenty of famously dead or brain-damaged boxers (and NFL players) from brain-rattling. The idea is to anticipate taking many blows if you ever do fight someone, even if you're better.

Replies from: Vaniver
comment by Vaniver · 2011-07-16T18:08:55.061Z · LW(p) · GW(p)

I misunderstood the word "alieve," parsing it as "relieve," and so the message I thought I was replying to was not the intended one.