SotW: Avoid Motivated Cognition

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-05-28T15:57:55.813Z · LW · GW · Legacy · 77 comments

Contents

  Conceptual understanding / insights / theoretical background:
  Countering the rationalization impulse / restoring truth-seeking:
  Noticing flinches and attachments, and raising them to conscious attention:
  Awards for previous SotW suggestions:

(The Exercise Prize series of posts is the Center for Applied Rationality asking for help inventing exercises that can teach cognitive skills.  The difficulty is coming up with exercises interesting enough, with a high enough hedonic return, that people actually do them and remember them; this often involves standing up and performing actions, or interacting with other people, not just working alone with an exercise booklet and a pencil.  We offer prizes of $50 for any suggestion we decide to test, and $500 for any suggestion we decide to adopt.  This prize also extends to LW meetup activities and good ideas for verifying that a skill has been acquired.  See here for details.)


The following awards have been made:  $550 to Palladias, $550 to Stefie_K, $50 to lincolnquirk, and $50 to John_Maxwell_IV.  See the bottom for details.  If you've earned a prize, please PM StephenCole to claim it.  (If you strongly believe that one of your suggestions Really Would Have Worked, consider trying it at your local Less Wrong meetup.  If it works there, send us some participant comments; this may make us update enough to test it.)


Lucy and Marvin are walking down the street one day, when they pass a shop showing a large chocolate cake in the window.

"Hm," says Lucy, "I think I'll buy and eat that chocolate cake."

"What, the whole thing?" says Marvin.  "Now?"

"Yes," says Lucy, "I want to support the sugar industry."

There is a slight pause.

"I don't suppose that your liking chocolate cake has anything to do with your decision?" says Marvin.

"Well," says Lucy, "I suppose it could have played a role in suggesting that I eat a whole chocolate cake, but the reason why I decided to do it was to support the sugar industry.  Lots of people have jobs in the sugar industry, and they've been having some trouble lately."


Motivated cognition is the way (all? most?) brains generate false landscapes of justification in the presence of attachments and flinches.  It's not enough for the human brain to attach to the sunk cost of a PhD program, so that we are impelled in our actions to stay - no, that attachment can also go off and spin a justificational landscape to convince the other parts of ourselves, even the part that knows about consequentialism and the sunk cost fallacy, to stay in the PhD program.

We're almost certain that the subject matter of "motivated cognition" isn't a single unit, probably more like 3 or 8 units.  We're also highly uncertain of where to start teaching it.  Where we start will probably end up being determined by where we get the best suggestions for exercises that can teach it - i.e., end up being determined by what we (the community) can figure out how to teach well.

The cognitive patterns that we use to actually combat motivated cognition seem to break out along the following lines:

  1. Our conceptual understanding of 'motivated cognition', and why it's defective as a cognitive algorithm - the "Bottom Line" insight.
  2. Ways to reduce the strength of the rationalization impulse, or restore truth-seeking in the presence of motivation: e.g., Anna's "Become Curious" technique.
  3. Noticing the internal attachment or internal flinch, so that you can invoke the other skills; realizing when you're in a situation that makes you liable to rationalize.
  4. Realigning the internal parts that are trying to persuade each other: belief-alief or goal-urge reconciliation procedures.

And also:

  5. Recognizing the specific patterns that rationalization tends to follow - the Rationalization Patterns.

Exercises to teach all of these are desired, but I'm setting the Rationalization Patterns apart into a separate SotW, since there are so many of them that I'm worried items 1-4 won't get fair treatment otherwise.  This SotW will focus on items 1-3 above; #4 seems like more of a separate unit.


Conceptual understanding / insights / theoretical background:

The core reasons why rationalization doesn't work are given in The Bottom Line and Rationalization.  The Bayesian analysis of selective search is given in What Evidence Filtered Evidence? and Conservation of Expected Evidence.
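(For reference, the core identity behind Conservation of Expected Evidence - a standard probability fact, restated here rather than quoted from the linked post - is that the prior must equal the expectation of the posterior:

$$P(H) = P(H \mid E)\,P(E) + P(H \mid \neg E)\,P(\neg E)$$

If you expect a line of argument or search to raise your confidence in H when it comes out one way, you must expect it to lower your confidence by a balancing amount if it comes out the other way - which is exactly what a selective, motivated search pretends it doesn't have to do.)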

For further discussion, see the entire Against Rationalization sequence, also The Meditation on Curiosity (for the Litany of Tarski).

Some key concepts (it'd be nice if some exercise taught a gut-level understanding thereof, although as always the goal is to teach skills rather than concepts):

(We might also need an exercise just for getting people to understand the concept of motivated cognition at all.  When Anna and Michael ran their first session on motivated cognition, they found that while most participants immediately recognized the notion of 'rationalization' from examples like Lucy above, several people had no idea what they were talking about - they didn't see why anyone would ever want to use a technique like the Litany of Tarski.  Yes, we know you're skeptical, we also couldn't see how that could possibly be true a priori, but sometimes the evidence just punches you in the nose.  After some investigation, it seems entirely possible that Alicorn has simply never rationalized, ever.  Other cases (not Alicorn's) suggest that some people might have a very low need for verbal justification; even if they feel guilty about breaking their diet, they feel no urge to invent an elaborate excuse - they just break their diet.  On the other hand, LW!Hermione failed to reproduce this experiment - she couldn't find anyone who didn't immediately recognize "rationalization" after 10 tries with her friends.  We notice we are confused.)

(The upshot is that part of the challenge of constructing a first unit on motivated cognition may be to "Explain to some participants what the heck a 'rationalization' is, when they don't remember any internal experience of that" or might even be "Filter out attendees who don't rationalize in the first place, and have them do a different unit instead."  Please don't be fascinated by this problem at the expense of the primary purpose of the unit, though; we're probably going to award at most 1 prize on this subtopic, and more likely 0, and there's an existing thread for further discussion.)


Countering the rationalization impulse / restoring truth-seeking:

The Tarski method:  This is the new name of what we were previously calling the Litany of Tarski:  "If the sky is blue, I want to believe the sky is blue; if the sky is not blue, I want to believe the sky is not blue; let me not become attached to beliefs I may not want."

Example:  Suppose you walk outside on a fall day wearing a short-sleeved shirt, when you feel a slightly chill breath of air on your arms.  You wonder if you should go back into the house and get a sweater.  But that seems like work; and so your mind quickly notes that the Sun might come out soon and then you wouldn't need the sweater.

Diagram:

|   | It stays cold enough to require a sweater | It gets warm enough that no sweater is needed |
| --- | --- | --- |
| You believe you need a sweater | A warm walk in a toasty sweater. | Your walk is ruined forever by the need to carry an extra sweater. |
| You believe you don't need a sweater | You are cold!  Cold cold cold!  Why didn't you get a sweater? | Free and unencumbered, you stroll along as the warm Sun comes out overhead. |

Visualizing all 4 quadrants of this binary proposition - the world is like A and I believe A, the world is like B and I believe A, etc. - should, in principle, emotionally confirm the truth of the proposition:  "If it will be cold, I want to believe it's cold; if it's not cold, I want to believe it's not cold; let me not become attached to beliefs I may not want."

Eliezer and Anna, when using this method against the temptation to believe X, visualize only the quadrant "The world is not like X and I believe X" to remind themselves of the consequences; e.g. we would only visualize the "You are cold!" quadrant.  Michael Smith (aka "Val", short for Valentine) says that after some practice on this technique as a kata, he was able to visualize all 4 quadrants quickly and that visualizing all 4 seemed to help.

Val also used an upside-down W-diagram with the two worlds at the top and the four beliefs at the bottom, to emphasize the idea that the world is there first, and is fixed, and we have only a choice of what to believe within a fixed world, not a choice of which background world to live in.  The Tarski Method embodies a "Start from the world" mental process in which you visualize the world being there first, and your belief coming afterward; a similar "Start from the world" rule is likewise emphasized in the Bayes unit, wherein one starts from a world and asks about the probability of the evidence, rather than starting from the evidence and trying to make it match up with a world.

When we actually tested a unit based on asking people to draw Tarski squares, it didn't work very well - possibly because people didn't seem to understand what it was for, or when they would use it; possibly because it wasn't a group exercise.  In any case, we already tried teaching this the obvious way ("Go draw Tarski squares!") and it didn't work.  But it still seems worth teaching if someone can invent a better exercise, because it's something that multiple CfAR people actually use to counter the rationalization impulse / restore truthseeking in real life.

Become Curious:  Detect non-curiosity and become curious.  Anna's main alarm signal is noticing, in the middle of a conversation, that she's not curious - that she doesn't have an impulse-to-find-out the answer - at which point she tries to make herself curious about the subject of discussion.  Besides visualizing the not-X-and-believe-X quadrant of the Tarski diagram, this is also something you may be able to do by brute introspection - remember the feeling of curiosity, and try to call it up.  (This is probably in the top 3 most important things I learned from Anna. -- EY)

Take Pride in Your Objectivity:  Julia teaches this as a primary counter in her Combat Reflexes unit (how to avoid instantly defending or attacking).  Eliezer does this every time he admits he's wrong on the Internet - congratulates himself on being such a great rationalist, in order to apply counter-hedons to the flash of pain that would otherwise accompany the admission.

Visualize a Fixed Probability:  This is what Eliezer used as a child to stop being scared of the dark - he would deliberately visualize a murderer standing with a knife behind a door, then visualize his own thoughts having no effect on the fixed probability that any such murderer was actually present.  In other words, the notion of a "true probability" that his thoughts couldn't affect, countered the fear of thoughts affecting reality.  Visualizing there being a fixed frequency of worlds, or a lawful probability that a Bayesian agent would assign, can help in perceiving the futility of rationalization because you're trying to use arguments to move a lawful probability that is fixed.  This is also part of the domain of Lawful Uncertainty, the notion that there are still rules which apply even when we're unsure (not presently part of any unit).

Imagine the Revelation:  Anna imagines that the answer is about to be looked up on the Internet, that Omega is about to reveal the answer, etc., to check if her thoughts would change if she was potentially about to be embarrassed right now.  This detects belief-alief divergence, but also provides truthseeking impulse.

Knowing the Rules:  And finally, if you have sufficient mastery of probability theory or decision theory, you may have a procedure to follow which is lawful enough, and sufficiently well-understood, that rationalization can't influence it much without the mistake being blatant even to you.  (In a sense, this is what most of Less Wrong is about - reducing the amount of self-honesty required by increasing the obviousness of mistakes.)


Noticing flinches and attachments, and raising them to conscious attention:

A trigger for use of curiosity-restoration or the Tarski Method:  Noticing what it feels like for your mind to:

  • Flinch away from an unwelcome thought, consideration, or piece of evidence; or
  • Attach to a belief, plan, or conclusion that some part of you wants to keep.

Learning to notice these events introspectively seems extremely important - we all use it heavily in daily practice - but we don't know how to teach that.

  • Anna observes that Rejection Therapy is often a good time to observe oneself rationalizing, as apparently many participants reported that their minds started generating crazy reasons not to approach someone with a request.
  • Anna also says that she's been self-rewarding each time she notices a flinch or attachment, i.e., she's trying to train her inner pigeon to notice (not, one hopes, training the flinching or attachment!)  It's possible we could ask participants to self-reward each event of "noticing the flinch or attachment" while doing Rejection Therapy, but we still need other ideas.
  • Along similar lines of internal behaviorism, Eliezer avoids rewarding himself for rationalizing by repeating the phrase "Only congratulate yourself for actually changing a probability estimate or policy" on any occasion where he hasn't changed his mind after argument - as opposed to, e.g., feeling any sense of reward for having defeated an incoming argument; even if the incoming argument happens to be wrong, still, "Only congratulate yourself for actually changing a probability estimate or policy."
  • Another thing most of us do is name attachments or flinches out loud, in conversation, as we notice them, in order to reduce their strength, e.g. "This is probably a complete post-facto rationalization, but..." (Eliezer) or "I may just be trying to avoid having my status reduced, but..." (Anna).  (Note:  This requires enough trust that nearby people also know they're flawed themselves, that you don't feel embarrassed for confessing your own flaws in front of them.  In other words, you have to tell embarrassing stories about your own failures of rationality before other people will feel that they can do this around you.)

Anna's anti-rationalization makes heavy use of noticing suspect situations where the outside view says she might rationalize - cases where her status is at stake, and so on - and specific keywords like "I believe that" or "No, I really believe that".  She wants to try training people to notice likely contexts for rationalization, and to figure out keywords that might indicate rationalization in themselves.  (Eliezer has never tried to train himself to notice keywords because he figures his brain will just train itself to avoid the trigger phrase; and he worries about likely-context training because he's seen failure modes where no amount of evidence or sound argument is enough to overcome the suspicion of rationalization once it's been invoked.)

"Look toward the painful thought instead of away from it" is an important reflex to install to counter flinches, but would probably require some sort of hedonic support - like a strong, pre-existing pride in objectivity, or a social support group that applauds, or something to stop this from being pure negative reinforcement.

Awards for previous SotW suggestions:

$550 to Palladias for the Monday-Tuesday game, which has been tested ($50) and now adopted ($500) into the Be Specific unit (though it might be moved to some sort of Anticipation unit later on).

$550 to Stefie_K for her suggestion to have the instructor pretend to be someone who really wants you to invest in their company, but is never specific; also $50 to daenerys for the "More Specific!" improv-game suggestion.  In combination these inspired the Vague Consultant game ("Hi, I'm a consultant, I'm here to improve your business processes!"  "How?"  "By consulting with stakeholders!") which has now been adopted into the Be Specific unit.

$50 to lincolnquirk for the "Channel Paul Graham" game, which we tested.  We all thought this would work - it was our highest-rated candidate suggestion - but it didn't get positive audience feedback.  Congratulations to lincolnquirk on a good suggestion nonetheless.

We haven't yet tested, but definitely intend to at least test, and are hence already awarding $50 to, the following idea:

$50 to John Maxwell IV for the Choose Your Own Adventure suggestion for the Consequentialism unit.

To claim a prize, send a LessWrong private message (so we know it originates from the same LW user account) to StephenCole.

77 comments

Comments sorted by top scores.

comment by Kaj_Sotala · 2012-05-26T08:20:40.494Z · LW(p) · GW(p)

This often works for me: think of some smart, critical and rational friend that you have, and then imagine/visualize presenting your argument to them. Or suppose that you just put up your argument on the Internet to be critiqued. Is any part of your reasoning such that you'd actually prefer not to present it in public, knowing that it won't hold up to scrutiny?

For me at least, if I imagine presenting my reasoning to someone else, I suddenly become a lot more conscious about its weak spots. And then I try to remind myself that if I can't think of a reason why those weak spots should hold up to scrutiny, my instinct should be to abandon the argument instead of just hoping that those problems won't come up.

Replies from: Ezekiel
comment by Ezekiel · 2012-05-27T18:09:09.542Z · LW(p) · GW(p)

Anecdotal supporting evidence: In my last days as a religious person, I found myself imagining myself presenting my pro-religion arguments to the author of the Sequences, and literally could not fantasize a scenario where he found them convincing.

comment by JGWeissman · 2012-05-25T18:43:00.636Z · LW(p) · GW(p)

"Well," says Lucy, "I suppose it could have played a role in suggesting that I eat a whole chocolate cake, but the reason why I decided to do it was to support the sugar industry. Lots of people have jobs in the sugar industry, and they've been having some trouble lately."

The obvious response to Lucy is "Is buying and eating that whole chocolate cake really the best way to help the sugar industry? How does it compare to other strategies, like directly giving them money?"

The generalization is that when you say you are taking action A in pursuit of goal G, ask what other actions might be more effective at achieving G.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-25T19:14:46.134Z · LW(p) · GW(p)

Yeah.

I also find that getting into the habit of imagining my response to the putative cause turning out not to be true can help.

That is, suppose I suddenly discover that the sugar industry is actually doing really well, but the broccoli industry is in deep financial trouble. Does my decision about the cake change in response? If not, it's likely that putative cause isn't actually causal.

Replies from: JGWeissman
comment by JGWeissman · 2012-05-25T19:27:00.694Z · LW(p) · GW(p)

suppose I suddenly discover that the sugar industry is actually doing really well, but the broccoli industry is in deep financial trouble. Does my decision about the cake change in response?

This is also a good response, though outside the pattern I described. Its generalization seems to be that when you are taking an action to help some group of people, ask whether you would take actions that similarly help some other group of people. So it is a somewhat narrower technique, applying when the goal is helping some group of people. Perhaps you could generalize it further.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-25T19:34:14.218Z · LW(p) · GW(p)

(nods) I would say, rather, that the generalization is that when I believe I'm taking A in response to condition C, ask whether my desire to do A would vary if C were radically altered. If changes in C don't correspond to changes in A, it's likely that A is not actually a response to C.

comment by daenerys · 2012-05-29T17:36:57.698Z · LW(p) · GW(p)

Set your baseline to always assume you are rationalizing, and then have to prove that you aren't, rather than vice versa.

Something I do, that I'm surprised I don't see mentioned here, is to just assume that any point I am trying to make, or anything I think, is a rationalization (which is pretty likely).

So instead of having to think "Am I rationalizing?" (which can be hard to figure out), I change my baseline, to ALWAYS assume "I am probably rationalizing. How am I rationalizing?" and go from there. Sort of a quick run-through of what biases and semi-hidden desires could be influencing my decisions or statements at any given time. From there I can either accept or reject these "rationalizations".

This also ends up leading to many disclaimers in conversations, as mentioned in the OP. (i.e. "Well I can't know for sure what I HAD thought, because by now hindsight bias has taken hold...." or "Well, I'm completely anchored off your estimate now, but...") I see "Conversation Disclaimers" themselves to be a major skill. Maybe an exercise could be made out of that?

Quick idea: Have people in pairs have a conversation about a debatable subject. Every sentence they say has to be prefaced with a disclaimer.

(Note: This is my immediate reaction to this post. I'll give it more thought later.)

Replies from: daenerys
comment by daenerys · 2012-05-29T20:36:21.894Z · LW(p) · GW(p)

Exercise Idea- Rationalization Listing

Ask leading questions, or have everyone come up with a list of premises they think are true. (Examples- It's good to be vegetarian/paleo/omni; Political Belief System X is strongest; I enjoy Activity B; Continuing grad school is good/bad idea; etc)

Once they have developed this list, have them list as many reasons as they can that they are actually just rationalizing. (i.e. have them ASSUME they are rationalizing, and then have them list ways this could be possible) The person who comes up with the MOST, wins.

Example (This is a real one for me)-

Premise: It's good to be vegetarian

Think: "I am probably rationalizing. How? Why? Opposing evidence?

Possible rationalizations: 1) I've been a vegetarian for a long time, so consistency bias wants me to continue to be one
2) Admitting I'm wrong would mean that I've been wrong for the past 8 years.
3) Being a vegetarian allows me to hold moral high ground, despite actual morality, or lack thereof, of the choice.
4) Deciding that vegetarianism is NOT a moral choice would mean that I have no reason to remain one. It's easier for me at this point to maintain the status quo, than to switch my diet back to omnivorous.
5) Social pressure- People I like and respect think that vegetarianism is a good choice, even if they themselves aren't.

Arguments against:
1) Animals' life in nature is also completely horrendous. Factory farmed animals may actually have it better off than the average animal in nature.
2) Animals are not sentient enough that I should care overly much for their well-being.
3) Meat is yummy (or at least it was at one point...Now I don't much like the smell anymore)

At the end, they can pair up and try to add MORE rationalization possibilities to their partner's list.

Replies from: pnrjulius
comment by pnrjulius · 2012-06-09T00:15:26.303Z · LW(p) · GW(p)

I'm concerned that this technique will just make people able to come up with lots of bad reasons to do things, thus making them better at rationalization. I feel like we would be better off encouraging people to come up with good reasons, and then perhaps comparing them to bad reasons.

comment by Vaniver · 2012-05-26T01:23:01.535Z · LW(p) · GW(p)

Our conceptual understanding of 'motivated cognition', and why it's defective as a cognitive algorithm - the "Bottom Line" insight.

"Defective" isn't quite enough; you want a prescription to replace it with. Saying "this is a bad habit" seems less useful than saying "here is a good habit."

There are two obvious prescriptions I see: provide correct rationales for decisions, or do not provide rationales for decisions. Which prescription you shoot for has a radically different impact on what exercises you do, and so should be talked about quite a bit. It may be desirable to try and wipe out rationalization, first, and then progress to correct rationales.

One exercise might be asking "who will this convince?" and "whose desires do I want to maximize?". Lucy probably doesn't actually expect Marvin to be swayed by the plight of Big Sugar, and probably doesn't actually suspect that Marvin will believe she's motivated by the plight of Big Sugar, and so that deflection may be the wrong play because it's not credible.

It seems to me that social incentives will swallow most internal incentives here. If I can get more out of others by rationalizing, then it may be a losing move for me to not rationalize - and so it may be more profitable to focus specifically on internal desire-desire conflicts. If Marvin will buy the cake for Lucy if she gives Marvin-optimized reasons, then Lucy should seek to determine whether or not she wants the cake for Lucy-optimized reasons and then present the case to Marvin in terms of Marvin-optimized reasons.

When Lucy senses one desire to eat a whole chocolate cake, and comes up with the sugar industry reason, perhaps Lucy should ask which Lucy that represents (Altruist Lucy) and which other Lucys want to have a say on the issue. Thin Lucy and Cheap Lucy might both think that Lucy shouldn't buy the cake, and Sweet Tooth Lucy wants that cake.

And when Lucy simulates their internal discussion, she quickly realizes that Altruist Lucy doesn't actually care much about the cake issue, compared to the other three. If Altruist Lucy was fully modeled here, she'd probably side with Cheap Lucy (as those dollars can do more good elsewhere). And so the question is what tradeoff Lucy wants to make between the preferences of Thin Lucy, Cheap Lucy, and Sweet Tooth Lucy.

Notice that the rationalization is an explicit call for alliances or disguise in this model. Only three Lucys are really interested - and the weight is against the cake - but Sweet Tooth Lucy can call in other Lucys by constructing arguments that tangentially involve them. That should be a costly move - at the beginning of a decision, Lucy should determine which Lucys are most relevant to the decision, and then be skeptical of attempts to bring in other Lucys.

The first exercise would be labeling the desires involved in a decision. I suspect there will generally be at least three, but in some decisions one or two will dominate. It might be useful to start with decisions where one desire dominates, and then move to where two desires agree, and then three desires agree, and then start introducing conflicting desires.

Jack tripped, and is falling. He notices a desire to stop his fall.

Healthy Jack wants to not get hurt.

Jack tripped, and is falling, within sight of his girlfriend. He notices a desire to stop his fall.

Healthy Jack wants to not get hurt and Impressive Jack wants to not make a fool of himself. They agree on the recommended action.

On a lazy Saturday afternoon, Jack notices a desire to do a mildly dangerous trick in front of his girlfriend.

Impressive Jack wants to show off, and Healthy Jack wants to not get hurt. They disagree on the recommended action.

The second exercise would be declaring other desires as invalid (or possibly valid). This one seems like it could be done either as a worksheet - "does Cheap Jack have anything important to say about Jack tripping, conditioned on Healthy Jack and Impressive Jack already being in the discussion?" - or better yet, socially, in which someone describes a recent decision they faced, which three desires they thought were the most important, and then their partner / other members of the group seek to argue for the inclusion of other desires. It's not yet clear how to get a good balance of suggestions that should be shot down and suggestions that should be considered more deeply, and assigning any sort of points to performance in this exercise could cause motivated cognition, which is bad.

The third exercise would be finding a quick way to resolve this competition between desires. This seems the area where it's hardest to be prescriptive- different methods will fit different minds. Here are a few I can think of:

  1. Summarize each desire's case in a single sentence, put all the sentences next to each other, and choose one side or the other.

  2. Summarize each desire's case in a single sentence, then go with the one that's most compelling.

  3. Summarize each desire's case in a single sentence, assign each a weight, and then randomly determine which desire to go with (using the weights); see the sketch after this list.

  4. Take the proposed courses of action, and then find compromises along the axis of each desire. Cheap Lucy could be satisfied more and Sweet Tooth Lucy only a little less if Lucy just bought a bag of sugar and ate some of it. Thin Lucy could be satisfied more and Sweet Tooth Lucy only a little less if Lucy bought a cake made with Splenda instead of sugar. Imagine the expanded alternative set and choose from one of the options in it.

I'm sure there are more.
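For option 3, a minimal sketch of the weighted random choice (in Python; the weights are invented purely for illustration) might look like:

```python
import random

# Hypothetical weights Lucy assigned after summarizing each desire's case.
desires = {
    "Sweet Tooth Lucy: the cake would be delicious": 0.5,
    "Thin Lucy: a whole cake blows the diet": 0.3,
    "Cheap Lucy: that money could do more good elsewhere": 0.2,
}

# Pick one desire to act on, with probability proportional to its weight.
winner = random.choices(list(desires), weights=list(desires.values()), k=1)[0]
print("Acting on:", winner)
```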

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-05-29T03:10:18.581Z · LW(p) · GW(p)

"Break down what your parts have to say into parts" would be an interesting counter to rationalization - I think I'll have to call this an immediate $50 award on the grounds that I intend to test the skill itself, never mind how to teach it.

Replies from: shminux, Vaniver
comment by Shmi (shminux) · 2012-05-29T20:04:53.584Z · LW(p) · GW(p)

"Break down what your parts have to say into parts"

I thought that was your reason for writing HJPEV's internal 4-way dialog.

comment by Vaniver · 2012-05-29T03:17:46.702Z · LW(p) · GW(p)

"Break down what your parts have to say into parts" would be an interesting counter to rationalization - I think I'll have to call this an immediate $50 award on the grounds that I intend to test the skill itself, never mind how to teach it.

Awesome!

comment by HonoreDB · 2012-05-25T19:50:00.012Z · LW(p) · GW(p)

My girlfriend says that a common case of motivated cognition is witnesses picking someone out of a lineup. They want to recognize the criminal, so given five faces they're very likely to pick one even if the real criminal's not there, whereas if people are leafing through a big book of mugshots they're less likely to make a false positive identification.

She suggests a prank-type exercise where there are two plants in the class. Plant A, who wears a hoodie and sunglasses, leaves to go to the bathroom, whereupon Plant B announces that they're pretty sure Plant A is actually $FAMOUS_ACTOR here incognito. Plant A pokes his head in, says he needs to go take a call, and leaves. See who manages to talk themselves into thinking that really is the celebrity.

comment by Emile · 2012-05-26T22:54:29.540Z · LW(p) · GW(p)

OK, here's an exercise that could at least help people notice how motivated cognition gives us incorrect beliefs; it's like a mix between calibration and paranoid debating.

Requirements

This exercise requires a set of "interesting questions" with a numerical answer, for example "How many people were killed by guns in New York in 1990-1999?" or "What is the unemployment rate in China?" (the questions should be at least related to political/social issues, no "What's the population of Mozambique").

It is also best done in a classroom-type place with a big video projector, and a bunch of computers with internet connections. Somebody who knows Excel will also need to prepare a special spreadsheet.

Step one: Crafting Arguments

Students are together in a room, with one computer each (or they can take turns using a computer, timing isn't critical); an organizer goes to each student and gives him a paper with the question, then flips a coin and announces "high!" if it's heads, and "low!" if it's tails.

Each student then has 30 minutes to prepare a set of arguments for why that particular value is high, or low. The result should be one powerpoint slide (or maybe better, the Google docs equivalent), containing his best arguments; he is allowed to look anything up on the internet (including the true value), but his slide can only contain true information ("New York was rated the most violent city by XXX magazine", things like that).

Step two: Everybody guesses

Once everybody is ready, the organizers collect all the argument powerpoint slides; for each one of them: the question is read aloud, and then the list of arguments is displayed, as well as whether those are arguments for a high or a low value. Each student (except the arguer) writes down his best guess for the answer to the question, as well as a 90% confidence interval; they should be given about thirty seconds.

Once the time is up, everybody reads his answer out loud to an organizer who enters them into Excel and immediately gets a nice chart (projected on the wall), with everybody's answers and confidence intervals compared to the true answer, and then a scatterplot comparing how narrow people's intervals were against how close they were to the answer.

The clever arguer gets points for how many estimates were too high (or too low, matching the side he argued), and for how wide the confidence intervals were on the side he argued; the others get points for the probability they assigned to the correct answer (log scoring rule, etc.)

Comments

If this works (if people do indeed bias their estimates despite knowing that the arguments are the result of the flip of a coin, and if they learn to correct this as the exercise progresses), it should give participants a "gut level" understanding of why motivated cognition gives wrong belief (Our conceptual understanding of 'motivated cognition', and why it's defective as a cognitive algorithm - the "Bottom Line" insight.).

This exercised is designed to make the feedback as close as possible to the estimation, to make learning stronger. Having it in a slightly competitive setting also discourages people from just giving huge confidence intervals.

It may be interesting to first collect a bunch of estimations from people who won't participate, just to compare them to the student's estimates on the same questions, so that the students can then be shown the difference between "how people guess with the bottom line argument" and "how people guess without the bottom line argument" (normally, they should guess worse).

A simple variant is having yes-or-no questions, and students giving probability estimates.
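For that yes-or-no variant, the log scoring rule mentioned above could be computed with something like this minimal sketch (the function and the example numbers are mine, purely illustrative, not part of the proposed spreadsheet):

```python
import math

def log_score(prob_assigned_to_correct_answer):
    """Log scoring rule: the score is the log of the probability the student
    assigned to the answer that turned out to be correct.  Perfect confidence
    in the truth scores 0; everything else scores negative."""
    return math.log(prob_assigned_to_correct_answer)

# Suppose the true answer is "yes".  A student who said 80% "yes" beats
# one who said 30% "yes" (i.e. 70% "no"):
print(round(log_score(0.8), 2))  # about -0.22
print(round(log_score(0.3), 2))  # about -1.2
```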

Another variant is to have multiple choice questions (ideally with six possible answers, so the organizers can roll a die); this simplifies the guessing (no more confidence intervals!), and questions about numerical values can be transformed into multiple choice questions with a list of intervals.

comment by casualhero · 2012-05-25T20:08:15.739Z · LW(p) · GW(p)

We might also need an exercise just for getting people to understand the concept of motivated cognition at all.

"Motivated cognition" in the first place seems like a poor label because most thinking is motivated. It's redundant and arguing against "motivated cognition" at first glance sounds like arguing against any kind of motivated thinking. That's problematic because good thinking is also motivated, i.e. "I'll invent FAI because it would help the world."

One interesting thing I've heard repeated and found to be true about rationalizations is that you can usually get the truth out of someone by asking them a canned line: Is there any other reason? I'm sure this is from some Dale Carnegie book or something, but my guess is that most people don't feel the need to come up with multiple rationalizations. They usually come up with one "good" one and try to stick with it. "Is there any other reason why you want the cake?" "Well, I also really love chocolate cake." "Is there any other reason you don't want to go to [Event] besides being busy?" "Well, my ex will be there, too." By no means an airtight method but still useful for getting other people to tell you what they're really thinking.

"If the sky is blue, I want to believe the sky is blue[.]" The question is why people don't want to believe what is true, and the areas of focus here should be the same as with self-deception and procrastination. Irrational people believe that if they can't see/hear/feel/recognize/know it, it can't hurt them -- self deception -- and even in those moments where some thought tells them this is irrational or that they must eventually face the truth, they put it off -- procrastination, preferring the short term over the long.

I think the experience of "unseeing" or "trying not to see" is common for kids, as when we thought of the monster in the dark shadows. On nights when we just shut our eyes and go to sleep there is no monster. On nights where we keep our eyes open and look around, there suddenly is one. So the most important factor in this pre-rational understanding of the world is association -- "I looked, there seemed to be a monster; I didn't look, there didn't seem to be a monster. Therefore looking determines whether there is a monster." At some level we recognize that "the monster" is not a monster, but a trick of shadows, a trick of perception. If we don't look, we also undo the monster itself. This is a detrimental lesson, at this level of understanding. But basically everyone, even kids, recognizes this as self-deception. If there is a monster in your closet, crawling under the covers is like making a tasty kid-burrito. If you had actually seen, rather than suspected, a monster, you would run/fight/scream.

Even though someone might logically reason that there isn't a monster or that there isn't a god or that their moral views are contradictory, there will still be a compulsion to lie to themselves because it's easier in the short term. People procrastinate against facing the truth like they procrastinate against writing essays. The kid doesn't turn on the light because what if the monster is actually real even though he knows it isn't? He'll be more scared than he is inside the shelter of covers, even if the monster turns out to be fake, so he prefers to sit right where he is.

The way to deal with this situation is to apply the methods against self-deception and then procrastination. The unfortunate part of that is, it seems some people never get over the hurdle of procrastination.

Replies from: billswift, thomblake
comment by billswift · 2012-05-25T20:35:52.438Z · LW(p) · GW(p)

I would like to suggest "motivational bias" as an alternative name; it is a more accurate description than "motivated cognition", which is much too general.

I don't see spending too much time investigating things you don't want to be true as too bad a problem - a bit wasteful, but that's it. "Motivated stopping" is more likely to lead you astray; you need to remember to keep questioning things you agree with. In fact, this is a common bias that is often exploited by con artists, for example, and a good rule is a generalization of the common self-defense against con artists contained in the phrase, "If it looks too good to be true, it probably is." If you are immediately tempted to accept or agree with an argument, take a second look.

comment by thomblake · 2012-05-29T17:12:04.946Z · LW(p) · GW(p)

I'm sure this is from some Dale Carnegie book or something, but my guess is that most people don't feel the need to come up with multiple rationalizations. They usually come up with one "good" one and try to stick with it.

Interesting and useful if true

comment by Shmi (shminux) · 2012-05-28T19:16:00.764Z · LW(p) · GW(p)

how to verify that a skill has been acquired

I suppose that one way to test #2 and #3 is to ask the participants (informally, so it does not appear like a test) how useful or successful they think some other skill learning activity was. This is probably a part of the procedure already. There is a significant pressure to say that "they learned a lot and are now better at it", due to several biases, including motivated cognition. When asked to elaborate, they are likely to put forth a number of arguments. To a trained eye, it should be easy to spot whether a particular argument is in fact a rationalization. For example, any immediate supporting argument that is not a specific example of successfully using the relevant skill is very suspect. Some of the better replies would be along the lines of "Wait, how do I know whether this activity was helpful?" and "OMG, I caught myself wanting to rationalize that it was! I guess I did learn something, after all!"

comment by Grognor · 2012-05-28T06:09:07.944Z · LW(p) · GW(p)

One of the motivations of motivated cognition is consistency. People want to be predictable and they want to be seen as stable. So I suggest demotivating it. Have people read chapter 3 of Cialdini's Influence. I particularly like the Emerson quote:

A foolish consistency is the hobgoblin of little minds.

Yes! Teach that preferring consistency over accuracy is low-status!

I predict this will have no effect unless coupled with the following technique:

Detachment. You are not your beliefs; you are not your actions: When the opportunity to think arises, within five seconds, think those words.

Replies from: thomblake, hamnox, pnrjulius
comment by thomblake · 2012-05-29T17:21:50.344Z · LW(p) · GW(p)

I always make sure to point out, to those who hear that quote but have not studied Emerson, that "foolish consistency" does not mean "consistency that is foolish" but rather "consistency, which is foolish".

comment by hamnox · 2012-05-28T15:22:03.417Z · LW(p) · GW(p)

muses I wonder how this might combine or clash with non-conformity training. Sticking to your guns against public ridicule is a form of consistency, and it does have the danger of running at cross purposes with the virtue of lightness...

Can you really teach being ready to turn that dedication on a dime if the evidence blows that direction? Being faithful to THE truth instead of A truth? This suddenly sounds like a much harder task.

comment by pnrjulius · 2012-06-09T00:16:36.194Z · LW(p) · GW(p)

Personally I just think Emerson often contradicted himself and didn't want to bother correcting his mistakes, so he came up with a way of making self-contradiction seem deep.

comment by Furslid · 2012-05-29T16:21:09.946Z · LW(p) · GW(p)

One of the key markers of rationalization I've seen is that rationalizations ignore tradeoffs and other options. This is obviously true only of rationalizations about actions and policies. For instance, "I want to eat the whole cake to help the sugar industry..." never finishes "...and this help to the sugar industry is worth any ill health effects." or "...and this is more efficient than other ways to help the sugar industry."

One activity that might help is to give people a plausible proposition, relevant to their own life, that they do not follow. So "Veganism is the optimal dietary option." could be given to someone who eats meat. Have them argue for it, telling them that they will only be evaluated on the persuasiveness of the argument. After this is done, ask them what the costs and tradeoffs are for that position. Also ask them what other alternatives there are to achieve the goals and values they hoped to achieve.

This can provide an example to the participants of what rationalization and the results of rationalization look like. It also provides a demonstration of the efficacy of a couple of questions to catch rationalization.

comment by beoShaffer · 2012-05-26T18:54:47.822Z · LW(p) · GW(p)

I notice that you don't mention any existing studies. Is there a reason for this? A fairly cursory search (no backtracking of citations, or significant effort to use synonyms) brings up several relevant articles (mostly paywalled) and a book. It doesn't seem like a super well-studied field, but I don't see why it's being completely ignored. The (semi) relevant stuff I found:

The Human Brain as an Evolved Rationalization Machine http://www.epjournal.net/wp-content/uploads/EP102934.pdf

Rational processing or rationalization? The effect of disconfirming information on a stated religious belief.

Self-interest masquerading as ingroup beneficence: Altruistic rationalization and interindividual–intergroup discontinuity.

Reactance versus rationalization: Divergent responses to policies that constrain freedom.

Moral credentialing and the rationalization of misconduct

Rationalization of indigenous male circumcision as a sacred religious custom: Health beliefs of Xhosa men in South Africa.

Regret and rationalization among smokers in Thailand and Malaysia: Findings from the International Tobacco Control Southeast Asia Survey.

:edit spacing

:edit to add I'm currently trying to write a Main-level post about current research on rationalization. In doing so I've found that many of the articles in the above list aren't actually that relevant; on the other hand, I am finding a good bit of useful information, so my core point still stands.

comment by Xom · 2012-06-04T17:11:57.197Z · LW(p) · GW(p)

On the other hand, LW!Hermione failed to reproduce this experiment

I'm not sure how wide of an audience this post is targeting, but the !-notation feels gratuitous here. How about:

On the other hand, Hermione (LW user) failed to reproduce this experiment

comment by Morendil · 2012-05-27T17:36:32.604Z · LW(p) · GW(p)

The title strikes me as slightly problematic: I think of skills as (positive) ways to achieve outcomes, and framing "avoidance" of some mistake as a skill doesn't feel quite right.

Replies from: handoflixue
comment by handoflixue · 2012-05-30T20:29:07.351Z · LW(p) · GW(p)

Agreed - when you state it that way, it makes me realize that I don't know what the ideal state, the opposite of motivated cognition, actually looks like, or how to refer to that directly.

Which then makes me suspect that simply having a clear model would help me build the skill.

("apathetic cognition" is a fun reversal of the name :))

comment by JoachimSchipper · 2012-05-27T11:27:07.079Z · LW(p) · GW(p)

Rationalizing anything: ask participants questions like "why did you have that cake?" or even "yesterday, you hit your kid. Why?". These questions should not be based on reality, and the instructor(s) should probably do the first two or three rounds themselves to get people used to these kinds of questions. Despite the fact that the facts are blatantly false, the participant should come up with a reason that sounds as impressive as possible. ("Yes, I hit my child yesterday. Fourteen generations of Smiths have been brought up that way and gone on to be successful businessmen, politicians and lawyers; I'll be damned if I let some wishy-washy state nanny tell me what to do.")

This may teach people to recognize rationalizations in themselves and others.

comment by Shmi (shminux) · 2012-05-26T00:12:26.590Z · LW(p) · GW(p)

I am somewhat surprised that I cannot find honest Devil's advocacy anywhere on the list of techniques for spotting rationalizations. I understand that EY dislikes it, because it's easy to "invent arguments for anything", but how easy is it to invent a good argument against something you deeply believe in? And by "good" I mean an argument that does not appear silly at first or even second glance (so, no "chocolate cake in the asteroid belt" nonsense). Or maybe this is covered by the "The world is not like X and I believe X" quadrant, though not explicitly.

To me, one of the best examples of the technique is popularized in the ST:TNG episode The Measure of a Man.

Replies from: wedrifid, handoflixue, private_messaging
comment by wedrifid · 2012-05-26T05:36:13.108Z · LW(p) · GW(p)

I understand that EY dislikes it, because it's easy to "invent arguments for anything", but how easy is it to invent a good argument against something you deeply believe in?

Is this intended to be rhetorical? The literal answer seems to be "very easy, particularly with practice". Some people really are skilled at bullshit. Typically here I refer to MENSA mailing lists for examples.

comment by handoflixue · 2012-05-30T20:54:14.507Z · LW(p) · GW(p)

The ability to create arguments for either side is fundamental to "debate club" and what passes for national news media in the United States. Indeed, both groups are quite skilled at insisting that all issues have exactly two sides, which are completely equal in merit.

I can't see much reason to value the exercise in light of such an astonishingly BAD track record...

I do believe it's important to see how easy it is to rationalize something; it's just not useful to practice (unless you're looking to do writing or some other endeavor where "what if" scenarios are actually an important skill).

Speaking from my own personal experience, I can explain away ~90% of plot holes in kids cartoons as being completely reasonable. (Percentage calculated based on my friends having genuine "okay, wow, that makes sense - you're so smart!" reactions.)

comment by private_messaging · 2012-05-28T06:43:21.548Z · LW(p) · GW(p)

because it's easy to "invent arguments for anything"

if it is easy to invent arguments for anything, then your definition of what constitutes an argument is too broad and includes nonsense.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-05-28T10:11:11.531Z · LW(p) · GW(p)

It's easy to invent bad arguments for anything, including arguments that, when you are motivated to produce them, you may not notice are bad arguments. Limiting "argument" to mean good arguments is too narrow a definition. As always, the definition of "argument" is not the point here.

comment by JGWeissman · 2012-05-25T18:34:20.125Z · LW(p) · GW(p)

Val also used an upside-down W-diagram with the two worlds at the top and the four beliefs at the bottom, to emphasize the idea that the world is there first, and is fixed, and we have only a choice of what to believe within a fixed world, not a choice of which background world to live in.

I don't know what a "W-diagram" is (and a simple Google search didn't help), so I don't see how this works. Perhaps a picture could help explain.

Replies from: TheOtherDave, Mercurial
comment by TheOtherDave · 2012-05-25T18:39:00.187Z · LW(p) · GW(p)

I don't know either, but I observe that an upside down W resembles a segment of a conditional tree with two points at the top and either three or four at the bottom, which sounds similar to what the OP is describing.

Replies from: CuSithBell, JGWeissman, cousin_it
comment by CuSithBell · 2012-05-27T00:38:46.417Z · LW(p) · GW(p)

I don't know either, but I observe that an upside down W resembles

... an M...?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-27T02:50:44.750Z · LW(p) · GW(p)

Why yes! Also a sigma on its side.

Replies from: tgb
comment by tgb · 2012-05-29T02:41:49.580Z · LW(p) · GW(p)

I found the above two comments particularly funny because I had just spent the last couple minutes reading this with the vague feeling that "an upside-down W" would be better described as some other letter but I had been continuously procrastinating actually figuring out what that letter was.

comment by JGWeissman · 2012-05-25T18:52:47.562Z · LW(p) · GW(p)

Ah, I think I see it. So on top it has the possible worlds, and on the bottom each world branches into the possible consequences in that world of having each of the possible beliefs.

Replies from: Mercurial
comment by Mercurial · 2012-05-26T22:46:11.923Z · LW(p) · GW(p)

Yep.

comment by cousin_it · 2012-05-25T19:14:36.049Z · LW(p) · GW(p)

An upside down W looks like an M to me.

comment by Mercurial · 2012-05-26T22:45:42.163Z · LW(p) · GW(p)

To clarify, I actually think of it with two upside-down 'V' shapes. I imagine one off to my left in the world where X is true and observe the two possible outcomes based on what I believe is true in that world, and then I look off to my right to see the upside-down 'V' in the world where X is not true and consider the alternatives.

I should also add that I have to put all four representations in near-mode. I view this whole process as a way of getting my brain to emotionally get that yes, it's better to have true beliefs even if they describe a world I'd rather not be in.

(To clarify, in case this isn't obvious: I'm Valentine.)

Replies from: Alicorn
comment by Alicorn · 2012-05-26T22:52:53.038Z · LW(p) · GW(p)

I think you should just register the username "Valentine" and use it, now before you acquire more comment history under this name.

Replies from: Valentine
comment by Valentine · 2012-05-26T23:04:44.074Z · LW(p) · GW(p)

Point taken. Done.

(I had been quietly hoping to see if it's possible to change my account name from "Mercurial" to "Valentine," but I planned on doing this in the ephemeral land of "later." This just works. Thanks for pointing that out!)

Replies from: Kevin
comment by Kevin · 2012-05-28T00:45:37.784Z · LW(p) · GW(p)

Pretty sure that the software doesn't allow for name changes, but an admin can transfer your karma to a new account if you really want it.

Replies from: Valentine
comment by Valentine · 2012-05-28T22:02:31.175Z · LW(p) · GW(p)

I don't think I care that much about karma except for the "can post articles" limit, and that doesn't seem too difficult to reach.

comment by pnrjulius · 2012-06-09T00:10:00.999Z · LW(p) · GW(p)

A mantra I use: "I want things to be a certain way. But I don't want to believe they are. I want to believe what is true."

A little more explanation: There was a time when I would say to myself things like, "I want to think that people are basically good." or "I want to believe that my life has not been a mistake." But now I realize that what I want is actually for people to be basically good; I don't want to think that, not if it isn't true. I don't want to believe that my life has not been a mistake; I want my life to not be a mistake. I used to think of the two as almost synonymous; but in fact they are radically different.

In terms of an exercise, it would be a start to get people to list things that would be nice if they were true but probably aren't---e.g. "I will win the lottery tomorrow", "the US federal budget is balanced", "I have everything I want in my life". Then go through each one, and ask: "Do you want to believe that, knowing it's not true? Or do you just want it to be true? Is your desire about your beliefs, or is it about reality?" Some of the things on the list may be things that you can make true, in which case the next step is, "Go do it." For the others, maybe we could work on techniques for helping people accept unfortunate realities.

comment by palladias · 2012-06-03T00:31:27.755Z · LW(p) · GW(p)

The Retrench or Retreat Game

You're making a decision and want to check if you're rationalizing. Imagine a friend or someone else whose opinion you respect raises a criticism of/question about your stated reason. Which feels easier?

  1. Falling back to a different justification
  2. Arguing with the friend to defend yourself

Maybe this is better as a "True Rejection" exercise, but if you find it a lot easier/more comfortable to shift arguments, your surface answer may have been a rationalization.

comment by Stefie_K · 2012-06-01T20:45:38.404Z · LW(p) · GW(p)

The original post mentions some techniques for getting people to avoid rationalizing once they've realized they're doing it, but an earlier step is to get them to realize that they're doing it.

The key to this may be that a person who is rationalizing without realizing it is arguing with him/herself without realizing it, since it's easier to recognize (and to accept) that you're arguing than that you're rationalizing. Accordingly, getting people to realize that they're rationalizing would involve getting them to realize that they're the one that they're arguing with.

The 5-second-level goal would be simply to get them to realize that they're arguing, if that itself is an issue. The next step, getting them to recognize that they're arguing with themselves, would take longer for some people.

(Rationalizers can be distinguished from people who are arguing honestly with themselves in that a rationalizer cares which "one of them" wins, and the non-rationalizer doesn't.)

Step one: recognizing that you're arguing:

If this is an issue, the question to ask is whether the potential rationalizer is thinking about reasons. If you're thinking of reasons, you're mentally arguing with someone. When you want to get a glass of water, you generally don't think about any reasons why you should or shouldn't. When you want to get a soda or a cup of coffee, though, you might think about reasons relating to the cost and/or health of the beverage. If so, you're arguing, whether with yourself, or with a mental representation of a friend who suggested drinking less soda/coffee, or whoever.

Step two: determining who you're arguing with:

This step would work best as a series of questions. First, who do you think you're arguing with? Is it a specific person? Is it a hypothetical person?

Why does this person disagree with you? What alternative position do they take, and why that one? What kind of person is it that you're arguing with?

What exactly are this person's arguments, that you're arguing against?

How much do you want to win the argument with this person? Why?

Any suggestions on other questions it would be good to ask in Step two? Personally, I tend to notice it if I'm rationalizing, so I'm not entirely sure how someone who doesn't notice would respond to these questions.

Replies from: Stefie_K
comment by Stefie_K · 2012-06-09T17:32:00.658Z · LW(p) · GW(p)

To follow up on my post:

The original post talks about noticing flinches and attachments, which is similar to what I discussed above. However, I would expect it to be a lot more difficult to notice the flinch itself than it would be to notice the aftereffects, because the flinch is one moment, and the aftereffects last. (At least, when I catch myself doing it, the flinch is a single moment, and then the rationalization normalizes very quickly unless I act to counter it.)

The momentary nature of the flinch not only makes it harder to notice, but also makes it more difficult to teach people to notice flinches.

There may well be a better approach to this than the one I suggested, but I have to think that exercises/activities that focus on the aftereffects would work better than ones that depend on catching that flinch.

comment by chepin · 2012-06-01T11:25:01.423Z · LW(p) · GW(p)

EXERCISE

Ask the person to do an unpleasant task (e.g., washing dishes while standing on a slippery floor), which will create an unconscious desire to finish the task quickly.

While performing the task, the person answers questions that can be TRUE or FALSE. The task only ends once the person has given a certain number of TRUE responses, but wrong answers decrease their score on the task.
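A minimal sketch of one reading of that scoring rule - answering TRUE is what brings the unpleasant task closer to ending, while incorrect answers are penalized - with purely illustrative constants and input format:

```python
# Sketch only: the thresholds and the T/F input convention are assumptions,
# not part of the original suggestion.

TRUE_ANSWERS_TO_FINISH = 10   # the unpleasant task ends after this many TRUE responses
PENALTY = 1                   # each incorrect answer lowers the score

def run_quiz(questions):
    """questions: iterable of (statement, is_actually_true) pairs."""
    true_count = 0
    score = 0
    for statement, is_actually_true in questions:
        reply_true = input(f"{statement} (T/F): ").strip().upper() == "T"
        if reply_true != is_actually_true:
            score -= PENALTY       # wrong answers are punished...
        if reply_true:
            true_count += 1        # ...but only TRUE answers bring the task closer to ending
        if true_count >= TRUE_ANSWERS_TO_FINISH:
            break
    return score
```

The tension between the two `if` branches is the point: the desire to finish pulls toward answering TRUE, while accuracy is what the score actually rewards.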

Replies from: pnrjulius
comment by pnrjulius · 2012-06-09T00:12:14.648Z · LW(p) · GW(p)

it affects the punctuation at the end.

What do you mean by that?

Replies from: chepin
comment by chepin · 2012-06-09T23:33:01.711Z · LW(p) · GW(p)

The comment has been modified; thanks for the observation.

comment by SarahNibs (GuySrinivasan) · 2012-05-30T19:49:51.618Z · LW(p) · GW(p)

I don't always have a problem with motivated cognition, but when I do, my brain usually makes it some or all of the way through the following steps:

  • Notice that I'm becoming more comfortable with (feeling safer about) a decision or action I'm about to make or just made.
  • Notice that the physical cause of my comfort is that I recently had a thought consisting of a reason the decision could have a good outcome.
  • Some brain process that I haven't pinned down yet, which feels like (and possibly is) a mix of noticing optimization by proxy, feeling disdain for non-generalizable reasoning algorithms, and wondering about the true component strength of the reason.
  • Apply my bullshit detector to my comforting thought.
  • If appropriate, begin to legitimately think about the decision or action.

If this is a procedure that will work more generally, then these exercises may help:

Replies from: GuySrinivasan, GuySrinivasan, GuySrinivasan, GuySrinivasan, GuySrinivasan, GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2012-05-30T19:51:46.844Z · LW(p) · GW(p)

Check Yourself For BS

Practice applying your bullshit detector on your own thoughts. The key observation is that it's not important to have bullshit thoughts on which to practice detecting bullshit - like asking yourself whether you're dreaming while awake to induce lucid dreams, the practice is having a mode where your bullshit detector is on and applied to your thoughts, so that it will actually fire when the time comes. So ask yourself, over and over, a) what your last thought was, and b) whether any part of it would have been flagged as BS if you saw the thought in someone else.

Exercise: Think about something, trying to come up with the right answer, maybe where you've previously stated your guess at the right answer. At some point the facilitator will ring a bell or an alarm will sound or something. Locate your previous thought (if this is hard, practice this skill). Ask whether it was bullshit. Repeat 7 times. Have a cue to do the same during the day, like whenever you hear anyone's cell phone ring or make an "I have a text!" noise.

comment by SarahNibs (GuySrinivasan) · 2012-05-30T19:51:31.351Z · LW(p) · GW(p)

Desire Generalizable Decision Processes

Have everyone read Kahneman's rant on picking the action with the best expected outcome in Thinking, Fast and Slow (chapter 31, Risk Policies, especially the sermon). Encourage people to play enough poker and read enough poker theory to become at least close to neutral-EV in Vegas. Experiencing the concept of pot odds in a real game was my strongest "passing up positive-EV moves is leaving money on the table no matter how loss-averse you are" learning moment.

One exercise might be to (several times) present a story in which someone makes a decision and ask participants to a) make up a near-mode explanation of why the decision feels legit, b) give a far-mode explanation of why the reasoning that led to the decision would be disastrous if everyone used it, and c) figure out what the broadest set of people/circumstances is that would allow that reasoning to be generalized and still work well.

Concrete example: Holden focuses on charities that are neglected by traditional funding. a) This is great because his marginal actions will actually be at the margin, not pushed back by some giant philanthropist who suddenly funds the whole charity. b) This is awful because if everyone focused on neglected charities, the most valuable charities would receive less than optimal focus. c) As long as very few people are actually applying this heuristic, there's no danger of high-profile valuable charities suddenly losing all of their focus. So if Holden makes his decisions according to c), he's doing great, but if just a), then his algorithm is flawed though its output might be correct in this case.
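For anyone who hasn't seen the expected-value arithmetic behind "passing up positive-EV moves is leaving money on the table", here is a toy pot-odds check; the numbers are made up and the function name is just for illustration:

```python
# Toy example: is calling a bet positive expected value?

def call_is_positive_ev(pot, call_amount, win_probability):
    """EV of calling: win the current pot with probability p,
    lose the call amount otherwise (a deliberately simplified model)."""
    ev = win_probability * pot - (1 - win_probability) * call_amount
    return ev > 0

# A $100 pot, $20 to call, ~25% chance of hitting your draw:
# EV = 0.25 * 100 - 0.75 * 20 = +$10, so folding leaves money on the table.
print(call_is_positive_ev(pot=100, call_amount=20, win_probability=0.25))  # True
```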

comment by SarahNibs (GuySrinivasan) · 2012-05-30T19:50:27.133Z · LW(p) · GW(p)

Noting emotions. Name your emotions, simply and without judgment, as you see them arise. Inspired by Mahasi noting, it may be enough of an exercise to simply state out loud a word describing any emotion you notice, taking care not to worry about whether your word is perfectly accurate; what matters is that you noticed something and named it, with the intent that if you had known the perfect descriptor word you would have used it. For ease, it may well be enough to simply say a word that means "I noticed and attended to an emotion just now." As you practice this, if you enjoy noticing the emotion regardless of the emotion itself, with any luck you'll begin to notice your emotions - and thus the "suddenly I'm more comfortable and feel safer" emotion - without conscious effort. As an exercise, perhaps put on a particularly emotional-roller-coaster piece of instrumental music, stopping it at random points and asking everyone to name their current emotion and the emotion they were feeling due to the music just before (presumably the in-the-moment emotion will often be annoyance at the music being interrupted).

comment by SarahNibs (GuySrinivasan) · 2012-05-30T19:51:59.513Z · LW(p) · GW(p)

Notice Rationalization Directly

If you have sufficient skill at noticing rationalization such that you recognize it maybe 1+ times per day, here is an exercise that User:thejash had me do which helped my brain latch on to recognizing rationalization much more regularly: carry a pencil and a bit of paper or notebook. Every time you notice yourself rationalizing, grin and make a mark on the paper. Every time, no exceptions, for an entire week. That's all. If you must, pretend like you'll use the data for something, but it's the noticing practice itself that is useful.

comment by SarahNibs (GuySrinivasan) · 2012-05-30T19:51:14.383Z · LW(p) · GW(p)

Avoid Optimizing Proxy Measures

I have a gut-level averse reaction to being comforted by a reason my decision or action may work out when I only saw that reason in order to feel safer. Here is a possible exercise to show students that optimizing proxy measures is a terrible general algorithm: hand out a worksheet of two questions to each student, asking them to answer individually. There are two possible worksheets. The first says "1. Describe in a short paragraph Vatican City. 2. Write any short paragraph you like containing the following concepts: bright, cyclic, white, round, and dappled." The second says "1. Describe in a short paragraph the appearance of the moon. 2. Write any short paragraph you like containing the following concepts: tiny, historic, beautiful, religious, gardens."

Then score every paragraph by giving it a score from 0 to 1 based on how well it mentioned each of the 5 concepts, and a final score from 0 to 5 by summing the scores, and note how the proxy measure is reasonable at picking out good descriptions (is it? playtest) if you don't optimize for the proxy measure but terrible if you do.
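A crude sketch of that scoring scheme, assuming (as a deliberately dumb proxy) that "how well a concept is mentioned" is just keyword presence; real playtesting would grade by hand:

```python
# Illustrative only: keyword presence stands in for the 0-to-1 per-concept grading.

def proxy_score(paragraph, concepts):
    text = paragraph.lower()
    return sum(1.0 if concept.lower() in text else 0.0 for concept in concepts)

moon_concepts = ["bright", "cyclic", "white", "round", "dappled"]
vatican_concepts = ["tiny", "historic", "beautiful", "religious", "gardens"]  # the other worksheet

genuine = ("The moon was nearly full last night: a bright white disc, "
           "round and faintly dappled with grey.")
stuffed = "My bright round white socks go through a cyclic wash and come out dappled."

print(proxy_score(genuine, moon_concepts))  # 4.0 - a real description does well
print(proxy_score(stuffed, moon_concepts))  # 5.0 - optimizing the proxy does better while describing nothing
```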

comment by SarahNibs (GuySrinivasan) · 2012-05-30T19:50:57.310Z · LW(p) · GW(p)

Trace emotion causes. Magically acquire a gut-level realization that emotions are physical events in your brain and those events have physical causes (which may be other emotions or thoughts, since they're physical). I acquired this gut-level realization during the 2011 minicamp from User:Academian. Then, when you do manage to notice an emotion and have a moment, practice asking yourself what the proximate cause of that emotion was until it becomes a not-uncommon practice upon noticing a surprising emotion. To practice, you first need a way to notice emotions - either develop this, or ask someone else to cue you in whatever way the two of you choose, and precommit to ask yourself at their cue a) what your strongest emotion is currently, and b) what the largest components of that emotion's cause happen to be. I'm not sure what a good exercise for this might be, since it feels like you legitimately need an unanticipated emotion.

comment by MarkL · 2012-05-31T16:36:00.955Z · LW(p) · GW(p)

Shinzen Young's "Do Nothing" Meditation:

http://www.youtube.com/watch?v=cZ6cdIaUZCA (Note that he is "neuroscience aware.")

http://www.shinzen.org/Retreat%20Reading/FiveWays.pdf

http://www.basicmindfulness.org/

(Also the discussion of "Willingness" in http://www.amazon.com/Acceptance-Commitment-Therapy-Second-Edition/dp/1609189620/)

The goal is to raise the baseline of equanimity and mindfulness. You naturally start allowing emotion and inner talk to thunder through you without letting it drive behavior. This is a prerequisite for rational deliberation and choice under increasingly emotional situations. Maybe.

comment by lavalamp · 2012-05-26T14:11:10.584Z · LW(p) · GW(p)

Whenever you find yourself about to express a belief, ask yourself if you really believe it first. This works for me, but may not work for people who aren't feeling curiosity or who aren't already in the habit of being honest with themselves.

...often that thought causes me to not know what I believe.

Of course the hard part is figuring out a way to reliably put people in a situation where they will find themselves rationalizing. Social pressure would work on a lot of people, but probably not so well on LW types.

comment by HonoreDB · 2012-05-25T19:14:56.082Z · LW(p) · GW(p)

This seems like it'll be easiest to teach and test if you can artificially create a preference for an objective fact. Can you offer actual prizes? Candy? Have you ever tried a point system and have people reacted well?

Assume you have a set of good prizes (maybe chocolate bars, or tickets good for 10 points) and a set of less-good prizes (Hershey's kisses, or tickets good for 1 point).

Choose a box: Have two actual boxes, labeled "TRUE" and "FALSE". Before the class comes in, the instructor writes a proposition on the blackboard, such as "The idea that carrots are good for your eyesight is a myth promoted as part of a government conspiracy to cover up secret military technology" or "A duck's quack never echoes, and nobody knows why." If the instructor believes that the proposition is true, the instructor puts a bunch of good prizes in the TRUE box and nothing in the FALSE box. Otherwise, the instructor fills the FALSE box with less-good prizes. The class comes in, and the instructor explains the rules. Then she spends 5 minutes trying to persuade the class that she believes the proposition. After that, people who think she actually believes it line up at the TRUE box, and everyone else lines up at the FALSE box. Everyone who guessed right gets a prize from their box. If you guess TRUE and you're right, your prize is better than if you guess FALSE and are right. Repeat this for a few propositions, and it's at least a useful test for whether you can separate what you want from what seems plausible.

Replies from: JGWeissman
comment by JGWeissman · 2012-05-25T21:34:20.969Z · LW(p) · GW(p)

If the prize for correctly answering "true" is 10 times as good as the prize for correctly answering "false", then you really should be about 91% confident the correct answer is "false" before you give that answer.
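Spelling out the arithmetic, with p the probability that the proposition is false and the prizes treated as worth 10 and 1:

```latex
% Answer FALSE only when its expected prize beats that of answering TRUE:
p \cdot 1 > (1 - p) \cdot 10
\quad\Longleftrightarrow\quad
p > \tfrac{10}{11} \approx 0.91
```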

Replies from: HonoreDB
comment by HonoreDB · 2012-05-25T21:52:02.543Z · LW(p) · GW(p)

Yup. The propositions need to be such that you can get more confident than that.

Replies from: JGWeissman
comment by JGWeissman · 2012-05-25T22:02:24.613Z · LW(p) · GW(p)

My point was that being biased to answer "true", even if "false" is more likely to be correct, is a rational strategy. This problem could be eliminated if the good effects of the correct answer being "true" were independent of getting the right answer. That is, if the correct answer is "true" you get 10 points, and if you answer correctly you get 1 point. That way, you want the answer to be "true", but it is not rational to let this have any effect on your answer.
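A small sketch comparing the two reward schemes, with p standing for the probability that the proposition is actually true (the point values follow the comment above; the function names are just for illustration):

```python
def original_scheme(p):
    # 10 points for correctly answering TRUE, 1 point for correctly answering FALSE
    ev_true = p * 10
    ev_false = (1 - p) * 1
    return "TRUE" if ev_true > ev_false else "FALSE"

def modified_scheme(p):
    # 10 points whenever the proposition *is* true (regardless of your guess),
    # plus 1 point for guessing correctly - only the guess-dependent part matters
    ev_true = p * 10 + p * 1
    ev_false = p * 10 + (1 - p) * 1
    return "TRUE" if ev_true > ev_false else "FALSE"

# At p = 0.3 the proposition is probably false, but the original scheme still
# pays you to guess TRUE; the modified scheme does not.
print(original_scheme(0.3), modified_scheme(0.3))  # TRUE FALSE
```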

Replies from: handoflixue
comment by handoflixue · 2012-05-30T20:57:14.171Z · LW(p) · GW(p)

I assumed the point was to illustrate how, given the motivation (candy), your thinking DOES end up rationally biased towards saying "true". It gives you a clear example of motivated thinking (I want to answer "True" even if that's not the correct answer) and puts you inside the experience while still being entirely aware that it's motivated thinking.

Replies from: JGWeissman
comment by JGWeissman · 2012-05-30T21:37:48.814Z · LW(p) · GW(p)

(I want to answer "True" even if that's not the correct answer)

That's not right. It is still the case that you want to answer "True" if the correct answer is "True", and you want to answer "False" if the correct answer is false. It's just that in the original formulation, the greater reward for correctly answering "True" means that you are willing to take a smaller chance of correctly answering "True" instead of a larger chance of correctly answering "False". Whereas in my modification, you want the correct answer to be "True", but if this actually influences whether you answer "True" or "False", you did something wrong.

Replies from: handoflixue
comment by handoflixue · 2012-05-30T23:31:41.367Z · LW(p) · GW(p)

you are willing to take a smaller chance of correctly answering "True" instead of a larger chance of correctly answering "False"

Sorry, that's what I meant by "I want to answer True even if it's not correct" :)

Whereas in my modification, you want the correct answer to be "True", but if this actually influences whether you answer "True" or "False", you did something wrong.

Yours strikes me as teaching the skill "disentangle mixed incentives" - there's an incentive to be correct, and an incentive for the correct answer to be "TRUE". There's also the skill of recognizing which of these two incentives you have control over (the former only). While these are certainly USEFUL skills, I question whether this really helps people avoid Motivated Cognition. On an abstract level, it seems like they might be related, but I don't think it's the sort of exercise that would build an intuitive sense of "Oh, wait, I'm doing motivated cognition here!"

I feel the original exercise, however, does a good job, because it puts the person in a position where they can SEE that they're doing motivated cognition. It lets them get a good look at what it FEELS like to be doing motivated cognition, what it looks like. Once they can recognize it, it's a LOT easier to fight it :)

comment by [deleted] · 2015-06-18T04:10:01.702Z · LW(p) · GW(p)

I've been reading about the CFAR exercise prizes. Now that there are some great tools I can learn, any tips for mitigating the initial pain of going from any given irrational state to a more rational one? Any stories of how learning these rationality exercises has made an impact on some part of the quality of your lives?

comment by thomblake · 2012-05-29T18:25:07.337Z · LW(p) · GW(p)

Exercise: Trick people into rationalizing

Engineer a situation where subjects will perform an action because you tricked them into doing it, and then ask them for reasons why they performed that action. Specifics here that actually work are beyond my pay-grade. But I'm reminded of an experiment where a subject was hypnotized and given an umbrella. After the hypnosis, the researcher asked "Why do you have an umbrella?" and the subject, initially looking a bit shocked at the umbrella, answered "I thought it was going to rain".

It would help to have a list of reliable triggers to make people perform actions without conscious decision, which most people wouldn't be aware of.

This is conceptually motivated by the finding that giving a child specific warnings is less effective than giving nonspecific warnings. Supposedly, if given a vague warning the child will make up their own reasons for avoidance of bad behavior, like "I didn't really want to do that anyway."

And the point of this, of course, is to have a definitive case of rationalization to point to, in order to train the ability to spot rationalization in oneself.

comment by Benquo · 2012-05-29T18:07:55.648Z · LW(p) · GW(p)

Think about a time when you were badly disappointed. Then think of any opportunities you had beforehand to notice that things weren't going to turn out the way you'd expected. In particular, cases where you came across an argument or evidence that in hindsight should have changed your opinion, but which you dismissed with a clever argument, or by not thinking about it.

Think about what you could have done in the time you had between your missed opportunity and the disappointment. Plans you could have changed to make the best of a bad situation. Mentally contrast the actual outcome with the outcome had you been forewarned, and allow yourself to be very annoyed at your past self for throwing away time.

comment by Incorrect · 2012-05-26T18:43:58.144Z · LW(p) · GW(p)

Instructor writes a decision on the board and asks all participants to give factors to consider before making that decision. Participants are then asked as a group if each factor (for typical possible values of the factor) is 1. necessary and 2. sufficient to determine whether the decision should be taken.

comment by Incorrect · 2012-05-26T04:46:42.186Z · LW(p) · GW(p)

I would ask Lucy if she would still eat the cake if the sugar industry were doing fine. More generally, I would ask if there is any action Lucy is taking that she would not take if the sugar industry were doing fine.

Ask the participants to come up with a belief that they feel is important to them but hasn't been necessary (though they may mistakenly think it sufficient) for making any meaningful decisions.

Replies from: Benquo
comment by Benquo · 2012-05-29T17:31:11.064Z · LW(p) · GW(p)

Perhaps Lucy would do well to ask:

What if eating the cake somehow harmed the sugar industry? How would I feel about that?

comment by private_messaging · 2012-05-27T18:30:29.711Z · LW(p) · GW(p)

One needs to avoid "sloppy cognition". If you are writing a mathematical proof to show off, and really want it to be correct, you still won't make a mistake as long as you are at least somewhat competent at writing proofs. If you are a programmer who wants your software to work, and inevitably wants to believe it'll work, then as long as you are competent and not sloppy in your thinking - a discipline you can acquire by training - you won't make extra mistakes just because you want a particular outcome.

On the other hand, if your thinking is sloppy, somehow eliminating motivated cognition won't help you a whole lot. It might marginally decrease the error in circumstances where the inference chain is very short and some sense gets through despite the sloppiness.

And of course, at the worst end of sloppy is thinking where there is no correct logical pathway from anything true to what you're thinking, or where all pathways are too computationally expensive to traverse - in which case you're doomed to nonsense, motivated or not (the best you can do in such circumstances is refrain from giving an answer).

edit: TL;DR: if your thinking style can be affected by 'motivated cognition', it is most likely sloppy enough that the result is entirely untrustworthy even if you remove the 'motivated cognition' from the equation. The same goes for other biases. No, removal of biases does not give you magical computing superpowers allowing you to answer dramatically harder questions. And unmotivated sloppiness is still sloppiness.