How do you notice when you're rationalizing?

post by AnnaSalamon · 2012-03-02T07:28:21.698Z · LW · GW · Legacy · 85 comments

How do you notice when you're rationalizing?  Like, what *actually* tips you off, in real life?

I've listed my cues below; please add your own (one idea per comment), and upvote the comments that you either: (a) use; or (b) will now try using.

I'll be using this list in a trial rationality seminar on Wednesday; it also sounds useful in general.

85 comments

comment by Will_Newsome · 2012-03-02T10:28:20.621Z · LW(p) · GW(p)

Cue: Any time my brain goes into "explaining" mode rather than "thinking" ("discovering") mode. These are rather distinct modes of mental activity and can be distinguished easily. "Explaining" is much more verbal and usually involves imagining a hypothetical audience, e.g. Anna Salamon or Less Wrong. When explaining I usually presume that my conclusion is correct and focus on optimizing the credibility and presentation of my arguments. "Actually thinking" is much more kinesthetic and "stressful" (in a not-particularly-negative sense of the word) and I feel a lot less certain about where I'm going. When in "explaining" mode (or, inversely, "skeptical" mode) my conceptual metaphors are also more visual: "I see where you're going with that, but..." or "I don't see how that is related to your earlier point about...". Explaining produces rationalizations by default but this is usually okay as the "rationalizations" are cached results from previous periods of "actually thinking"; of course, oftentimes it's introspectively unclear how much actual thought was put into reaching any given conclusion, and it's easy to assume that any conclusion previously reached by my brain must be correct.

Replies from: Will_Newsome, lavalamp, Will_Newsome, CriticalSteel2
comment by Will_Newsome · 2012-03-03T01:41:07.601Z · LW(p) · GW(p)

(Both of these are in some sense a form of "rationalization": "thinking" being the rationalization of data primarily via forward propagation to any conclusionspace in a relatively wide set of possible conclusionspaces, "explaining" being the rationalization of narrow conclusionspaces primarily via backpropagation. But when people use the word "rationalization" they almost always mean the latter. I hope the processes actually are sufficiently distinct such that it's not a bad idea to praise the former while demonizing the latter; I do have some trepidation over the whole "rationalization is bad" campaign.)

comment by lavalamp · 2012-03-02T23:01:42.950Z · LW(p) · GW(p)

Yeah, that's what it feels like to me, too.

comment by Will_Newsome · 2012-03-02T14:06:18.342Z · LW(p) · GW(p)

Yo, people who categorically or near-categorically downvote my contributions: assuming you at least have good intentions, could you please exercise more context-sensitivity? I understand this would impose additional costs on your screening processes but I think the result would be fewer negative externalities in the form of subtly misinformed Less Wrong readers, e.g. especially in this case Anna Salamon, who is designing rationality practices and would benefit from relatively unbiased information to a greater extent than might be naively expected. Thanks for your consideration.

Replies from: gjm
comment by gjm · 2012-03-03T03:01:03.578Z · LW(p) · GW(p)

I don't normally downvote your contributions (and indeed had just upvoted one), but I downvoted this one for whining about downvotes. (Especially as its parent is actually at +13 right now -- maybe it was at -3 or something when you originally wrote the above, though.)

Anyone who categorically or near-categorically downvotes your contributions is unlikely to be swayed by a polite request for them not to do so.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-03T03:20:52.282Z · LW(p) · GW(p)

Anyone who categorically or near-categorically downvotes your contributions is unlikely to be swayed by a polite request for them not to do so.

Of course, but in this case it seemed deontologically necessary to request it anyway; I would feel guilty if I didn't even make a token effort to keep people from being needlessly self-defeating. This happens to me all the time: "if it's normally distributed then you should just straightforwardly optimize for the median outcome" versus "a heavy-tailed distribution is more accurate or at least acts as a better proxy for accuracy, we should optimize for rare but significant events on the tails". I feel like the latter is often the case but people systematically don't see it and subsequently predictably shoot their own feet off.

ETA: "To not forget scenarios consistent with the evidence, even at the cost of overweighting them; to prioritize low relative entropy over low expected absolute error, as a proxy for expected costs from error."

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-03T04:14:07.408Z · LW(p) · GW(p)

If you consider it important that certain contributions not be unfairly downvoted, and you consider it likely that making those contributions under your name will result in them being unfairly downvoted, it would seem to follow that you consider it important not to make those contributions under your name. No?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-03T04:19:15.403Z · LW(p) · GW(p)

It does follow, but I might still take the lesser of two evils and post it anyway. It's true that if I had used a different name that would have been strictly better; for some reason that idea hadn't occurred to me. (Upvoted.) In retrospect I should have found the third option, but in practice when commenting on LW I'm normally already feeling as if I've gone out of my way to take a third option, and I feel that if I kept on in that vein I would get paralyzed and super-stressed. Perhaps I should update once again towards thinking harder and more broadly, even at the cost of an even greater risk of paralysis.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-03T05:26:52.116Z · LW(p) · GW(p)

Well, I endorse nonparalysis.

That said, I sometimes get both broad thinking and nonparalysis by thinking a dilemma through after I've made (and implemented) a decision and then, if I find a viable third option, noting it in my head so that it comes to mind more readily the next time I'm faced with a similar decision.

comment by CriticalSteel2 · 2012-03-02T15:17:27.163Z · LW(p) · GW(p)

"When explaining I usually presume that my conclusion is correct and focus on optimizing the credibility and presentation of my arguments"

That's because your sentences are badly formed.

As a debater (I know how much you guys hate debating and testing your theories), I have to do a lot of explaining, and if I include any flaws or fallacies from critical thinking (something else you don't know about), then I know, and my opponent should know, that this is a mistake. So illuminating them helps to create a logical explanation. And that is our goal.

Too bad no one here seems to know diddly squat about critical thinking OR debating.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-03T02:04:18.538Z · LW(p) · GW(p)

What brings you to Less Wrong, CriticalSteel2?

comment by AnnaSalamon · 2012-03-02T07:32:37.921Z · LW(p) · GW(p)

Cue for noticing rationalization: I find my mouth responding with a "no" before stopping to think or draw breath.

(Example: Bob: "We shouldn't do question three this way; you only think so because you're a bad writer". My mouth/brain: "No, we should definitely do question three this way! [because I totally don't want to think I'm a bad writer]" Me: Wait, my mouth just moved without me being at all curious as to how question three will play out, nor about what Bob is seeing in question three. I should call an interrupt here.)

Replies from: multifoliaterose, Alexei, Benquo
comment by multifoliaterose · 2012-03-03T01:02:22.831Z · LW(p) · GW(p)

Bob: "We shouldn't do question three this way; you only think so because you're a bad writer". My mouth/brain: "No, we should definitely do question three this way! [because I totally don't want to think I'm a bad writer]"

It's probably generically the case that the likelihood of rationalization increases with the contextual cue of a slight. But one usually isn't aware of this in real time.

comment by Alexei · 2012-03-02T17:43:27.696Z · LW(p) · GW(p)

Sometimes this happens for me when the person has just made (or is about to make) an invalid argument that I've heard before and know exactly how to correct/retort.

comment by Benquo · 2012-03-02T13:35:15.588Z · LW(p) · GW(p)

This one is a big clue for me too.

comment by Paul Crowley (ciphergoth) · 2012-03-02T08:04:15.020Z · LW(p) · GW(p)

Cue: I find that I have multiple, independent reasons to support the same course of action. As with Policy Debates Should Not Appear One-Sided, there is every reason to expect multiple lines of evidence to converge on the truth, but there is no reason to expect multiple independent considerations to converge on the same course of action.

Replies from: billswift
comment by billswift · 2012-03-02T12:08:18.780Z · LW(p) · GW(p)

You can find multiple, independent considerations to support almost any course of action. The warning sign is when you don't find points against a course of action. There are almost always multiple points both for and against any course of action you may be considering.

comment by Viliam_Bur · 2012-03-02T10:36:13.930Z · LW(p) · GW(p)

When I can't explain my reasoning to another person without a feeling of guilt that I am slightly manipulating them.

I am better at noticing when I lie to other people than when I lie to myself. Maybe it's because I have to fully verbalize my arguments, so it is easier to notice the weak parts that would otherwise be skipped. Maybe it's because when I talk to someone, I model how they would react if they had all the information I have, and this gives me an outside view.

comment by shminux · 2012-03-02T15:52:40.017Z · LW(p) · GW(p)

When I feel relief that I did not have to change my point of view after thinking through something. ("[Person] representing [group] said that [claim], therefore I should continue supporting [position].")

comment by Will_Newsome · 2012-03-02T10:16:28.742Z · LW(p) · GW(p)

Orthogonal: Noticing when you're rationalizing is good, but assuming that you might be rationalizing and devising a plan of information acquisition and analysis that is relatively resilient to rationalization is also a safe bet.

Replies from: khafra
comment by khafra · 2012-03-02T16:16:47.004Z · LW(p) · GW(p)

A rationalization-robust algorithm like that sounds like a good subject for a discussion post.

comment by Eugine_Nier · 2012-03-03T03:42:36.180Z · LW(p) · GW(p)

I noticed that there is a certain perfectly rational process that can feel a lot like rationalization from the inside:

Suppose I were to present you with plans for a perpetual motion machine. You would then engage in a process that looks a lot like rationalization to explain why my plan can't work as advertised.

This is of course perfectly rational, since the probability that my proposal would actually work is tiny. However, this example does leave me wondering how to separate rationalization from rational reasoning that merely has excessively strong priors.

Replies from: Sohum
comment by Sohum · 2012-03-08T13:18:17.766Z · LW(p) · GW(p)

What's happening there, I think, is that you have received a piece of evidence ("this guy's claims to have designed a perpetual motion machine") and you, upon processing that information, slightly increase your probability that perpetual motion machines are plausible and highly increase your probability that he's lying or joking or ignorant. Then you seek to test that new hypothesis: you search for flaws in the blueprints first because your beliefs say you have the highest likelihood of finding new evidence if you do so, and you would think it more likely that you've missed something than that the machine could actually work. However, after the proper sequence of tests all coming out in favour, you would not be opposed to building the machine to check; you're not opposed to the theoretical possibility that we've suddenly discovered free energy.

In rationalisation, at least the second and possibly both parts of the process differ. You seek to confirm the hypothesis, not test it, so check what the theoretical world in which the hypothesis is unarguably false feels like, maybe? Checking whether you had the appropriate evidence to form the hypothesis in the first place is also a useful check, though I suppose false positives would happen on that one.
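
A toy version of the first update described above, in Python; every number here is invented purely for illustration, and the only point is the shape of the calculation (the prior on working perpetual motion machines dominates everything else):

```python
# All numbers below are assumptions made up for this sketch; only the structure matters.
p_works = 1e-12            # prior: a working perpetual motion machine exists
p_mistaken = 1 - p_works   # prior: the claimant is lying, joking, or ignorant

p_claim_given_works = 0.9       # an inventor with a working machine would probably say so
p_claim_given_mistaken = 1e-4   # people occasionally make such claims anyway

# Bayes' rule: P(works | claim) = P(claim | works) * P(works) / P(claim)
posterior_works = (p_claim_given_works * p_works) / (
    p_claim_given_works * p_works + p_claim_given_mistaken * p_mistaken
)
print(posterior_works)  # ~9e-9: larger than the prior, but still astronomically small
```

Even a claim that would be strong evidence under other priors barely moves the posterior on "the machine works", which is why scrutinizing the blueprints for a flaw first is the rational place to look.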

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-03-10T20:15:49.867Z · LW(p) · GW(p)

In rationalization you engage in motivated cognition; this is very similar to what happens in the perpetual motion example.

comment by gwern · 2012-03-02T16:44:53.084Z · LW(p) · GW(p)

The twinge of fear. When I came across Bruine de Bruin et al 2007 as a cite for the claim that sunk costs lead to bad real-world consequences (not the usual lab hypothetical questionnaires), I felt a twinge of sickness in my stomach - and realized that I had now bought thoroughly into my sunk cost essay, and that I would have to do an extra-careful job reading that paper since I could no longer trust my default response.

comment by Vladimir_Nesov · 2012-03-02T12:03:48.003Z · LW(p) · GW(p)

Cue: The conclusion I'm investigating is unlikely to be correct, in the sense that I feel I couldn't yet have enough understanding of the problem to single out this particular conclusion, and so any overly specific conclusion is suspect and not worth focusing on. It's the "sticky conclusion" effect: once considered, an idea wants to be defended.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-03-03T02:07:11.646Z · LW(p) · GW(p)

It's the "sticky conclusion" effect: once considered, an idea wants to be defended.

Do you generally find it useful to semi-anthropomorphize ideas/representations/drives like that?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-03-03T02:21:07.065Z · LW(p) · GW(p)

Particularly in programming, it's very useful to think of elements of a project (of varying scale) as knowing certain things, understanding certain ideas, having certain skills or ensuring certain properties. Once you have enough skill, many subtasks can be thought of as explaining some idea to the program or teaching it to do new tricks. (I'm not talking about anything AI-related, just normal software development.)
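
A minimal sketch of what that framing looks like in ordinary code; the class and names here are hypothetical, just to illustrate the "this object knows X and ensures Y" way of speaking:

```python
class RequestBudget:
    """Knows each client's request budget and ensures no client exceeds it."""

    def __init__(self, max_requests):
        self.max_requests = max_requests
        self._counts = {}  # client id -> number of requests seen so far

    def allow(self, client_id):
        # The rest of the program simply "asks" this object, rather than
        # re-deriving the budget rule everywhere it is needed.
        count = self._counts.get(client_id, 0)
        if count >= self.max_requests:
            return False
        self._counts[client_id] = count + 1
        return True
```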

(The statement of mine you quoted is somewhat wrong; it should either be read as referring to the hypothetically more common rationalization dynamic that I don't observe much, or else be changed to "once considered, an idea wants to take up more attention than it's probably worth".)

comment by Giles · 2012-03-02T14:39:37.629Z · LW(p) · GW(p)

Cue: I feel emotionally invested in a particular fact being true. I feel like a blue or a green. May be related to Anna's point about ugh fields.

comment by AnnaSalamon · 2012-03-02T07:46:54.908Z · LW(p) · GW(p)

Let's also try the converse problem: what cues tip you off, in real life, that you are actually thinking, and that there is actually something you're trying to figure out? Please stick such cues under this comment.

Replies from: AnnaSalamon, AnnaSalamon, Giles, AnnaSalamon, shokwave, AnnaSalamon, Will_Newsome
comment by AnnaSalamon · 2012-03-02T07:57:00.975Z · LW(p) · GW(p)

Cue of not rationalizing: I feel curious -- a chasing, seeking, engaged feeling, like a cat chasing a mouse.

comment by AnnaSalamon · 2012-03-02T08:14:18.240Z · LW(p) · GW(p)

Cue of not rationalizing: I find myself thinking new thoughts, hearing new ideas (if talking to someone else), and making updates I'm interested in -- even if they aren't about the specific decision I'm trying to make. The experience feels like walking through a forest looking around, and being pleasantly surprised by parts of my surroundings ("oh! maybe that is why clothes with buttons and folds and detail are more fashionable").

comment by Giles · 2012-03-02T14:33:52.680Z · LW(p) · GW(p)

Cue of not rationalizing: I feel smart. I feel like many different parts of my brain are all engaged on the question at hand. It feels like what I'm saying could be turned into mathematics or a computer program.

comment by AnnaSalamon · 2012-03-02T07:57:24.001Z · LW(p) · GW(p)

Cue of not rationalizing: I don't know what conclusion I'll come to.

comment by shokwave · 2012-03-03T23:17:45.620Z · LW(p) · GW(p)

Cue for actual thinking: I spontaneously produce lines of research, not lines of argumentation. I feel relief when I can classify a problem as "already decided by empirical data" and I begin to seek that data out.

comment by AnnaSalamon · 2012-03-02T07:58:26.277Z · LW(p) · GW(p)

Cue of not rationalizing: I am picturing the two possible outcomes (from deciding correctly/incorrectly), feel fear (lest I end up with the bad outcome), and find my attention spontaneously on the considerations that seem as though they'll actually help with that.

comment by Will_Newsome · 2012-03-03T06:10:01.152Z · LW(p) · GW(p)

Cue of not rationalizing: I feel confident that I'm not rationalizing. If I feel uncertain about whether or not I'm rationalizing then it can go either way and it's a sign I should be on guard, but if I don't feel any note of self-deception then that's normally pretty strong evidence that I'm not rationalizing. I'm not sure if this would be true for others; I've definitely seen people blatantly rationalizing who claimed they felt they were not.

comment by AnnaSalamon · 2012-03-02T07:44:52.898Z · LW(p) · GW(p)

Cue for noticing rationalization: I notice that I want a particular conclusion, and that I don't actually anticipate updates from the considerations that I'm "thinking through" (if alone) or saying/hearing (if conversing).

Replies from: torekp
comment by torekp · 2012-03-04T01:55:25.274Z · LW(p) · GW(p)

The first part is the big one for me. When I'm reading a study and see "the results of our experiment were..." and I know what I want to see next, warning bells go off.

comment by GuySrinivasan · 2012-03-02T20:34:33.922Z · LW(p) · GW(p)

Cue: I say the word "clearly", "hopefully", or "obviously".

This definitely doesn't always indicate rationalization, but I say one of those words (and probably some others that I haven't explicitly noticed) with much greater probability if I'm rationalizing than if I'm not, and saying them fairly reliably kicks off an "ORLY" process.

Replies from: fiddlemath
comment by fiddlemath · 2012-03-29T04:19:43.595Z · LW(p) · GW(p)

Oooh, I just realized: if you can pick out words like this when you're writing, and you can code in your text editor, then your computer can catch this cue for you (like writegood-mode in Emacs does). I've had this prevent ugly sentences in technical writing, but catching "rationalization words" in written sentences would also be useful when, say, writing a journal or writing down an argument.

More awesomely, the highlighting is immediate in that mode, so you get tight feedback about the sentences you're typing.
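
A minimal sketch of that idea outside any particular editor, in Python rather than Emacs Lisp; the cue-word list and function name are made up for illustration (writegood-mode ships its own word lists):

```python
import re

# Hypothetical starter list of cue words; extend it with whatever phrases trip you up.
CUE_WORDS = ["clearly", "obviously", "hopefully", "at least", "can't I just"]

def flag_cues(text, cue_words=CUE_WORDS):
    """Return (line_number, cue, line) triples for lines containing a cue word."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for cue in cue_words:
            if re.search(r"\b" + re.escape(cue) + r"\b", line, re.IGNORECASE):
                hits.append((lineno, cue, line.strip()))
    return hits

if __name__ == "__main__":
    sample = ("Clearly we should ship this now.\n"
              "The data might say otherwise.\n"
              "Can't I just skip the second review?")
    for lineno, cue, line in flag_cues(sample):
        print(f"line {lineno}: '{cue}' -> {line}")
```

Hooked into an editor's highlighting, the same check gives the tight, immediate feedback described above.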

comment by [deleted] · 2012-03-02T17:45:21.869Z · LW(p) · GW(p)

Cue: Noticing that I'm only trying to think of arguments for (or against, if i'm arguing with someone) some view.

comment by Grognor · 2012-03-02T11:46:23.173Z · LW(p) · GW(p)

Self-supplication is a strong indicator of rationalization in me. Phrases like "at least", "can't I just", and "when can I..." are so far always indicators that I'm trying to get myself to do something I know I shouldn't, or to stop doing something I know I should be doing.

comment by AnnaSalamon · 2012-03-02T07:56:20.310Z · LW(p) · GW(p)

Cue: I have an "ugh field" across part of my mental landscape -- it feels almost like literal tunnel vision, and is the exact same feeling as the "ugh field" I might get around an unpaid bill. (Except this time it's abstract; it descends out of nowhere while I'm having a conversation about whether we should hire so-and-so, or what SingInst strategy should be, or whatever).

Replies from: fiddlemath
comment by fiddlemath · 2012-03-06T07:31:57.986Z · LW(p) · GW(p)

Wait, how do you notice an ugh field? I only notice these when I'm viewing myself "in far mode", by trying to explain thoughts or emotions that I have, but wouldn't expect "someone like me" to have.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2012-03-06T07:50:36.176Z · LW(p) · GW(p)

Huh, really? For me it feels, um, the way I described it above -- I go to think about an important-email-I-need-to-send, or whatever, and I feel physically tense, and averse, and if I wasn't paying attention it used to be that before I'd exercised any conscious control I'd be checking the LW comment feed or something, to move away from the painful stimulus. And if it's a bad case, and something I'm not yet thinking about, it feels as though my eyes stop moving freely across my field of vision, and as though I'm kind of trapped or something, and as though whatever boring thing is in front of me is suddenly intensely absorbing [so that I won't have to think about that other thing]...

Replies from: fiddlemath
comment by fiddlemath · 2012-03-07T07:15:29.592Z · LW(p) · GW(p)

... oh! Thanks, that quite clarifies a few things.

Specifically: I notice my rationalization in the presence of obvious fear or disgust, but hadn't been thinking of strong negative affects as "ugh fields." Before your reply, I hadn't verbally noticed that being suddenly interested in marginal details was a good sign of ugh fields, and thus a likely warning sign for rationalization.

comment by AnnaSalamon · 2012-03-02T07:40:19.420Z · LW(p) · GW(p)

Cue for noticing rationalization: My head feels tired and full of static after the conversation.

comment by juliawise · 2012-03-04T16:19:30.590Z · LW(p) · GW(p)

When I get into a particular negative emotional state, I make up reasons that I feel that way. When I start a sentence with a bitter "Fine!" or "I should have known better," it's a guarantee that the next statement out of my mouth ("you obviously don't care", "there's no point in cooking for you because you hate food," etc.) will be something I know to be false but that, if it were true, would explain why I feel rotten. Physically, the cue is me turning away from the person I'm speaking to. The actual explanations ("I like tomatoes, and I want you to like them too", "I'm tired", "My brain chemistry might be out of whack") are not as satisfying to say out loud as a condemnation of the other person.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-04T21:22:57.366Z · LW(p) · GW(p)

(nods) Boy, am I familiar with this one.
That said, I have found that saying the actual explanation with all of the nonverbal signals of the false-but-satisfying condemnation is startlingly satisfying for me. My closer friends have learned to accept it as a legitimate move in our conversations; most of them respond to my explanation and ignore my body language.
It puzzles third parties, though.

Replies from: juliawise
comment by juliawise · 2012-03-06T23:18:24.467Z · LW(p) · GW(p)

So a haughty back-turn combined with "I had a bad day and I'm taking it out on you" is satisfying? Hmm, I'll have to try that.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-03-06T23:34:39.301Z · LW(p) · GW(p)

Yup. Dunno if it works for anyone else, but it delights me that it works for me.

comment by Giles · 2012-03-03T16:15:19.433Z · LW(p) · GW(p)

Cue: I feel low status. In low status mode, I feel I'm on the defensive and I have something to prove. In high status mode, I think there are two effects which help avoid rationalization:

  • If I feel confident in what I'm saying, then I can entertain the opposing arguments because I don't feel I have anything to fear from them
  • If I feel high status, I feel less afraid of being forced to change my mind

Edit: ah, Vladimir_Nesov said something similar on the other thread

comment by Alexei · 2012-03-02T17:42:12.461Z · LW(p) · GW(p)

I notice I'm rationalizing when, after I lose my first defensive argument, I have another lined up and ready to go. If that continues in succession (2 or 3 times), then I have to stop and rethink my position.

I have a thought in my head that tells me "I'm being irrational", and I try to shut it off, or not care about it. Usually this leads to an increased level of frustration and anger.

When in my mind I already realized I'll have to change my opinion, but I don't want to admit that right now, during the argument.

comment by Dr_Manhattan · 2012-03-02T16:57:35.223Z · LW(p) · GW(p)

I think I perceive a pattern in my speech, possibly a change of tone in my voice, a certain pattern of thoughts (repetitive), and certain goals (convincing others/self).

Replies from: AnnaSalamon
comment by AnnaSalamon · 2012-03-02T17:32:41.157Z · LW(p) · GW(p)

That's helpful. Can you be more specific?

Replies from: Dr_Manhattan
comment by Dr_Manhattan · 2012-03-02T19:08:32.797Z · LW(p) · GW(p)

Hard to be specific about the exact wetware patterns, but I think I can hear myself sounding like a salesman :).

Another signal is not recognizing any validity in my opponent's argument (or 'recognizing' it just for show). Most of the time when you're "right" you should come to the conclusion that someone is very or even overwhelmingly wrong, but almost never completely wrong (unless the argument is purely mathematical).

comment by Giles · 2012-03-02T14:37:33.710Z · LW(p) · GW(p)

Cue: my internal monologue contains the sequence "... which is obviously just a rationalization, but..." and then proceeds as if it were true.

comment by Thomas · 2012-03-02T14:34:44.689Z · LW(p) · GW(p)

I never notice. Either I don't do it, or I just don't see it.

But I see other people rationalizing a lot.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2012-03-02T17:34:52.071Z · LW(p) · GW(p)

Have you tried asking those close to you (significant others, close colleagues, long-term best friends) when they see you rationalizing?

comment by timtyler · 2012-03-02T13:51:28.546Z · LW(p) · GW(p)

People persistently disagreeing with me is one sign.

The more numerous they are, the smarter they are, and the more coherent-sounding their arguments, the more likely it is that there's a problem at my end.

Of course, I often ignore such signals. For example, I still think that global warming is good, that our distant ancestors were probably clay minerals, that memetics is cool, that there will be a memetic takeover - and lots of other strange things.

comment by shokwave · 2012-03-03T23:24:11.063Z · LW(p) · GW(p)

Cue for rationalising: I feel like I'm 'rocking back' or 'rolling back down the bowl'. Hmm. Let me clarify.

A ball in a bowl will roll around when the bowl is shaken, but it goes up the side, reaches a zenith, and rolls back down. Similar for trying to scale a steep hill at a run; you go up, reach a zenith, come back down. And again for balance: you wobble in a direction, find a point of gaining balance, and return to center.

The cue is feeling that in a conversation, discussion, or argument. We sort of roll around discussing things, something comes up that I rationalise away, and it feels like I've regained my balance: the conversation tipped towards something, wobbled a bit, and then rocked back to center. Often it's in the form of reaching that zenith as a lull in the conversation, and I come in with "[reason], so we don't need to worry about that", where [reason] is rationalised.

comment by [deleted] · 2012-03-03T01:16:32.361Z · LW(p) · GW(p)

Cue: a feeling of urgency, of having to come up with a counterargument to every possible criticism of my reasoning before someone raises it (I sometimes think by imagining a discussion with someone, so I don't necessarily have to anticipate actually talking about it for this feeling to come up).

comment by multifoliaterose · 2012-03-03T01:06:27.844Z · LW(p) · GW(p)

Cue: Non-contingency of my arguments (such that the same argument could be applied to argue for conclusions which I disagree with).

comment by sixes_and_sevens · 2012-03-02T17:17:44.734Z · LW(p) · GW(p)

The feeling that I'm getting into some sort of intellectual debt. Kind of like a little voice saying "you may say this fits with your understanding of the situation, but you know it doesn't really, and if it doesn't come back to bite you, it's because you're lucky, not because you're right. Remember that, bucko."

Quite why it uses the word "bucko" is beyond me. I never do.

comment by Giles · 2012-03-02T14:42:05.631Z · LW(p) · GW(p)

Cue: my internal monologue (or actual voice) is saying one thing, and another part of my mind that doesn't speak in words is saying "bullshit".

Replies from: AnnaSalamon
comment by AnnaSalamon · 2012-03-02T17:33:29.581Z · LW(p) · GW(p)

What does it feel like, when it says that? How do you notice?

Replies from: Giles
comment by Giles · 2012-03-03T03:47:25.138Z · LW(p) · GW(p)

It feels the same as when I feel that someone else is bullshitting. My mind thinks it can recognize "good" and "bad" arguing styles (presumably learned from lots of training data). I can skim through an essay and find points where it feels like it's slipping into irrationality without necessarily paying attention to what all the words are saying.

As for how I notice - I just do. My mind brings it to my attention. I assume there are things I can do to train this ability or put myself in the right mood for it, but that's not something I've explored.

Sorry if this wasn't useful. It's hard to describe how something feels from the inside and have it come out coherent.

comment by Vladimir_Nesov · 2012-03-02T13:09:07.521Z · LW(p) · GW(p)

Cue: An argument that is being advanced for or against a conclusion doesn't distinguish it from its alternatives (I mostly notice this in others).

comment by Steven_Bukal · 2012-03-02T10:02:24.465Z · LW(p) · GW(p)

Cue for noticing rationalization: In a live conversation, I notice that the time it takes to give the justification for a conclusion when prompted far exceeds the time it took to generate the conclusion to begin with.

Replies from: billswift, Will_Newsome
comment by billswift · 2012-03-02T12:13:31.355Z · LW(p) · GW(p)

That is nearly always true, not just when a person is rationalizing. If you are correctly performing incremental updates of your beliefs, going back and articulating what caused your beliefs to be what they currently are is time-consuming, and sometimes not possible except by careful reconstruction.

comment by Will_Newsome · 2012-03-02T10:08:09.363Z · LW(p) · GW(p)

I personally rarely find that to be evidence of rationalization; instead it generally means that I know various disjunctive supporting arguments but am loth to present any individual one for fear of it being interpreted as my only or strongest argument.

comment by AnnaSalamon · 2012-03-02T07:39:57.579Z · LW(p) · GW(p)

Cue for noticing rationalization: I feel bored, and am reciting thoughts I've already thought of.

comment by Dues · 2014-07-14T04:40:24.428Z · LW(p) · GW(p)

Whenever I start to get angry and defensive, that's a sign that I'm probably rationalizing.

If I notice, I try to remind myself that humans have a hard time changing their minds when angry. Then I try to take myself out of the situation and calm down. Only then do I try to start gathering evidence to see if I was right or wrong.

My source on 'anger makes changing your mind harder' was 'How to Win Friends and Influence People'. I have not been able to find a psychology experiment to back me up on that, but it has seemed to work out for me in real life. It suggests that, if you think someone else is rationalizing, then the first step to changing their mind is to get them to be calm. It also seems to suggest that calming yourself down and distancing yourself from whatever generated the rationalization (a fight, a peer group, your parents, etc.) is what you need to do to work through a possible rationalization.

comment by alex_zag_al · 2012-07-08T15:31:53.731Z · LW(p) · GW(p)

For rationalizing an action - I realize that the action is something I really want to do anyway. And then I figure (rightly or wrongly) that all the reasons I just thought up to do it are made up, and I try to think about why I actually want to do it instead.

comment by fubarobfusco · 2012-03-03T00:22:40.582Z · LW(p) · GW(p)

For me, it's caring about means vs. ends.

Rationalizing: I'm thinking about how to justify doing something I'm already comfortable with; adhering to a plan or position I've already accepted. So I invent stories about how the results will be acceptable, that I couldn't change them anyway, that if they are bad it will have been someone else's fault and not mine, etc.

Not rationalizing: I'm thinking about the ends I'm trying to accomplish. I'm not committed to an existing plan or position. I'm curious about what plan will achieve the desired results. If the results turn out bad, I'll critique my own plan and change it (I suppose we call that lightness) rather than blaming someone else.

comment by Sush · 2012-03-02T23:46:22.550Z · LW(p) · GW(p)

Like some other people have said, one of my biggest tip-offs is if I have a strong negative reaction to something. Often this happens if I'm reading a not-particularly-objective report, experiment, treatise or something else which could have been written with strong biases. My mind tends to recoil at the obvious biases and wants to reject the entire thing, but then the rational part of me kicks in and forces me to read through the whole thing before settling on an emotional reaction. After all, a point of being rational is to be able to sieve through other writers' biases to see if they actually have important points buried inside; otherwise you don't know what accuracies you might miss, or what biases you're giving in to yourself.

I find this also happens if I myself have a bias that I've never thought of before; I instantly have a gut reaction that feels at first natural, but almost an instant later, very out of place. Then I realise that I haven't taken in all the information, I haven't evaluated it as objectively as I can, and I haven't arrived at an accurate conclusion. I've just gone, "No, x is bad because y makes me feel bad!"

The other thing (perhaps more importantly for my personal wellbeing) would be if I actually perform an action that is inconsistent with my rationalist world-view, such as if I did something illogical or something that would contradict a view that I hold as being morally and rationally just. These are usually things that I'd permit myself to do without thinking much when I was younger, but now would seem like abhorrent double-standards and 'unjust' behaviour (perhaps they'd even make me feel disproportionately guilty, but it seems to me that often acting illogically should make you feel that way!)

comment by David_Gerard · 2012-03-02T14:47:28.213Z · LW(p) · GW(p)

I feel really right about whatever it is, with many clever and eloquent arguments, but I also sorta know that I can't quite epistemologically justify it.

(Note that I may actually be right - that doesn't mean I wasn't just rationalising.)

comment by Will_Newsome · 2012-03-02T10:16:39.275Z · LW(p) · GW(p)

Cue: Any time I decide that it wouldn't be worth my time to closely examine counterarguments or evidence against a live hypothesis. This normally takes the form of a superficially plausible cost/benefit analysis with a conclusion along the lines of "if you examined every single crackpot theory then you wouldn't have any time to look at things that mattered", which can largely only be justified by a rationalized working model of how mental accounting works.

comment by Dmytry · 2012-03-02T17:39:14.592Z · LW(p) · GW(p)

The problem is not with 'rationalization'. Many mathematical proofs started with an [unfounded] belief that some conjecture is true, yet are perfectly valid, as the belief has been 'rationalized' using solid logic.

The problem is faulty logic; if your logic is even a small bit off at every inference step, then you can steer the chain of reasoning towards any outcome. When you are using faulty logic and rationalizing, you are steering toward some outcome that you want. When you are using faulty logic but genuinely trying to figure out what is true, you just accumulate error like a random walk, which gives a much smaller error over time.
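
A rough simulation of that contrast, under the simplifying (and assumed) model that each inference step adds a small error which either always points toward the desired conclusion (rationalizing) or points in a random direction (honest but sloppy reasoning); all numbers are illustrative only:

```python
import random

def chain_error(steps, per_step_error, biased):
    """Total drift after `steps` inferences, each off by up to `per_step_error`."""
    total = 0.0
    for _ in range(steps):
        e = random.uniform(0, per_step_error)
        # Rationalizing: every error steers toward the desired conclusion.
        # Honest but sloppy: errors point in random directions and largely cancel.
        total += e if biased else random.choice([-1, 1]) * e
    return total

random.seed(0)
trials, steps, eps = 1000, 50, 0.1
biased = sum(abs(chain_error(steps, eps, True)) for _ in range(trials)) / trials
unbiased = sum(abs(chain_error(steps, eps, False)) for _ in range(trials)) / trials
print(f"mean drift, rationalizing: {biased:.2f}")   # grows roughly linearly in steps
print(f"mean drift, unbiased:      {unbiased:.2f}") # grows roughly with sqrt(steps)
```

The biased chain's drift grows roughly linearly with the number of steps, while the unbiased chain's grows only with its square root, which is the sense in which genuine-but-sloppy thinking accumulates much smaller error.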

Other issue - people who are rationalizing are most typically not the slightest bit interested in catching themselves rationalizing.

edit: to clarify. You may have a goal of winning a debate, not care what the true answer is, and come up with an entirely valid proof, if for bad reasons. You may also have a goal of winning a debate, be wrong, and make up some fallacious argument, neglect to update your belief, et cetera. That happens when you are not restricting yourself to arguments that are correct. Or you may have a goal of winning a debate, be wrong, and fail to make an argument, because you are successfully restricting yourself to arguments which are correct and don't use fallacies to argue for what you believe in anyway.

In mathematics, the reasoning is fairly reliable, and it doesn't make the slightest bit of difference whether you arrived at a proof because you wanted to know if the conjecture is really true, or because you wanted to humiliate some colleague you hate, or because you didn't want to lose a debate and didn't want to admit you were wrong. With unreliable reasoning, on the other hand, you produce mistakes whether you are rationalizing or not, albeit when rationalizing you tend to make larger mistakes, or become a mistake factory. Still, you may start off with the good intention of finding out whether some conjecture is true and end up making a faulty proof, or you may start off with a very strong, very ill-founded belief about the conjecture, get lucky enough to be right, and find a valid proof. You can't always trust the arguments you arrived at without rationalizing more than the ones you arrived at while rationalizing.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-03-03T05:30:26.055Z · LW(p) · GW(p)

Why is this getting downvoted?

Replies from: Will_Newsome, Dmytry
comment by Will_Newsome · 2012-03-04T00:51:31.956Z · LW(p) · GW(p)

I've noticed my ability to predict my comment karma has gone down in the last few months, the weirdest example being some of my comments fluctuating about ten to fifteen points within a single day, prompting accusations that I had a bunch of sockpuppets and was manipulating the votes. It's weird and I don't have any good explanation for why LW would suddenly start voting in weird ways. As the site gets bigger the median intelligence tends to drop; does anyone know if there's been a surge in registration lately? It's true I've seen an abnormal amount of activity on the "Welcome to Less Wrong" post. I feel like we might be attracting more people that aren't high IQ, autistic, schizotypal, or OCPD, which would be sad.

Replies from: Dmytry
comment by Dmytry · 2012-03-06T10:10:34.419Z · LW(p) · GW(p)

I generally treat the votes as a mixed indication of clarity and of how much people like or dislike the notion.

Didn't expect any negatives on this post though; then I assumed something wasn't clear and added the edit. I still think that my point is entirely valid though. Good logic does not allow you to make a fallacious argument even if you want to, and bad logic often leads you to wrong conclusions even when you are genuinely interested in knowing the answer; furthermore the rationalization process doesn't restrict itself to assertions that are false.

Replies from: witzvo
comment by witzvo · 2012-07-15T03:48:14.439Z · LW(p) · GW(p)

Perhaps the concern is that the reduction to a logical question, i.e. to the set of premises and axioms, was the faulty part? After all a valid argument from false premises doesn't help anyone, but I'm sure you know that.

comment by Dmytry · 2012-03-03T07:27:03.475Z · LW(p) · GW(p)

Because someone's rationalizing something maybe ;)