Defusing "I can't be that stupid"

post by Hazard · 2018-03-24T14:49:51.073Z · 5 comments

Why does it hurt so much when I think that people I care about are very wrong about something I think is important? If I'm talking with someone I'm not super close with, I'm usually very capable of not getting emotional or riled up when we find important disagreements.

After some thought, it seems like my mind is running a process similar to the one below.

Quick summary:

For some reason, I don't want to accept that someone can matter and be worthy of care, and still be (from what I know) very wrong about very important things. In order to work around this, I slightly dehumanize people when I think they're majorly wrong. When I think someone I care about is majorly wrong, I bounce around in a loop.

I'd guess that the pain and discomfort that comes from disagreeing with close ones about important things is produced by spending time stuck in that loop.
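To make the loop concrete, here's a rough sketch of it in Python-flavored pseudocode. The names (Impression, matters, seems_very_wrong) are made-up stand-ins for fuzzy mental judgments, not anything well-defined:

```python
from dataclasses import dataclass

@dataclass
class Impression:
    # My fuzzy mental model of a person (a hypothetical stand-in).
    matters: bool            # do I accept them as worthy of care?
    seems_very_wrong: bool   # do they seem very wrong about something important?

def broken_loop(p: Impression, max_spins: int = 100) -> str:
    """The decision process I seem to be running, written out as code."""
    for _ in range(max_spins):
        if not p.matters:
            # Dehumanize: if they don't matter, neither does the disagreement.
            return "dismiss them"
        if not p.seems_very_wrong:
            return "no tension; carry on"
        # They matter AND they seem very wrong: the combination my
        # implicit belief refuses to allow. Flinch away by demoting them...
        p.matters = False
        # ...but I do care about them, so the demotion doesn't stick.
        p.matters = True
    return "stuck in the loop: pain and discomfort"

# Disagreeing with someone I care about never resolves; it just spins.
print(broken_loop(Impression(matters=True, seems_very_wrong=True)))
```

The two early returns are the only stable exits. Caring about someone while thinking they're very wrong reaches neither, so the process just spins, which is exactly where the pain shows up.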

I should note that it feels uncomfortable to say that the above process is one that I routinely use, primarily because of how obviously broken it is upon reflection. You'd never catch me audibly saying, or even explicitly thinking, "Dante doesn't matter/isn't human because he doesn't agree with me."

There was a non-trivial delay between when I first thought I could be doing some sort of dehumanization, and when I seriously considered it as a possibility. I don't like how long that took. I think bugs like this can often be hard to deal with because there's a sense of, "There's no way I could be that stupid."

Sometimes it seems like the mechanism is, "I would feel bad if I turned out to be 'that stupid', and I don't want to feel bad, so I'm not going to consider it." But it seems like there are also plenty of instances where it's, "I genuinely assign a low probability to being 'that stupid', and so I'm not going to investigate it."

When I queried this belief, I found something like, "Given that I am capable of feats of great intelligence, all parts of me should be very intelligent, because that's how intelligent things work[citation not needed because I'm an implicit belief]." Which, again, once I say it out loud, I can see right through it.

Hmmmm... wait, so is my conclusion just that you should pay attention to your beliefs, and then ask, "Why do you believe what you believe?"

It sure is :)

Process to Apply This Post

When you hit a painful disagreement with someone you care about, notice the belief driving the reaction and ask, "Why do I believe what I believe?" Feel free to comment with examples from your own experience.

5 comments


comment by Kaj_Sotala · 2018-03-24T18:14:54.918Z
> I should note that it feels uncomfortable to say that the above process is one that I routinely use, primarily because of how obviously broken it is upon reflection. You'd never catch me audibly saying, or even explicitly thinking, "Dante doesn't matter/isn't human because he doesn't agree with me."

Upvoted for major epistemic honesty that everyone would do well to have more of. :) (at least to the point of being honest with themselves, even if they wouldn't be up to admitting this kind of thing in public)

comment by Dagon · 2018-03-26T16:10:47.113Z

I'm confused at "not wanting to admit stupidity" being the main takeaway. Humility is worthwhile, and you can become more correct by recognizing that you might be wrong. But that doesn't seem to break the loop, in cases where you're (probably) right.

I generally ask a few more questions than "do they matter". I also ask "does it matter if one of us is wrong on this topic", and "is further argument helping to uncover any cruxes or refine either of our beliefs". It doesn't even occur to me that "do they matter" could change over a short time, but my beliefs about the topic's importance or my beliefs about the utility of the conversation could easily change.

comment by Hazard · 2018-03-26T19:54:46.879Z

The do's and don'ts of conversing with people I disagree with are a useful topic, but not the one I was writing about.

I notice that many of my conversations do not go the way they would if I were always operating on best principles. My claim was that, far more often than I would like, I'm operating using a process that on reflection appears to be really dumb. The loop that I outlined is an example of such a process. I really do think it's a terrible flowchart to make decisions with. Yet sometimes I happen to be running it. If I can notice that I'm sometimes using that process, I can work to fix it. But it's often hard to notice that I'm running such a process, because I expected myself to be "too smart to fall for that."

comment by Dagon · 2018-03-27T19:38:26.970Z

Ah! I was taking "I'm not that dumb" at the object level of the disagreement, as preventing you from considering that your counterpart may be right. You meant it at the pattern level, where you didn't want to believe you were stuck in the loop in the first place.

That part makes sense, but I still think the underlying error is not considering that the disagreement may be unimportant without the person being unimportant, or that you might be wrong rather than them.

comment by Anthony Glaser (anthony-glaser) · 2018-03-27T20:03:42.664Z

What I see here is the file-drawer problem. You remember all those times you felt right and ended up actually being right, and conclude that you're right this time because you feel right. But you've ignored all the uncomfortable data points from the times you felt right but ended up being wrong. A TED speaker once said something like: "What does it feel like to be wrong? It feels like being right, and it feels good. What feels bad is discovering that you were wrong after the fact."