What rationality failure modes are there?

post by Ulisse Mini (ulisse-mini) · 2024-01-19T09:12:57.924Z · LW · GW · No comments

This is a question post.

Contents

  Answers
    13 Jan_Kulveit
    10 Gordon Seidoh Worley
    10 Chipmonk
    9 TAG
    4 Garrett Baker
    2 Joe_Collman
    2 Viliam
    1 gilch
    1 SilverFlame

How do people fail to improve their rationality? How do they accidentally harm themselves in the process? I'm thinking of writing a post "How not to improve your rationality" or "A nuanced guide to reading the sequences" that preempts common mistakes, and I'd appreciate hearing people's experiences. Some examples:

Answers

answer by Jan_Kulveit · 2024-01-20T00:07:47.140Z · LW(p) · GW(p)


- Placing too much value on, and giving too-positive feedback for, legibility. Replacing smart illegible computations with dumb legible stuff [LW · GW]
- Failing to develop actual rationality, focusing instead on cultivating the rationalist memeplex or rationalist culture
- Not understanding the problems with the theoretical foundations on which sequences are based (confused formal understanding of humans -> confused advice)

comment by Mo Putera (Mo Nastri) · 2024-01-22T04:26:38.692Z · LW(p) · GW(p)

Curious to see you elaborate on the last point, or just pointers to further reading. I think I agree in a betting sense (i.e. is Jan's claim true or false?) but don't really have a gears-level understanding.

answer by Gordon Seidoh Worley · 2024-01-19T21:00:14.054Z · LW(p) · GW(p)

Although dual process theory has its issues, folks have talked about the failure mode of prioritizing System 2 over System 1. The type of person who's likely to become a rationalist is already predisposed to do this, and rationality writing gives them lots of advice to prioritize S2 over S1 even more. And while S2 is extremely valuable, especially for the art of rationality, it can't function well unless it's integrated with S1, with S2 operating feedback loops that train S1 toward rationality; without S1 in the loop, S2 will always be disembodied.

The archetypal example is the category of what folks might call the Reddit Nerd: someone who lives on the computer, seems really smart, but has little to no success in life. They don't actually get the things they want because they live in their head and don't know how to take effective action, so they retreat to online forums and games (board games, MMOs, etc.) where they can achieve some measure of success without having to deal with S1.

answer by Chipmonk · 2024-01-19T17:23:49.001Z · LW(p) · GW(p)

I think the more general form of the emotions thing is: reductionism and "I can't understand it consciously, therefore it's not rational."

The counter is deep respect for Chesterton's Fence.

This is also how many people get into woo.

answer by TAG · 2024-01-19T22:33:27.341Z · LW(p) · GW(p)

Religiosity [LW · GW]. Only talking to other rationalists, only reading rationalist-approved material, treating senior rationalists as authority figures, rejecting critiques of rationalist thought out of hand.

answer by Garrett Baker · 2024-01-20T01:24:23.652Z · LW(p) · GW(p)

Thinking too much about what your priors should be at the expense of actually learning about how the world is. Thinking in order to get better priors is tempting, but most priors you could start with quickly get updated by evidence to the point of being no different from each other.

answer by Viliam · 2024-01-19T22:04:54.874Z · LW(p) · GW(p)

If you read about rationality and feel smarter, you have completely missed the lesson.

("This is great! I’m going to use it all the time!" -- 1)

("Oops is the sound we make when we improve our beliefs and strategies" -- 2)

answer by gilch · 2024-01-20T21:57:54.441Z · LW(p) · GW(p)

Cultivating epistemic rationality at the expense of instrumental rationality. They're both very important, but I think LessWrong has focused too much on the former. The explore-exploit tradeoff also applies to humans, not just to machine learning. Rationalists should be more agentic, applying what they've learned to the real world more than most seem to. Instead, cultivating too much doubt has broken our resolve to act.

comment by Mo Putera (Mo Nastri) · 2024-01-22T04:23:59.036Z · LW(p) · GW(p)

I'm not sure your last sentence is true, mainly because of selection bias: a fair proportion of the more instrumental folks are too busy actually doing work IRL to post frequently here anymore (e.g. Luke Muehlhauser, who I still sometimes think of as the author of posts like How to Beat Procrastination [LW · GW] instead of his current role).

answer by SilverFlame · 2024-01-20T17:02:29.861Z · LW(p) · GW(p)

The two failure modes I observe most often are not exclusive to rationality, but might still be helpful to consider.

  1. Over-reliance on System 2 processing for improvement and a failure to make useful and/or intentional adjustments to System 1 processing
  2. Failing to base value estimates on changes you can actually cause in reality, focusing instead upon "virtual" value categories rather than the ones you might systemically prefer (this is best presented in LoganStrohl's How to Hug the Query [LW · GW])
