STRUCTURE: A Crash Course in Your Brain
post by Hazard · 2019-02-01T23:17:23.872Z · LW · GW
This post is part of my Hazardous Guide To Rationality. [? · GW] I don't expect this to be new or exciting to frequent LW people, and I would super appreciate comments and feedback in light of my intents for the sequence, as outlined in the above link. Also, note this is a STRUCTURE post; again, see the above link for what that means.
Intro
Talking about truth and reality can be hard. First, we're going to take a stroll through what we currently know about how the human mind works, and what the implications are for one's ability to be right.
Outline of main ideas. Could be a post per main bullet.
- The Unconscious exists
- There is "more happening" in your brain than you are consciously aware of
- S1 / S2 introduction (research if I actually recommend Thinking Fast and Slow as the best intro)
- Confabulation is a thing
- You have an entire sub-module in your brain which is specialized for making up reasons for why you do things. Because of this, even if you ask yourself, "Why did I just tip over that vase?" and get a ready answer, it is hard to figure out if that is a true reason for your behavior.
- By default, thoughts feel like facts.
- The lower-level a thought your brain produces, the less it feels like "a thing I think which could be true or false" and the more it feels like "the way the world obviously is, duh."
- Your intuitions do not have special magical access to the truth. They are sometimes wrong and sometimes right. But unless you pay attention, by default you are likely to believe them to be completely correct.
- We are Predictably Wrong
- You don't automatically know what your actual beliefs are.
- You also have the ability to say "I believe XYZ" while XYZ has no meaningful/consequential relation to the rest of your world model. You can also fail to notice that this is the case.
- Luckily, you do still have some non-zero ability to have anticipations/expectations about reality, and to have world models/beliefs.
- When beliefs are secretly decisions, not models. [LW(p) · GW(p)]
4 comments
comment by Raemon · 2019-02-02T02:26:45.490Z · LW(p) · GW(p)
Just wanted to say I love both the idea of this sequence as well as the approach you’re taking so far (ie high level bullet lists that you’ll revisit and flesh out over time)
comment by Hazard · 2019-02-03T16:58:49.213Z · LW(p) · GW(p)
Thanks! As of now, is there any tool for migrating comments from one post to another? I just revised my approach to this sequence in a way that I expect to reduce the need for that (more clearly separating structure and content posts), but I'd imagine it could be useful.