Rationality: What's the point?
post by Hazard · 2019-02-03T16:34:33.457Z · LW · GW · 11 comments
This post is part of my Hazardous Guide To Rationality. I don't expect this to be new or exciting to frequent LW people, and I would super appreciate comments and feedback in light of my intents for the sequence, as outlined in the above link.
A friend once articulated that he didn't like when things are taught "Mr. Miyagi style": a bunch of disconnected, unmotivated facts, exercises, and ideas are put before you, and it's only at the very end that it clicks and you see the hidden structure and purpose of everything you've learned.
Therefore, the very first post of this sequence is going to be a drive-by of what I think some of the cool/useful/amazing things are that you can get out of The Way. I never would have become a close-up magician if I hadn't seen someone do incredible things that blew my mind.
Who Is This For?
As much as it pains me to say this, it might not really matter whether or not you follow The Way. It really depends on what you're trying to do. The guy who kicked off the contemporary rationality community, Eliezer Yudkowsky, notes that besides a natural draw based on personality, the biggest reason he's invested in rationality is that he really wants to make sure Friendly AI happens before Unfriendly AI, and it turns out that's really hard.
[*add more*]
What's the Pot of Gold at the End of the Rainbow?
Things I claim you can get better at
- Believing true things and not believing false things.
- Arriving at true beliefs faster.
- "Failing mysteriously" less often
- Understanding how your own mind works.
Why some of the above things are awesome
- If you have something to protect (you really want to make certain things happen), better models, more true beliefs, update speed, and being confused by lies all make you more likely to make the changes you want to see in the world.
- If you get a kick out of more deeply grokking how the world around you works, a kick you will get.
- A lot of interpersonal problems come from two gaps:
- One between "How human minds work" and "How you think human minds work"
- One between "Your beliefs, feelings, and emotions" and "Your self-model of your beliefs, feelings, and emotions"
- Shrinking those gaps will result in fewer interpersonal problems.
11 comments
comment by Alex Vermillion (tomcatfish) · 2023-08-21T03:10:21.192Z · LW(p) · GW(p)
being confused by lies
This should say something more like "being confused by lies instead of taking them for sense", otherwise it sort of looks like it means "being confused away from the truth and into the lies" is a good thing
comment by Shmi (shminux) · 2019-02-03T18:58:39.183Z · LW(p) · GW(p)
Things I claim you can get better at
Believing true things and not believing false things.
Arriving at true beliefs faster.
"Failing mysteriously" less often
Understanding how your own mind works.
The first two are not even real things. There are no absolutely true or false beliefs, only useful and harmful ones. The last two have more merit. Certainly spending time on analyzing something, including your own mind, tends to increase one's knowledge of that something. I have also heard anecdotal evidence of people "failing mysteriously" less often, but I am not convinced that better understanding how your mind works makes you fail less, as opposed to fail less mysteriously. If anything, people who I see as succeeding more tend to talk about "post-rationality" instead of rationality.
Replies from: cousin_it, Hazard
↑ comment by cousin_it · 2019-02-04T20:30:32.422Z · LW(p) · GW(p)
I think there are useful and harmful beliefs, and also there are true and false beliefs. Similar to how there are big and small apples, and also red and green apples. If you say truth and usefulness are the same quality, that's a strong claim - can you give some argument for it?
Replies from: Dagon, shminux, TAG
↑ comment by Dagon · 2019-02-04T20:57:51.201Z · LW(p) · GW(p)
There are three important points on the continua for these properties of beliefs: useful, useless, and harmful on the benefit slider. The accuracy dimension runs from true (perfectly predictive of future experiences) through random (not correlated) to false (consistently wrong about future experiences).
Plus a bunch of things incorrectly called "beliefs" that don't predict anything, so aren't gradable on these axes.
Replies from: TAG
↑ comment by TAG · 2019-02-05T08:51:38.422Z · LW(p) · GW(p)
There is an important subset of beliefs which predict only if acted on, namely beliefs about how things should be.
Replies from: Dagon
↑ comment by Dagon · 2019-02-05T17:43:02.412Z · LW(p) · GW(p)
Conditional beliefs (if X then Y) are just normal predictions, with the same truth and usefulness dimensions. That's distinct from beliefs about how things should be (often called "values") - these _also_ have truth (will it actually be good if it happens?) and usefulness (is there a path to there?) dimensions, but they are somewhat different in application.
Replies from: TAG
↑ comment by TAG · 2019-02-05T19:02:02.290Z · LW(p) · GW(p)
Normative beliefs only have objective truth conditions if moral realism is true. But the model of an agent trying to realise its normative beliefs is always valid, however subjective they are. Usefulness, in turn, can only be defined in terms of goals or values.
↑ comment by Shmi (shminux) · 2019-02-05T08:30:58.379Z · LW(p) · GW(p)
I found the concepts of true and false to be quite harmful for rational discourse. People argue about what is true or not all the time without coming to an agreement. So I avoid those terms as much as possible. Usefulness is easier to determine and it is subjective, so there is less urge to argue about it.
Replies from: cousin_it
↑ comment by cousin_it · 2019-02-05T12:34:40.711Z · LW(p) · GW(p)
Yes, it's easy to determine that beliefs flattering the king are useful, while beliefs about invisible forces causing the twitching of dead frog legs when struck by a spark are quite useless. But to determine the usefulness of these two beliefs correctly, you need to predict a few centuries ahead. Truth is easier.
↑ comment by Hazard · 2019-02-04T03:25:22.887Z · LW(p) · GW(p)
Would you buy the claim you can "Be more right and get less wrong"? (asked because I feel like I'm pointing to the same thing as the first bullet, but the first bullet is not phrased super well)
On the question of "does understanding your mind make you fail less often": there are 3+ cases that immediately jump to mind that match "I didn't fail because I learned more about my mind". Do you think that in a lot of those cases I didn't fail for reasons other than understanding my mind, or do you expect that I'm racking up new, different failures as a result of understanding my mind more?
On post rationality, I just now read a bit more, and my reaction was, "Wait, wasn't that a key piece of rationality in the first place?" I'm interested to see if I've secretly always been of a post-rationalist persuasion, or if I'm not seeing the main angle of post-rationality.