I have never before tried explicitly writing rational ideas. So I tried: https://codingquark.com/2023/08/25/rational-laptop-insurance-calculation.html
What all did I do wrong? There are two obvious attack surfaces:
- That's not how the topic works!
- That's not how you write!
I will appreciate feedback; it will help me execute the Mission of Tsuyoku Naritai ;)
I was reading Obvious advice and noticed that when I'm overrun by emotions, in a hurry to make a decision, or for some other reason unable to articulate things verbally, I fail to see the obvious. During such times, I might even worry that whatever I'm seeing is not one of the obvious things: I might be missing something so obvious that the whole thing would have worked out differently had I thought of that one simple, obvious thing.
Introspecting, I feel that perhaps I am not exactly sure what this "obvious" even means. I am able to say "that's obvious", sometimes on the spot and sometimes in hindsight. But when I sit down and think about it, I come up with things like "what's obvious is what feels obvious!" and I am not really satisfied.
Can someone link me to resources to explore this topic further? A discussion here is appreciated as well.
Thank you!
Since I've been reading so much about guilt, I have been thinking about how many emotions I feel at once when something undesirable happens. It is no simple task for a human to handle such a huge set of variables. And yet somehow, these sequences are helpful.
Hey! I'm reading Lawful Uncertainty and Replacing Guilt, and once again listening to HPMOR. I started out reading Meditations on Moloch this weekend but got steered to Replacing Guilt. Replacing guilt is something I have not been able to help others with. So far, the tools suggested fit quite well with what I have figured out on my own, but I have never been clear enough to say "refinement, internalisation, realism". Thanks to Nate's clarity, there are many things here I had not thought about. I am having fun thinking about guilt with this much concreteness :D
What about you?
I stumbled upon LessWrong via AI & SlateStarCodex, but quickly noticed the rationality theme. My impression of rationality had been something Sheldon Cooper-esque (The Big Bang Theory), and I had put it aside as entertainment. I had read some of Eliezer's work earlier, such as Staring into the Singularity, and saw these things called "sequences" under the Library section. The first few posts I read made me think, "Oh! Here are my people! I get to improve!"
But of course the Library made it easy to notice HPMOR, and that's where I actually "began". I've listened to it twice so far. I have begun suggesting that friends give it to their kids in the rare cases where that's possible (language and general orientation being the primary barriers).
I grew up in Kutch. Looking back, I might have been an outlier as a kid, but then again, maybe not. I don't meet many "rationally oriented" people around here, and the few I do know, I know quite well.
It is great to have the Sequences, and the posts from all of you. I feel this is one of the rare places where I get to refine my thinking. I am going through the Sequences slowly. I noticed that if I actually talk the way the Sequences talk when reasoning with people (i.e., practicing rationality), they feel awkward. This has led me to a search space of sentences and analogies to use when I am talking to friends. It has not been tough in the past, but there are some more focused updates going on in my brain now, and saying things like "Is this discussion availability heuristic?" seems to put people off a bit. The process is great fun!
Thanks! And Hi!