Lsusr's Rationality Dojo

post by lsusr · 2024-02-13T05:52:03.757Z · LW · GW · 17 comments

Contents

  Why
    The problem
    The solution
  The right conditions
    The right topics
  Want to try it out?
    What to expect if you participate

Why aren’t there dojos that teach rationality?

The Martial Art of Rationality [LW · GW] by Eliezer Yudkowsky

For the last 6 months, I've been running a dojo that teaches rationality.

Why

I was at an ACX meetup and met an acolyte who grew up in an evangelical Christian community. He had recently discovered the Sequences and was really excited about this whole Rationality thing. He was very confident in Yudkowsky's teachings.

I asked him a couple questions and he realized his beliefs were full of holes. He wondered how he could have understood so little. After all, he had read all of Yudkowsky's Sequences.

"I have read 100 books about chess," I said, "Surely I must be a grandmaster by now."

At that moment, he was enlightened.

The problem

The objective of rationality is to become right instead of wrong. Being wrong feels exactly like being right. We are not aware of our own biases. We are not aware of our own mistakes. We are not aware of the lies we tell ourselves. This is almost a tautology.

Other people are not tautologically blind to our mistakes in the same way. The simplest way to become less wrong is to have someone else point out your mistakes to you. Except this doesn't actually work. If I say "I'm right," and you say "you're wrong", then we get nowhere. The more we argue, the more frustrated we get.

The solution

There is a better way. I call it rhetorical aikido. Rhetorical aikido is a Daoist form of Socratic dialogue. The simplest form of rhetorical aikido has three steps:

  1. You let someone confidently state a belief A that you know is wrong.
  2. You let that same someone confidently state a belief B that contradicts A.
  3. You let them notice that A contradicts B.

Examples:
[Embedded videos. I'm the guy in the dark green chair on your right.]

Notice that this technique follows Dale Carnegie's guidelines. You smile. You agree. You show genuine interest in the other person. You don't say "You're wrong". You never even say your own beliefs (unless asked). There's nothing for the person to get angry at because you never attacked them. Instead of criticizing, you point out errors indirectly, via a joke. You cheer them on as they dig their own grave. After all, you're trying to lose too.

Perhaps more importantly, this technique makes password-guessing [LW · GW] impossible. You're playing the bastard offspring of chess + Calvinball. There is no password to guess.

The right conditions

Rhetorical aikido is useful for defusing conflicts at family gatherings and the like. If you want to go even further and deprogram people, it's best to have the following conditions:

This whole thing started with off-the-record conversations with my friend Justin. It took a year of iterations to figure out what worked best. Conversations turned into unpublished audio recordings turned into unpublished video recordings turned into structured video dialogues. Eventually, after I recorded a video, a different friend asked me what I thought about rationality dojos.

"Welcome to Lsusr's rationality dojo," I replied, "Today is not your first day."

The right topics

I've had great conversations about economics, business, racism, homophobia, IQ, war, history, psychology, rationality, ethics, Buddhism, meditation, social skills, Israel, Hamas, antimemetics, and the Matrix.

Therapy and self-help are bad topics because they attract solipsists who talk about their problems instead of solving their problems.

The worst topics are "some people argue" and "someone else is wrong". Simulacra are a distraction from base reality. You must come to a consensus about base reality before discussing simulacra.

Want to try it out?

If you want to become stronger, PM me or send me an email with the following information:

Don't worry that you're not smart enough, haven't read enough Plato/Yudkowsky, etc. Earnestness and curiosity are more important. It doesn't matter if you're bad at public speaking either. This is how you get good at public speaking.

What to expect if you participate

This isn't a gotcha show. If you say "I don't want you to publish this part (or all) of the conversation," then I won't publish it. Just please do so before I edit the video, because editing takes a long time.

You don't have to be entirely on the receiving end of cross-examination, either. If you're already uncertain (or, better yet, confused) then it's fine to just ask me questions.

17 comments

Comments sorted by top scores.

comment by Kaj_Sotala · 2024-02-13T18:54:14.750Z · LW(p) · GW(p)

At that moment, he was enlightened.

I somehow felt fuzzy and nice reading this; it's so distinctly your writing style and it's nice to have you around, being you and writing in your familiar slightly quirky style. (It also communicated the point well.)

Replies from: lsusr
comment by lsusr · 2024-02-13T19:26:26.007Z · LW(p) · GW(p)

Thanks. ❤️

I stole that line from Eric Raymond who stole it from Zen.

comment by JenniferRM · 2024-02-15T22:17:20.814Z · LW(p) · GW(p)

This bit irked me because it is inconsistent with a foundational way of checking and improving my brain that might be enough by itself to recover the whole of the art:

Being wrong feels exactly like being right.

This might be true in some specific situation where a sort of Epistemic Potemkin Village is being constructed for you with the goal of making it true... but otherwise, with high reliability, I think it is wrong.

Being confident feels very similar in both cases, but being confidently right enables you to predict things at the edge of your perceptions and keep "guessing right" and you kinda just get bored, whereas being confidently wrong feels different at the edges of your perceptions, with blindness there, or an aversion to looking, or a lack of curiosity, or a certainty that it is neither interesting nor important nor good.

If you go confidently forth in an area where you are wrong, you feel surprise over and over and over (unless something is watching your mind and creating what you expect in each place you look). If you're wrong about something, you either go there and get surprised, or "just feel" like not going there, or something is generating the thing you're exploring.

I think this is part of how it is possible to be genre-savvy. In fiction, there IS an optimization process that IS laying out a world, with surprises all queued up "as if you had been wrong about an objective world that existed by accident, with all correlations caused by accident and physics iterated over time". Once you're genre-savvy, you've learned to "see past the so-called surprises to the creative optimizing author of those surprises".

There are probably theorems lurking here (none that I've seen on Wikipedia and checked for myself, but it makes sense) that sort of invert Aumann, and show that if the Author ever makes non-trivial choices, then an ideal Bayesian reasoner will eventually catch on.
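
To make that catch-on effect concrete, here is a toy sketch (illustrative Python with made-up surprise rates and a made-up prior, not anything derived from an actual theorem): an ideal reasoner tallies log-odds between "blind accident" and "Author", and whenever the Author's choices are non-trivial, the evidence drifts steadily toward the Author.

  import math, random

  random.seed(0)

  # P(surprise) under each hypothesis -- made-up illustrative numbers.
  p_blind, p_author = 0.5, 0.7
  log_odds = math.log(0.01 / 0.99)  # prior log-odds: start 99-to-1 against an Author

  for _ in range(200):
      surprise = random.random() < p_author  # this world actually has an Author
      if surprise:
          log_odds += math.log(p_author / p_blind)
      else:
          log_odds += math.log((1 - p_author) / (1 - p_blind))

  posterior = 1 / (1 + math.exp(-log_odds))
  print(f"P(Author | 200 observations) ~ {posterior:.3f}")  # approaches 1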

If creationism was true, and our demiurge had done a big complicated thing, then eventually "doing physics" and "becoming theologically genre-savvy" would be the SAME thing.

This not working (and hypotheses that suppose "blind mechanism" working very well) is evidence that either (1) naive creationism is false, (2) we haven't studied physics long enough, or (3) we have a demiurge and it is a half-evil fuckhead who aims to subvert the efforts of "genre-savvy scientists" by exploiting the imperfections of our ability to update on evidence.

(A fourth hypothesis is: the "real" god (OntoGod?) is something like "math itself". Then "math" conceives of literally every universe as a logically possible data structure, including our entire spacetime and so on, oftentimes almost by accident, like how our universe is accidentally simulated as a side effect every time anyone anywhere in the multiverse runs Solomonoff Induction on a big enough computer. Sadly, this is basically just a new way of talking that is maybe a bit more rigorous than older ways of talking, at the cost of being unintelligible to most people. It doesn't help you predict coin flips or know the melting point of water any more precisely, so like: what's the point?)

But anyway... it all starts with "being confidently wrong feels different (out at the edges, where aversion and confusion can lurk) than being confidently right". If that were false, then we couldn't do math... but we can do math, so yay for that! <3

Replies from: lsusr
comment by lsusr · 2024-02-15T23:52:38.747Z · LW(p) · GW(p)

How do you know that this approach doesn't miss entire categories of error?

Replies from: JenniferRM
comment by JenniferRM · 2024-02-18T16:26:26.590Z · LW(p) · GW(p)

I do NOT know that "the subjective feeling of being right" is an adequate approach to purge all error.

Also, I think that hypotheses are often wrong, but they motivate new careful systematic observation, and that this "useful wrongness" is often a core part of a larger OODA loop of guessing and checking ideas in the course of learning and discovery.

My claim is that "the subjective feeling of being right" is a tool whose absence works to disqualify at least some wrongnesses as "maybe true, maybe false, but not confidently and clearly known to be true in that way that feels very very hard to get wrong".

Prime numbers fall out of simple definitions, and I know in my bones that five is prime.

There are very few things that I know with as much certainty as this, but I'm pretty sure that being vividly and reliably shown to be wrong about this would require me to rebuild my metaphysics and epistemics in radical ways. I've been wrong a lot, but the things I was wrong about were not like my mental state(s) around "5 is prime".

And in science, seeking reliable generalities about the physical world, there's another sort of qualitative difference that is similar. For example, I grew up in northern California, and I've seen so many Sequoia sempervirens that I can often "just look" and "simply know" that that is the kind of tree I'm seeing.

If I visit other biomes, the feeling of "looking at a forest and NOT knowing the names of >80% of the plants I can see" is kind of pleasantly disorienting... there is so much to learn in other biomes!

(I've only ever seen one Metasequoia glyptostroboides that was planted as a specimen at the entrance to a park, and probably can't recognize them, but my understanding is that they just don't look like a coastal redwood or even grow very well where coastal redwoods naturally grow. My confidence for Sequoiadendron giganteum is in between. There could hypothetically be a fourth kind of redwood that is rare. Or it might be that half the coastal redwoods I "very confidently recognize" are male and half are female in some weird way (or maybe 10% have an even weirder polyploid status than you'd naively expect?) and I just can't see the subtle distinctions (yet)? With science and the material world, in my experience, I simply can't achieve the kind of subjective feeling of confident correctness that exists in math.)

In general, subjectively, for me, "random ass guesses" (even the ones that turn out right (but by random chance you'd expect them to mostly be wrong)) feel very very different from coherently-justified, well-understood, broadly-empirically-supported, central, contextualized, confident, "correct" conclusions because they lack a subjective feeling of "confidence".

And within domains where I (and presumably other people?) are basically confident, I claim that there's a distinct feeling which shows up in one's aversions to observation or contemplation about things at the edge of awareness. This is less reliable, and attaching the feelings to Bayesian credence levels is challenging and I don't know how to teach it, and I do it imperfectly myself...

...but (1) without subjective awareness of confidence and (2) the ability to notice aversion (or lack thereof) to tangential and potentially relevant evidence...

...I wouldn't say that epistemic progress is impossible. Helicopters, peregrine falcons, F-16s, and bees show that there are many ways to fly.

But I am saying that if I had these subjective senses of confidence and confusion lesioned from my brain, I'd expect to be, mentally, a bit like a "bee with only one wing" and not expect to be able to make very much intellectual progress. I think I'd have a lot of difficulty learning math, much less being able to tutor the parts of math I'm confident about.

(I'm not sure if I'd be able to notice the lesion or not. It is an interesting question whether or how such things are neurologically organized, and whether modular parts of the brain are "relevant to declarative/verbal/measurable epistemic performance" in coherent or redundant or complementary ways. I don't know how to lesion brains in the way I propose, and maybe it isn't even possible, except as a low resolution thought experiment?)

In summary, I don't think "feeling the subjective difference between believing something true and believing something false" is necessary or sufficient for flawless epistemology, just that it is damn useful, and not something I'd want to do without.

comment by UnderTruth · 2024-02-13T22:06:16.993Z · LW(p) · GW(p)

I think I am unclear on whether this approach differs from a more traditional "Socratic" style dialogue, and if so, in what ways. Could you clarify?

Another thought that this post brings out is that while I think techniques of this sort are useful in a number of ways, even beyond the direct dialogue itself (for example, in practicing the kind of lateral and analogy-based thinking required to fluidly keep up with the conversation while maintaining this style), there is clearly a limited set of opportunities for which they are suitable. Do you know of any existing "taxonomy" of conversational methods, classified with respect to the circumstances in which they are most effective?

Replies from: lsusr
comment by lsusr · 2024-02-13T22:40:20.843Z · LW(p) · GW(p)

I was wondering how long it would take for someone to ask these questions. I will paraphrase a little.

How does rhetorical aikido differ from well-established Socratic-style dialogue?

Socratic-style dialogue is a very broad umbrella. Pretty much any question-focused dialogue qualifies. A public schoolteacher asking a class of students "What do you think?" is both "Socratic" and ineffective at penetrating delusion.

The approach gestured at here is entirely within the domain of "Socratic"-style dialogue. However, it is far more specific. The techniques I practice and teach are laser-focused on improving rationality.

Here are a few examples of techniques I use and train, but which are not mandatory for a dialogue to be "Socratic":

  • If, while asking questions, you are asked "what do you believe" in return, you must state exactly what you believe.
  • You yield as much overt frame to the other person as possible. This is especially the case with definitions. In all but the most egregious situations, you let the other person define terms.
  • There are basic principles about how minds work that I'm trying to gesture at. One of my primary objectives in the foundational stages is to get students to understand how the human mind lazily [in the computational sense of the word "lazily"] evaluates beliefs and explanations; a code sketch of this appears at the end of this comment. Socrates himself was likely aware of these mechanics but, in my experience, most teachers using Socratic methods are not aware of them.
  • I use specific conversational techniques to draw attention to specific errors. Which brings us to….

Is there any existing "taxonomy" of conversational methods, classified with respect to the circumstances in which they are most effective?

It depends on your goal. There are established techniques for selling things, seducing people, telling stories, telling jokes, negotiating, and getting your paper accepted into an academic journal. Truth in Comedy: The Manual of Improvisation is a peerless manual for improvisation. But it's not a rationalist handbook.

I have been assembling a list of mistakes and antidotes in my head, but I haven't written it down (yet?).

Here are a few quick examples.

  • The way to get an us-vs-them persuasion-oriented rambler to notice they're mistaken is via an Intellectual Turing Test. If they're a Red and assume you're a Blue, then you let them argue about why the Blues are wrong. After a while, you ask "What do you think I believe?" and you surprise them when they find out you're not a Blue. They realize they've wasted their reputation and both of your time. One of my favorite sessions with a student started with him arguing against the Blues. He was embarrassed to discover that I wasn't a Blue. Then he spent an hour arguing about why I was wrong for being a Green. The second time I asked "What do you think I believe?" was extra satisfying, because I had already warned him of the mistake he was making.
  • If someone is making careless mistakes because they don't care about whether they're right or wrong, you ask if you can publish the dialogue on the Internet. The earnest people clean up their act. The disingenuous blowhards slink away.
  • If someone does a Gish gallop, you ask them to place all their chips on the most important claim.
  • If someone says "Some people argue X," you ask "Do you argue X?" If yes, then they now have skin in the game. If no, then you can dismiss the argument.
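
Here is the sketch mentioned above: a minimal illustration of lazy evaluation in the computational sense (illustrative Python, not dojo material). A lazily held value is never computed when it is defined, only when something forces it, which is the analogy to a belief whose justification goes unexamined until a question forces its evaluation.

  # Lazy evaluation: wrap a computation so it runs only when forced.
  def lazy(thunk):
      cache = {}
      def force():
          if "value" not in cache:
              cache["value"] = thunk()  # evaluated at most once, on demand
          return cache["value"]
      return force

  # A "belief" whose justification is never examined until questioned.
  belief = lazy(lambda: all(n % 2 == 0 for n in [2, 4, 6, 7]))

  # Holding the belief costs nothing; nothing has been checked yet.
  print(belief())  # False -- the hole appears only when evaluation is forced
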
Replies from: UnderTruth
comment by UnderTruth · 2024-02-16T22:21:06.887Z · LW(p) · GW(p)

Thank you for your reply and further explanation. Your examples are helpful, and on thinking about them, I'm led to wonder how these & other "techniques" serve the distinct goals of "Trying to arrive at The True Answer", "Trying to show this person that they have incoherent beliefs, because they have failed to properly examine them", and "Trying to converse in a manner that will engage this person, so that it has some real, hopefully positive, effect for them" -- and possibly others.

comment by abstractapplic · 2024-02-15T20:42:56.347Z · LW(p) · GW(p)

The objective of rationality is to become right instead of wrong.


I think this is technically false, in a subtle but important way. If I gained [knowledge of whether every six-digit number is prime] in exchange for [knowledge of whether wandering out into open traffic is a good idea], I'd have gleaned a net 899999 bits of right-ness (one bit for each of the 900,000 six-digit numbers, minus the one bit lost), but it still wouldn't have been a worthwhile deal, or made me more rational in any practical sense. The missing gears are becoming right about important && relevant things, bothering to apply that knowledge, and - conditional on applying it at all - applying it well.

I think this project is good (Like, unusually good! It's a step forward! I enjoyed it, and I commend you for your service to the Cause!), but I notice a lack of emphasis on changing actions vs changing minds, both in this post and in the videos I watched, and I want to make sure you've noticed that too.

(And yes, I do recognize the irony of me pointing out a true thing about [pointing out true things without having an associated practical outcome] without having an associated practical outcome. Still think it's worth saying!)

Replies from: lsusr
comment by lsusr · 2024-02-15T22:16:51.301Z · LW(p) · GW(p)

The points you bring up are subtle and complex. I think a dialogue would be a better way to explore them rather than a comment thread. I've PM'd you.

comment by Mikhail Samin (mikhail-samin) · 2024-02-19T22:33:46.791Z · LW(p) · GW(p)

A more knowledgeable person can see holes regardless of who's right, so training people to defer to what a teacher communicates just because the teacher seems smart and can point out flaws seems wrong.

You smile. You agree. You show genuine interest in the other person. You don't say "You're wrong". You never even say your own beliefs (unless asked). There's nothing for the person to get angry at because you never attacked them. Instead of criticizing, you point out errors indirectly, via a joke. You cheer them on as they dig their own grave. After all, you're trying to lose too.

This is something that allows you to persuade people. If you have more background knowledge about something and can say something that makes the person you're talking to think you pointed out a flaw/a hole in their understanding of the issue, they might defer to you, thinking you're smarter and that you're helping. If, instead of asking "What do you think? Why do you think that?" and letting the person think on their own, you ask questions that communicate your understanding, then I'm not sure this actually improves their thinking or even lets them arrive at truer beliefs in a systematic way.

If your beliefs are false, they’ll update to your false beliefs; if your models are incomplete, they’ll believe in these incomplete models and won’t start seeing holes in them.

In the second video, you didn't ask the person where the money comes from, where it goes, who's better off, and who's worse off; they didn't try to draw any diagrams and figure this out for themselves. Instead, they listened to you and agreed with what you communicated to them. They didn't have the thought that if someone builds a cable, they must expect profits to cover the cost, despite someone else possibly trying to build a cable. They didn't think about how the money going into building a cable doesn't disappear; it remains in the economy, through wages and the costs of everything paid to everyone involved; the actual resources humanity spends on a cable are perhaps some fuel, some amount of material, and human time. Was it unethical to spend those resources that way? What does "unethical" even mean here? Was someone hurt during the construction? Did people decide to take a worker's job instead of doing art? What about trading itself: what are the positive and negative externalities, and what are the resources spent by humanity as a whole? What is the pot everyone competes for? Are they spending more resources to compete for it than the pot contains, or are they just eating all the free money on the table? Do they provide something valuable to the market, getting this pot in return? (Perhaps liquidity, or a lot of slightly more up-to-date information?)

I have no idea how any of this works, but to me it looked like you made your arguments in a persuasive way; my impression is that the conversation in the second video didn't really improve the general thinking/rationality skills of the person you were talking to.

Replies from: lsusr
comment by lsusr · 2024-04-22T21:00:14.473Z · LW(p) · GW(p)

The way I look at things, there are multiple steps to learning how to think better. The first step is realizing that your thoughts are an incoherent mess. Then you develop a taste for good reasoning. After that you can learn good thinking skills.

Whereas if you start by trying to learn good thinking skills, then it's very easy to say things that sound correct but are actually unsound.

I like to start from the beginning, and then take as long as is necessary with each step.

comment by wachichornia · 2024-02-15T14:56:57.196Z · LW(p) · GW(p)

Could a basic version of this, one that could help many people with their reasoning, easily be set up as a GPT?

I tried it:

https://chat.openai.com/g/g-x4ryeyyCd-rationalist-dojo

But I'm still unhappy with what I am getting. If you have a good prompt for finding inconsistencies in your reasoning, please share it!

Replies from: lsusr
comment by lsusr · 2024-02-15T16:44:52.890Z · LW(p) · GW(p)

I tried that too. It didn't work on my first ~1 hour attempt.

comment by Mikhail Samin (mikhail-samin) · 2024-02-19T22:52:37.095Z · LW(p) · GW(p)

"I have read 100 books about chess," I said, "Surely I must be a grandmaster by now."

A nice argument; but looking back at it a second time, I think I actually expect someone who's read 100 books on how to play chess to be better than me at chess. I expect someone who's read the Sequences to be significantly better than baseline at being sane and to at least share some common assumptions about important things that would allow for more productive communication. Even if one doesn't have the skills to notice flaws in their own thinking, reading the Sequences significantly increases the chance they'll approach a bunch of stuff well, or, if specific flaws are pointed out, will notice and try to correct them. (E.g., even if they can't notice that an argument is about definitions, if you point this out, they'll understand it; if they updated towards some belief after an event even though it happens just as often, relatively, in worlds where it's true as in worlds where it's false, they might understand why they should roll back the update.)

Being increasingly good at rationality means being wrong less and less. It doesn't mean immediately ceasing to have any holes in your beliefs. Noticing holes in your beliefs takes time and practice and reflection, and the skill of it is, indeed, not automatically downloaded from the Sequences. But it's not really about holes in models at a moment in time; it's about whether the models predict stuff better as time passes.

I guess my point is that people shouldn't feel bad about having holes in their beliefs or understanding "little" after reading the Sequences. It's the derivative that matters.

comment by RedMan · 2024-02-17T05:01:51.570Z · LW(p) · GW(p)

https://www.sciencedirect.com/science/article/abs/pii/S0091674923025435

Check it out, obesity can be treated with a vaccine.

They used the AAV vector that the J&J/AstraZeneca vaccines used to encode a hormone that naturally occurs in the body, shot it into fat mice, and the fat mice started excreting all their visceral fat as sebum (so they got greasy hair).

Obesity is a public health emergency, there is no lasting treatment, diet and exercise don't work for most people.  This study used more mice than the vaccine booster study did, so I think it's enough to justify an emergency use authorization, and start putting it into arms.

Also, fat people are a burden on society, they're selfish, gluttonous, require weird special engineering like large seats, and are just generally obnoxious, so anyone who is at risk of obesity (which is everyone) should be mandated to get the anti-fat shot, or be denied medical care for things like organ transplants.


Am i doin it rite?

comment by zoop · 2024-02-16T00:07:54.146Z · LW(p) · GW(p)

I don't think it works if there isn't a correct answer, e.g. predicting the future, but I'm positive this is a good way to improve how convincing your claims are to others.

If there isn't ground truth about a claim to refer to, any disagreement around a claim is going to be about how convincing and internally/externally consistent the claim is. As we keep learning from prediction markets, rationales don't always lead to correctness. There are many cases of good heuristics (priors) doing extremely well.

If you want to be correct, good reasoning is often a nice-to-have, not a need-to-have.