What is the next level of rationality?

post by lsusr, Yoav Ravid · 2023-12-12T08:14:14.846Z · LW · GW · 24 comments

Contents

  What Came Before Eliezer?
  Tangent about Trolling as a core rationality skill
  Back to "What's the next level of rationality?"
24 comments

This is part 1 of our dialogue series [? · GW] on the question "What is the next level of rationality?".

lsusr

Yudkowsky published Go Forth and Create the Art! [LW · GW] in 2009. It is 2023. You and I agree that, in the last few years, there haven't been many rationality posts on the level of Eliezer Yudkowsky (and Scott Alexander). In other words, nobody has gone forth and created the art. Isn't that funny?

What Came Before Eliezer?

Yoav Ravid

Yes, we agreed on that. I remarked that there were a few levels of rationality before Eliezer. The one directly before him was something like the Sagan-Feynman style of rationality (whose fans often wore the label "Skeptics"). But that's mostly tangential to the point.

lsusr

Or perhaps it's not tangential to the point at all. Feynman was referenced by name in Harry Potter and the Methods of Rationality. I have a friend in his 20s who is reading Feynman for the first time. He's discovering things like "you don't need a labcoat and a PhD to test hypotheses" and "it's okay to think for yourself".

Yoav Ravid

How do you see it connecting to the question "What's the next level of rationality?"

lsusr

Yudkowsky is a single datapoint. The more quality perspectives we have about what "rationality" is, the better we can extrapolate the fit line.

Yoav Ravid

I see, so perhaps a preliminary to this discussion is the question "which level of rationality is Eliezer's?"?

lsusr

Yeah. Eliezer gets extra attention on LessWrong, but he's not the only writer on the subject of rationality. I think we should start by asking who's in this cluster we're pointing at.

Yoav Ravid

Alright, so in the Feynman-Sagan cluster, I'd also point to Dawkins, Michael Shermer, Sam Harris, Hitchens, and James Randi, for example. Not necessarily because I'm very familiar with their works or find them particularly valuable, but because they seem like central figures in that cluster.

lsusr

Those are all reasonable names, but I've never actually read any of their work. My personal list includes Penn Jillette. Paul Graham and Bryan Caplan feel important too, even though they're not branded "skeptic" or "rationality".

Yoav Ravid

I've read a bit, but mostly I came to the scene late enough, and found Eliezer and Scott quickly enough, that I didn't get the chance to read them deeply before then; and after I did, I didn't feel the need.

Yoav Ravid

Yep, and Paul Graham is also someone Eliezer respects a lot; I think he might even have been mentioned in the Sequences. I guess you could add various sci-fi authors to the list.

lsusr

Personally, I feel the whole thing started with Socrates. However, by the time I got around to cracking open The Apology, I felt like I had already internalized his ideas.

But I don't get that impression when I hang out with Rationalists. The median reader of Rationality: A-Z shatters under Socratic dialogue.

Yoav Ravid

I agree, though if we're trying to cut the history of rationality into periods/levels, then Socrates is a different (the first) period/level (though there's a sense in which he's been at a higher level than many who came after him).

Yoav Ravid

I think Socrates' brilliance came from realizing how little capacity to know they had at the time, and from fully developing the skill of not fooling himself. What others did after him was mostly develop the capacity to know, while not paying as much attention to not fooling themselves.

I think the "Skeptics" got on this journey of thinking better and recognizing errors, but were almost completely focused on finding them in others. With Yudkowsky the focus shifted inward in a very Socratic manner, to find your own faults and limitations.

Tangent about Trolling as a core rationality skill

lsusr

I've never heard the word "Socratic" used in that way. I like it.

Another similarity Yudkowsky has to Socrates is that they're both notorious trolls.

Yoav Ravid

That made me laugh. It's true. I remember stories from the Sequences of dialogues he had with people whom he basically trolled.

lsusr

And there's a good reason for it. Trolling your students is absolutely necessary when teaching rationality. I troll my students/friends all the time. When I visited the Lightcone offices in Berkeley, I trolled them too.

Yoav Ravid

Ah, I see that you have written about this [LW · GW].

lsusr

Do you know why trolling is so important?

Yoav Ravid

I'm not sure I understand exactly how you use the concept, so tell me why you think it's so important.

lsusr

I could explain this in simple words. But I think it would be more fun and more educational if I trolled you instead. Are you okay with that?

Yoav Ravid

Haha, sure :) 

lsusr

You've convinced me. Trolling is unethical. Rationalist teachers shouldn't do it. Let's move on.

Yoav Ravid

lol, I didn't say anything so I couldn't have convinced you of anything :)

Perhaps you've convinced yourself, but I bet you haven't and you're just trolling :)

lsusr

(:

lsusr

A rationalist must be skeptical of authority. Suppose you are a teacher of rationality, and therefore an authority figure. How do you ethically teach your students to be skeptical of you?

Yoav Ravid

As an educator I do think about that a lot. On the one hand, I want to tell the students everything I know that would be useful for them to know too; on the other hand, I want to account for the possibility that I'm wrong, so I need to develop their ability to scrutinize what I say and check whether it's actually true.

So some teachers solve this by sacrificing either the first or the second part, because doing both of them well is harder, and that's unfortunate.

When I was in school I had a teacher who was very good at combining these. She'd start a topic by giving us a passionate speech, which made us care and told us what she believed, but then she made us dig into the subject, read various reports, and come to our own conclusions. And it worked: many students did come to different conclusions.

I also think back to 'My Favorite Liar', where a teacher planted a falsehood in every lecture and told the students about it, so they would scrutinize his lectures to find the intentional error, but in the process also doubt and scrutinize everything else. And I guess you can call that a kind of trolling.

lsusr

Good. Very good!

lsusr

That is indeed a kind of trolling. After all, when a teacher is about to deceive you, she/he always lets you know in advance that you are about to encounter misinformation. That's how you know when you need to be skeptical.

lsusr

Do you understand?

Yoav Ravid

I think so. I suggested specifying "intentionally deceive you" and you rejected that. And I thought, but how can he let you know he's going to deceive you if he's not doing so intentionally? But since he might deceive you unintentionally all the time, he has to let you know in advance that you might be deceived and should be skeptical. Is that the idea?

lsusr

[Note to readers: There's a feature in the LessWrong dialogue interface where Yoav can suggest a change to what I wrote. Yoav did so. I rejected the change.]

lsusr

That is the idea.

Back to "What's the next level of rationality?"

lsusr

Getting back to our original question, "What's the next level of rationality? [after Eliezer]", one of the (many) things he didn't get around to writing about is how important it is for rationalists to troll each other.

lsusr

Feynman was a troll too, by the way.

Yoav Ravid

Absolutely, even more so than Eliezer, I think. "Surely You're Joking, Mr. Feynman" is one of the funniest books I've read.

lsusr

It's hilarious.

lsusr

Besides the importance of trolling, what are some other facets of rationality that Eliezer never got around to writing about?

Yoav Ravid

Well, I think the best place to start is the preface [LW · GW] Eliezer wrote in 2015 to 'Rationality: A-Z', where he lists 5 overarching errors he made in the sequences:

  1. Writing with the intention of helping people solve big, difficult, important problems, instead of helping them do better in their everyday lives.
  2. Focusing too much on how to learn the theory and not enough on how to practice it.
  3. Focusing too much on rational belief, too little on rational action.
  4. Not organizing the content in the sequences well (things are much better now with the new sequences and the LW wiki).
  5. Speaking plainly about the stupidity of what appeared to be stupid ideas, instead of writing more courteously.

I think the first 3 are relevant to our discussion.

Yoav Ravid

Some other points I'd add (some practical, some foundational/theoretical):

  1. The sequences and most of LW thereafter focused mainly on how to be more rational as an individual, and not on how to collaborate as rationalists or be more rational as a pair or a group.
  2. It overlooked the value of information in tradition (things like the Lindy effect, Chesterton's fence, etc.).
  3. Relatedly, it overlooked how many things, like certain biases, may actually be rational when analyzed more carefully or when our limitations are taken into account.
  4. It's based on Bayesianism, which is a bit like General Relativity: we know it's very much correct, but not fully, and there should be something after it that's even more correct. With Bayesianism, the problem is that it assumes logical omniscience and an observer standing outside the world (see the sketch after this list).
  5. Most of the foundational problems pointed out in the sequences — anthropic reasoning, reflective reasoning, strange loop circularity — haven't been solved. And though these aren't very relevant in day-to-day life, because they either don't come up or we have an intuition for the answer, these sure would be nice to solve, and it would show that rationality has firm foundations, for those who care about such things.
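To make the logical-omniscience point in (4) concrete, here's the standard Bayesian update over a generic hypothesis space $H_1, \dots, H_n$ and evidence $E$ (just the textbook formula, not anything specific to Eliezer's treatment):

$$P(H_k \mid E) = \frac{P(E \mid H_k)\, P(H_k)}{\sum_{i=1}^{n} P(E \mid H_i)\, P(H_i)}$$

Even this single update presupposes that the hypotheses have been enumerated, that the priors $P(H_i)$ are coherent, and that every likelihood $P(E \mid H_i)$ has been computed exactly. A bounded agent embedded in the world can't actually do that, which is the sense in which Bayesianism assumes logical omniscience and a view from outside.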

These have all been addressed to some degree since the sequences were written, of course, so this is not new or anything, and many of these ideas are in the "water supply" to an extent (especially the value of information in tradition).

But it shows that we have no rationalist canon that actually encompasses modern rationalist thought, which is something we would need to foster a new phase/level of rationality.

lsusr

This is very helpful. You're pointing at topics I've wanted to write about, but have been unsure of how to approach. For example, I want to write a post about the benefits of hypocrisy. (Most religious people are hypocrites. If you cure the hypocrisy, some may turn toward rationality, but others just end up as fundamentalist extremists.) It falls under your "overlooked value of tradition" umbrella.

But I think the most promising point might be "how to collaborate as rationalists or be more rational as a pair or a group". This wasn't so important when Eliezer was starting. After all, there was little community to coordinate. But I've been doing many Socratic dialogues, and often the first thing I have to do is teach my partner how to have a Socratic dialogue.

lsusr

That connects to "helping people do better in their everyday lives" and "[f]ocusing too much on how to learn the theory and not enough on how to practice it" too. 

Yoav Ravid

Yes, it was one of the first things that I wanted to write about on LW (I have a draft on pair rationality from January 2020), but I didn't feel I had a lot to say about it, and I didn't have anyone else in my personal life who's as interested in rationality as I am (still don't), so I didn't have the opportunity to develop that part on my own.

lsusr

It's pretty hard to develop the art of Socratic dialogue on your own. 😛😛

I've got a lot to say about Socratic dialogues but, as you pointed out, my writing is often very difficult for people to interact with.

I think the root problem is that when I'm writing for an abstract audience, I'm awful at guessing what readers will and won't understand. That's why I like these dialogues so much. I can just ask "Do you understand?"

Yoav Ravid

And it's working, I'm experiencing none of the difficulties I tend to experience with your writing.

lsusr

Then perhaps the next step of this rationality project is for you and me to do a Socratic dialogue about "how to do a Socratic dialogue".

Yoav Ravid

Alright, that sounds good. Let's pick it up from there next time [? · GW] :)

lsusr

(:

24 comments

Comments sorted by top scores.

comment by Yoav Ravid · 2023-12-12T08:49:17.842Z · LW(p) · GW(p)

Meta comment about the dialogue feature:

This was the first time I used the dialogue feature and it was a blast (much better experience than comment threads). Being able to see what the other person is writing as they write it, suggest edits, and swap things around is such a great user experience, and is so much closer to talking than any other form of written communication I've used thus far. I kinda wish I had the option to use this format in each of my chats (WhatsApp, Discord, etc.).

I loved how this allowed the conversation to be free-flowing, and took us on interesting tangents that we probably wouldn't have gone on otherwise. OTOH, this might make it worse to read. I personally haven't found any dialogue great to read yet, and it might be related to this quality, but it seems they are definitely great to have. So perhaps what's needed is just to go the extra step and distill the dialogue afterward.

Two other points:

One thing I noticed is that we very often wrote meta notes that we later deleted, and it may be nice to have a box on the side for meta discussion, so you can keep the main thread clean.

I think it would also be nice if we could do inline reacts while editing, to be easily able to mark agreement on something (Like you would nod your head or go "aha" in the middle of a sentence to show that you agree).

Replies from: SaidAchmiz, MondSemmel
comment by Said Achmiz (SaidAchmiz) · 2023-12-12T18:54:01.390Z · LW(p) · GW(p)

I loved how this allowed the conversation to be free-flowing, and took us on interesting tangents that we probably wouldn’t have gone on otherwise. OTOH, this might make it worse to read. I personally haven’t found any dialogue great to read yet, and it might be related to this quality

I strongly agree with this. I have also not found any dialogue great to read, and that is definitely because of this exact quality.

So perhaps what’s needed is just to go the extra step and distill the dialogue afterward.

That is definitely needed, but “just” is very much the wrong word to use here. Distilling a dialogue would end up providing most of the value to readers—much more value than the un-distilled dialogue. Unfortunately, it would also require considerable effort from the dialogue participants. It would, after all, be much like writing a regular post…

Replies from: Perhaps
comment by Perhaps · 2023-12-12T18:59:34.785Z · LW(p) · GW(p)

It's possible that with the dialogue written, a well prompted LLM could distill the rest. Especially if each section that was distilled could be linked back to the section in the dialogue it was distilled from.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-12-12T19:07:43.511Z · LW(p) · GW(p)

Sure, it’s possible. I don’t trust LLMs nearly enough to depend directly on such a thing in a systematic way, but perhaps there could be a workflow where the LLM-generated summary is then fed back to the dialogue participants to sign off on. That might be a very useful thing for either the LW team or some third party to build, if it worked.

comment by MondSemmel · 2023-12-12T10:51:02.530Z · LW(p) · GW(p)

Remember to link to this feedback on Intercom, to increase the chance that the LW team sees it.

Replies from: Yoav Ravid
comment by Yoav Ravid · 2023-12-12T10:57:08.293Z · LW(p) · GW(p)

Thanks for the reminder, I will :)

comment by Algon · 2023-12-12T13:02:25.409Z · LW(p) · GW(p)

I think there have been some attempts to describe a further level of rationality. They just haven't taken off. 

http://bewelltuned.com/ has been the most useful to me. Per bit, I'd say I prefer it to the Sequences. Though it is incomplete. Sadly, the author committed suicide after doing some crazy things to themselves. Raemon, who knows more of the details of their suicide than I do, says their suicide wasn't really related to the content of BeWellTuned (see the comments on this [LW · GW] post).

I've been impressed by what little of LoganStrohl [LW · GW]'s work on naturalism [? · GW] I've read. It also seems like it'd mesh nicely with some BWT techniques that I've been practicing.

And Cedric Chin has made great strides in improving his own instrumental rationality, especially in regard to business expertise. I found his notes on the literature on expertise very useful, both for some research I did and because they refined my understanding of how people get good at things.

comment by Vaniver · 2023-12-13T21:24:08.914Z · LW(p) · GW(p)

ctrl-f korz

hmmm

Replies from: Vaniver
comment by Vaniver · 2023-12-15T00:23:48.241Z · LW(p) · GW(p)

To explain: Alfred Korzybski, the guy behind General Semantics, is basically "rationality from 100 years ago". (He lived 1879-1950.) He's ~2 generations before Feynman (1918-1988), who was ~one before Sagan (1934-1996), then there's a 2-3 generation gap to Yudkowsky (1979-). (Of course if you add more names to the list, the gaps disappear; reordering your list, you get James Randi (1928-2020), Dawkins (1941-), Hitchens (1949-2011), Michael Shermer (1954-), and Sam Harris (1967-), which takes you from Feynman to Yudkowsky, basically.)

He features in Rationalism before the Sequences [LW · GW], and is interesting both because 1) you can directly read his stuff, like Science and Sanity, and 2) most of his stuff has already made it to you indirectly, from his students' students. (Yudkowsky apparently wrote the Sequences before reading any Korzybski directly, but read lots of stuff written by people who read Korzybski.)

There are, of course, figures before Korzybski, but I think the gaps get larger / it becomes less obviously "rationalism" instead of something closer to "science". 

Replies from: Yoav Ravid
comment by Yoav Ravid · 2023-12-15T06:25:44.726Z · LW(p) · GW(p)

Ah, of course!

Yeah, if we had gone for a full history of rationality we definitely would have mentioned him. We didn't because I don't think he had much of an influence on the "Skeptics" brand of rationality, which we talked about as the popular form of rationality before Eliezer. I think one of the things that distinguished Eliezer's form of rationality was that he integrated Korzybski's ideas into it.

comment by TAG · 2023-12-12T18:27:14.048Z · LW(p) · GW(p)

So nobody's interested in backtracking and fixing problems with the old stuff?

Replies from: sinclair-chen
comment by Sinclair Chen (sinclair-chen) · 2024-01-05T08:21:43.180Z · LW(p) · GW(p)

I don't think, like, re-editing AI to Zombies once again is valuable.

I do think, like, "come up with your own n virtues of rationality" is a good exercise. I think destruction & resynthesis could be more fruitful

comment by quetzal_rainbow · 2023-12-19T17:29:35.845Z · LW(p) · GW(p)

The problem here, I think, is that there is no new level of rationality in the sense of a qualitative change. Eliezer wrote down his knowledge, some unanswered questions, and his tentative answers, then went forth and created some of the Art, like functional decision theory. The rest is just a continuation of this work.

comment by Ape in the coat · 2023-12-12T19:03:17.932Z · LW(p) · GW(p)

Most of the foundational problems pointed out in the sequences — anthropic reasoning, reflective reasoning, strange loop circularity — haven't been solved

 

I'm currently writing a series of posts on anthropic reasoning with the ultimate goal of solving it once and for all.

How do you imagine a satisfying solution? What are the problems you would like to be addressed and questions to be answered?

Likewise, what are the issues with reflective reasoning and strange loop circularity?

Replies from: Yoav Ravid
comment by Yoav Ravid · 2023-12-13T06:31:36.200Z · LW(p) · GW(p)

I saw your series and I'm happy you're working on it. Unfortunately I'm not well versed enough in the subject (or probability in general) to say what a satisfying solution would look like, or exactly what problems and questions I would like to see addressed and answered. For the same reason I'm also not really able to evaluate your work. I wish it got more attention from people who are better versed in it.

Reflective Reasoning [? · GW] is something Eliezer and others wrote a lot about. "Strange loop circularity" is my name for something Eliezer gestured at a few times, which he called "Strange loops through the meta level". In Where Recursive Justification Hits Bottom [LW · GW] he justifies using Induction to justify induction and Occam's razor to justify Occam's razor, and says that it seems to him like it should be possible to formalize something that allows you to make valid "circular" reasoning like this, but still prevents invalid circular reasoning. I share his intuition, but don't have the capability to solve the problem. But if it is solved then it solves the Münchhausen trilemma, which is quite an annoying thorn.

Replies from: Ape in the coat
comment by Ape in the coat · 2023-12-13T14:28:58.386Z · LW(p) · GW(p)

In Where Recursive Justification Hits Bottom [LW · GW] he justifies using Induction to justify induction and Occam's razor to justify Occam's razor, and says that it seems to him like it should be possible to formalize something that allows you to make valid "circular" reasoning like this

 

Oh, so it was what I was thinking. Yeah, I've just been explaining how it all makes sense to a person on Astral Codex. I think Eliezer mostly solved the Münchhausen trilemma in the very same essay, or at least provided crucial insight for it. But an accurate and detailed explanation definitely wouldn't harm. As soon as I finished with anthropics, I'll try to provide it.

Replies from: Yoav Ravid
comment by Yoav Ravid · 2023-12-13T16:03:52.709Z · LW(p) · GW(p)

I think saying he "mostly solved" it goes too far, even he says so. But I definitely agree he provided crucial insight for it. I think I also added a bit in this comment [LW(p) · GW(p)].

As soon as I finished with anthropics, I'll try to provide it

Awesome. I hope people pay attention.

Btw here are the posts I can find where he talks about this:

And here he mentions it but doesn't talk primarily about it:

comment by metachirality · 2023-12-12T16:06:25.165Z · LW(p) · GW(p)

I'm really interested in a new Sequences. I don't think it would even be that hard to do; it's just not a thing that most rationalists find interesting in contrast to whatever else they're doing.

comment by Chris_Leong · 2023-12-12T10:08:58.328Z · LW(p) · GW(p)

I think that the next level after Eliezer would be the additions made by Scott Alexander. His most well-known posts are pretty much canon at this point.

This addresses (1) with the reviews of Seeing Like a State and The Secret of Our Success.

Replies from: Yoav Ravid
comment by Yoav Ravid · 2023-12-12T10:38:08.713Z · LW(p) · GW(p)

Well, yes and no. The Secret of Our Success was indeed one of the things I thought about when I wrote that some of this has been addressed. But a handful of blog posts on this one problem don't constitute a new level (a paradigm, if you wish). Most of his other posts that became canon don't really go beyond Eliezer's paradigm; they just expand it incredibly well.

We will know we've fully entered the new level/paradigm when we have a new canon that answers all of these questions (and probably a few more) to some degree of completeness (having a canon also points to the need for a certain level of consensus and common knowledge). The new level of rationality will be as distinct from Eliezer's level as Eliezer's level was distinct from the Feynman-Sagan level.

I think the informational value of tradition, and the progress-conservation tension, is indeed where we've come farthest, and we mostly just need to collect everything that was written and distill it so it can become part of a future canon. Next after that, I think, is our improved understanding of biases, but there's still some distance to go.

Other than that, I think we're quite far from a satisfying answer to the other problems, and so we're quite far from fully entering the next level.

comment by Mo Putera (Mo Nastri) · 2023-12-13T04:19:35.518Z · LW(p) · GW(p)

What do you think of David Chapman's stuff? I'm thinking of his curriculum sketch in particular. 

I don't think most rationalists were very excited by it though, e.g. Scott's brief look at it in 2013 (and David's response downthread) and an old comment thread I can no longer find between David and Kaj Sotala.

Replies from: lsusr
comment by lsusr · 2023-12-13T04:25:59.763Z · LW(p) · GW(p)

I don't plan to read David Chapman's writings. His website is titled "Meta-rationality". When I'm teaching rationality, one of the first things I have to do is tell students, repeatedly, to stop being meta.

Empiricism is about reality. "Meta" is at least one step away from reality, and therefore at least one step farther from empiricism.

Replies from: AnthonyC, Mo Nastri
comment by AnthonyC · 2023-12-19T15:20:44.806Z · LW(p) · GW(p)

Telling people to stop being meta is very important, but I think you may be misunderstanding the way in which Chapman is using the term. AFAICT it's really more about being able to step back from your own viewpoint and assumptions and effectively apply a mental toolbox and different mental stances to a problem that isn't trivial or already solved. Personally I've found it has helped keep me from going too meta in a lot of cases, by re-orienting my thinking to what's needed.

comment by Mo Putera (Mo Nastri) · 2023-12-14T16:36:52.148Z · LW(p) · GW(p)

Chapman's old work programming Pengi with Phil Agre at the MIT AI Lab seems to suggest otherwise, but I respect your decision not to read his writings; it mirrors my own, made after attempting and failing to grok him.