Which rationality posts are begging for further practical development?

post by LoganStrohl (BrienneYudkowsky) · 2023-07-23T22:22:04.389Z · LW · GW · 1 comment

This is a question post.


Which posts seem to you like they have really important practical implications for how we ought to think and act, but don't seem to come with enough information about how to use the skills, or to develop them, or even what exactly the relevant skills are?

Context: I'd like to devote many of my hours in the upcoming few months to naturalist [? · GW] study of topics that are critical to fully understanding the most important existing resources in rationality. I mainly have my eye on posts from Eliezer's Sequences, but I'm also open to other essays, books, and even videos. I'll be taking extensive notes, and turning them into companion pieces to complement the original essays. (See "Lies, Damn Lies, and Fabricated Options [LW · GW]" and "Investigating Fabrication [LW · GW]" for an example of what I'm proposing.)

Answers

answer by Raemon · 2023-07-24T00:17:21.046Z · LW(p) · GW(p)

(I'm interpreting this in a "turn things into exercises" lens which may not have been what you meant but happens to be what I'm On About this week)

I feel like I have recently hit the level where "actually deeply grok Bayes mathematically" is plausibly a good step for me.
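
(As a toy illustration of the kind of calculation involved — made-up numbers, nothing from any particular post — here is a single Bayesian update sketched in Python.)

```python
# Toy single-step Bayesian update (all numbers made up for illustration).
prior = 0.01               # P(H): credence in the hypothesis before seeing evidence
p_e_given_h = 0.90         # P(E | H)
p_e_given_not_h = 0.05     # P(E | ~H)

# Bayes' theorem: P(H | E) = P(E | H) * P(H) / P(E)
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(f"{posterior:.3f}")  # ~0.154: strong evidence, yet the hypothesis is still probably false
```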

I second "Tuning Your Cognitive Strategies [LW · GW]". I just ran some workshops where I originally planned to teach it explicitly, and I ended up convinced that it was better to teach a somewhat more general meta-reflection exercise as the intro version of it.

Looking over the /bestoflesswrong [? · GW] page, some things that stick out:

Something in A Sketch of Good Communication [LW · GW] maybe should be operationalized as an exercise.

Babble [LW · GW] has already been turned into an exercise [LW · GW], but I think there's room to make exercises that are optimized a bit more for... also teaching relevant other useful skills? It felt like the Babble Challenge series was sort of unnecessarily whimsical in a way that was, like, cool if that whimsy was intrinsically motivating. (I think connecting it with your

I would like more introspection/focusing-ish practica that are... tailored more for research? (or, oriented in directions other than... self-help? [my shoulder-Logan doesn't like me labeling the area "self-help" and it doesn't feel quite right to me either but it's what I can quickly come up with])

Inadequate Equilibria could probably turn more exercise-ish although it's also maybe similar to a lot of startup advice that already exists

I think a lot of alignment research stuff should ideally turn into something exercise-y.

Teaching how to Notice Frame Differences [LW · GW], and also generally how to operationalize frames better in various directions. (See Shared Frames Are Capital Investments in Coordination [LW · GW] for one aspect, and Meta-rationality and frames [LW · GW]). Various Framing Practicums [? · GW].

Paper-Reading for Gears [LW · GW]

Gears-Level Models are Capital Investments [LW · GW] (I guess various stuff relating to forming gearsy models)

Integrity and accountability are core parts of rationality [LW · GW]

Heads I Win, Tails?—Never Heard of Her; Or, Selective Reporting and the Tragedy of the Green Rationalists [LW · GW]

Yes Requires the Possibility of No [LW · GW]

Rest Days vs Recovery Days [LW · GW]

To listen well, get curious [LW · GW]

The First Sample Gives the Most Information [LW · GW]

I have a feeling some combination of Radical Probabilism [LW · GW], Infra-Bayesianism [LW · GW], and other "post-Bayes" epistemologies would reward this kind of treatment too.

comment by Elizabeth (pktechgirl) · 2023-07-24T02:21:01.009Z · LW(p) · GW(p)

I've been teaching recently, and Babble was an obvious prerequisite to things like goal factoring and Hamming problems. It's pretty easy to teach, but I think even pretty marginal improvements would pay big dividends because it's upstream of so many things.

Replies from: BrienneYudkowsky
comment by LoganStrohl (BrienneYudkowsky) · 2023-08-16T01:16:57.607Z · LW(p) · GW(p)

I wrote up "How To Think Of Things" for CFAR a while back. I probably wanna at least edit it some before making it a top level post, but I'm curious what you think of it.

comment by trevor (TrevorWiesinger) · 2023-07-24T01:42:31.956Z · LW(p) · GW(p)

I feel like I have recently hit the level where "actually deeply grok Bayes mathematically" is plausibly a good step for me.

I generally think that, for all math-mind integration stuff, it's helpful for people to first look at Michael Smith's recent twitter rant about how math education conditioned virtually all humans to hate math.

Not just because undoing that conditioning is a critical step toward taking math and really feeling like working with it [LW · GW], but also because most of the people writing about math (e.g. textbooks, tutorials, etc.) either were themselves conditioned to hate math to some degree, or, even if they thoroughly weren't, learned mostly from material written by people who were, because nearly everyone in society went through the same long years of math in the education system.

comment by MondSemmel · 2023-07-24T07:04:57.268Z · LW(p) · GW(p)

(I think connecting it with your

This sentence cuts off.

Replies from: vitaliya
comment by vitaliya · 2023-07-24T08:09:24.860Z · LW(p) · GW(p)

yes - a perfect in-situ example of babble's sibling, prune

answer by Screwtape · 2023-07-27T23:47:46.526Z · LW(p) · GW(p)

On the unofficial 2022 LessWrong Census, I asked what people thought the most important lesson of rationality was. (The actual text of the question was "If you pick one lesson of rationality that everyone in the world would magically and suddenly understand, what lesson do you pick?") The top answers were Conservation of Expected Evidence [LW · GW], Making Beliefs Pay Rent [LW · GW], and Belief In Belief [LW · GW]. Conservation of Expected Evidence I personally recall as mindblowing when I first read it and reflected on it.
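
(For readers who haven't seen it: Conservation of Expected Evidence is the theorem that your current credence already equals the probability-weighted average of the credences you expect to end up with after seeing the evidence, so you can't expect an observation to shift you in a predictable direction. A toy numeric check, with made-up numbers:)

```python
# Conservation of Expected Evidence: the expected posterior equals the prior.
# (Made-up numbers; the identity holds for any valid choice of them.)
prior = 0.3                # P(H)
p_e_given_h = 0.8          # P(E | H)
p_e_given_not_h = 0.2      # P(E | ~H)

p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior_if_e = p_e_given_h * prior / p_e
posterior_if_not_e = (1 - p_e_given_h) * prior / (1 - p_e)

expected_posterior = p_e * posterior_if_e + (1 - p_e) * posterior_if_not_e
print(f"{expected_posterior:.3f}")  # 0.300 -- exactly the prior; no expected net update
```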

The Litany of Gendlin and the Litany of Tarski combine in my own head to be a really useful countercharm against ugh fields around finding out information about unpleasant things, though I'm not entirely sure that's the direction The Meditation on Curiosity [LW · GW] was supposed to take me. That, plus the offhand line that if you know what you'll think later you ought to think it now, has sped up a lot of decisions that I otherwise would have agonized over in a manner that was, in hindsight, mostly wasted motion. Those aren't really a single post though.

For a single post with obvious implications for how to act, but which is not sufficient on its own to start acting like that, Hero Licensing [LW · GW] might be my favourite pick. I don't know how to spark the skills involved in just trying things, other than being a Mysterious Old Wizard at people and hoping something catches, but it seems important. The Importance of Saying Oops [LW · GW] seems a lot more tractable as something to turn into an exercise, though.

answer by metachirality · 2023-07-23T22:55:16.627Z · LW(p) · GW(p)

I know nothing about naturalism, but cognitive tuning [LW · GW] (beyond just cognitive strategies) seems like it's begging to be expanded upon.

comment by trevor (TrevorWiesinger) · 2023-07-24T01:24:38.073Z · LW(p) · GW(p)

It definitely looks like something that, just on its own, could eventually morph into a written instruction manual for the human brain (i.e. one sufficiently advanced to enable people to save the world).

answer by Daniel Kokotajlo · 2023-07-25T06:41:18.035Z · LW(p) · GW(p)

I beg forgiveness for linking to one of my own posts; it's what I know most:

My version of Simulacra Levels [LW · GW] lays out some distinctions which I think are important. I would love to see a set of practical exercises or reminder-rituals someone could do, that trains one to understand the distinctions and apply them effortlessly in real life, so that knowing what simulacra level you are speaking on comes as easily as knowing whether you are telling the truth or lying, or knowing whether you are saying something liberal or conservative. And then for advanced students, exercises that help you know what level you are thinking on. 

 

answer by junk heap homotopy · 2023-07-24T01:12:16.277Z · LW(p) · GW(p)

The part in That Alien Message [LW · GW] and the Beisutsukai shorts that are about independently regenerating known science from scratch. Or zetetic explanations [LW · GW], whichever feels more representative of the idea cluster.

In particular, how does one go about making observations that are useful for building models? How does one select the initial axioms in the first place?

answer by trevor · 2023-07-24T01:47:55.596Z · LW(p) · GW(p)

I'm thirding Tuning Your Cognitive Strategies. 

I also strongly recommend TsviBT's please don't throw your mind away [LW · GW], since prioritizing the state of being "on a roll" and genuinely having a good time looks like a really tractable way to boost brainpower to a large degree, even if it only works on a somewhat predetermined range of topics for each person.

answer by JamesFaville · 2023-07-24T00:36:42.429Z · LW(p) · GW(p)

How to deal with crucial considerations [? · GW] and deliberation ladders (link goes to a transcript + audio).

answer by GdL752 · 2023-07-24T15:37:27.853Z · LW(p) · GW(p)

Regarding your example, Vaclav Smil has a lot of interesting books about the energy economy, or natural resources as a whole.

Exposing yourself to his ideas might open up some correlates or suggest a heuristic you hadn't thought of in that context. "The energy economy of rationalist thought processes," or something like that.

answer by zrezzed · 2023-07-24T02:39:13.791Z · LW(p) · GW(p)

I think this is a great goal, and I’m looking forward to what you put together!

This may be a bit different from the sort of thing you're asking about, but I'd love to see more development/thought around topics related to https://www.lesswrong.com/posts/XqmjdBKa4ZaXJtNmf/raising-the-sanity-waterline [LW · GW].

Rationality is certainly a skill, and better / more concise exposition on rationality itself can help people develop it. But once you learn to think right, what are some of the most salient object-level ideas that come next? How do we better realize values in the real world, and make use of / propagate these better ways of thinking? Why is this so hard, and what are strategies to make it easier?

SSC/ACX is a great example of better exploring object-level ideas, and I'd love to see more of that type of work pulled back into the community.

1 comment


comment by trevor (TrevorWiesinger) · 2023-07-24T01:50:47.240Z · LW(p) · GW(p)

This seems high-impact. The impression I get from the current state of rationality is that it's really decentralized, and it would be really easy and really helpful to amalgamate/distill a large portion of it into a more optimal combination of words.