A 'Practice of Rationality' Sequence?

post by abramdemski · 2020-02-14T22:56:13.537Z · LW · GW · 17 comments

This is a question post.

(This is not your typical factual-answer question, but I think it makes sense to format this as a question rather than a post.)

TLDR: Recommend some posts for a "practice of rationality" sequence I want to curate! Proposing posts that should exist but don't is also cool.

I've been thinking recently that it would be nice if rationality were more associated with a practice -- a set of skills which you can keep grinding and leveling up. Testable rationality skills (like accuracy or calibration in forecasting) are obviously a plus, but I'm not referring exclusively to this -- some very real things can be hard to evaluate externally, such as emotional wellness.

A model I have in mind is meditation: meditation is easy to "grind" because the meditator gets constant immediate feedback about how well they're focusing (or at least, they get that feedback if they meet a minimum of focus required to keep track of whether they are focusing). Yet it's quite difficult to evaluate progress from the outside.

(In fact, when I mentioned this desire for a "practice" of rationality to one friend, they were like "I agree, and in fact I think the practice should just be insight meditation.")

This is basically reiterating Brienne's call for tortoise skills (see also [LW · GW]), except what I want to do is collect proposed things which could be part of a practice.

Obviously, some CFAR content could already qualify. CFAR doesn't exactly teach it that way -- as far as I've observed, CFAR's focus is on mindset interventions. "Mindset intervention" is the fancy psychology term for getting someone to think differently by having them do something once. For example, the point of "growth mindset" interventions is that you explain it once and this has a long-lasting impact on someone's behavior. Another mindset intervention is: you ask people to write about what matters to them. Doing this once has been shown to have long-term effects.

In my first CFAR experience (which was an MSFP, fwiw), the phrase "It's not about the exercises!" was kind of a motto. It was explained at the beginning that CFAR teaches exercises not because people learn the exercises and then go out and use the exercises, but rather, going through the exercises a few times changes how you think about things. (The story was that people often go to a CFAR workshop and then improve a bunch of things in their life, but say "but I haven't been doing the exercises!".)

But many of the things CFAR teaches could be used as a practice, and (again referring to my first CFAR experience) CFAR does do some things which encourage you to look at them that way, like the follow-up emails which encourage you to overlearn one exercise per week (practicing that one thing a bunch so that it becomes an automatic mental motion).

Another example pointing at what I want here is bewelltuned.com. The content may or may not be right, but the sort of thing seems exactly right to me -- actionable skills you can keep working on regularly after getting simple explanations of how to do it. And furthermore, the presentation seems exactly right. LessWrong has a tendency to focus on wordy explanations of intellectual topics, which is great, but the bewelltuned style seems like an excellent counterbalance.

I'm using the "question" format so that answers can recommend specific things (perhaps represented by existing LW posts, perhaps not), whereas comments can discuss this more broadly (such as what more general criteria should be applied to filter suggestions, or whether this is even a good idea). The answer list here could serve as a big repository. I'll probably create a sequence which can be my own highly opinionated curation of the suggestions here, plus my own writing on the subject.

I originally intended Becoming Unusually Truth Oriented [LW · GW] to be the start of a sequence on the subject written entirely by me. However, some resulting discussion made me question my approach (hence the motivation for this question).

One friend of mine (going off of some of the discussion in comments to that post) voiced a concern about the rationality community falling into the same pitfalls as martial arts. Several articles about this have been written on LW. (I'm not finding all the ones I remember! If you put links to more of them in the comments I'll probably edit this to add them.) The concern is that a martial art of rationality [LW · GW] could lead to the same kinds of epistemic viciousness [LW · GW] which are seen in literal martial arts -- a practice divorced from reality due to the constraints and incentives of training/teaching.

That same friend suggested that the solution was to focus on empirically verifiable skills, namely forecasting. But in the in-person rationalist community in the Bay Area, I've encountered some criticism of an extreme focus on forecasting which suggests that it's making the very mistake we're afraid of here -- Goodharting on the problem. One person asked me to give any examples of Superforecasting [LW · GW]-like skills resulting in actual accomplishments, suggesting that planning is the far more valuable skill and differs significantly from forecasting. Another person recounted their experience sitting down with several other rationalists to learn superforecasting skills. It was a group of rather committed and also individually competent rationalists, but they quickly came to the conclusion that while they could put in the effort to become much better at forecasting, the actual skills they'd learn would be highly specific to the task of winning points in prediction tasks, and they abandoned the project, concluding that it would not meaningfully improve their general capability to accomplish things!!

So, this seems like a hard problem.

What could/should be a part of a 'practice' of rationality?

Answers

answer by Raemon · 2020-02-16T01:11:04.165Z · LW(p) · GW(p)

I started writing out some notes on my current impressions of the "rationality skill tree". Then I had a vague sense of having written it before. It turned out to be background thoughts on why doublecrux is hard to learn, which (surprise!) I also thought were key background skills for many other rationality practices. 

I haven't rewritten this yet to be non-double-crux-centric, but I think that'd be good to do someday. (The LW team has been chatting about wikis lately, and this feels like something I'd eventually want written up in a way that others could easily add to collaboratively.)

Background beliefs (listed in Duncan's original post)

  • Epistemic humility ("I could be the wrong person here")
  • Good Faith ("I trust the other person to be believing things that make sense to them, which I'd have ended up believing if I were exposed to the same stimuli, and that they are generally trying to find the truth")
  • Confidence in the existence of objective truth
  • Curiosity / Desire to uncover truth

Building-Block and Meta Skills

(Necessary or at least very helpful to learn everything else)

  • Ability to gain habits (see Trigger Action Plans, Reflex/Routines, Habits 101)
  • Ability to notice things (there are many types of things worth noticing, but most-obviously-relevant are)
    • cognitive states
    • ways-that-ideas-fit-together
    • physiological states
    • conversational patterns
    • felt senses (see focusing).
  • Ability to introspect and notice your internal states (Focusing)
  • Ability to induce a mental state or reframe [note: alas, the original post here is gone]
  • Habit of gaining habits

Notice you are in a failure mode, and step out. Examples:

  • You are fighting to make sure a side/argument wins
  • You are fighting to make another side/argument lose (potentially jumping on something that seems allied to something/someone you consider bad/dangerous)
  • You are incentivized to believe something, or not to notice something, because of social or financial rewards,
  • You're incentivized not to notice something or think it's important because it'd be physically inconvenient/annoying
  • You are offended/angered/defensive/agitated
  • You're afraid you'll lose something important if you lose a belief (possibly 'bucket errors')
  • You're rounding a person's statement off to the nearest stereotype instead of trying to actually understand and respond to what they're saying
  • You're arguing about definitions of words instead of ideas
  • Notice "freudian slip" ish things that hint that you're thinking about something in an unhelpful way. (for example, while writing this, I typed out "your opponent" to refer to the person you're Double Cruxing with, which is a holdover from treating it like an adversarial debate)

(The "Step Out" part can be pretty hard and would be a long series of blogposts, but hopefully this at least gets across the ideas to shoot for)

Social Skills (i.e. not feeding into negative spirals, noticing what emotional state or patterns other people are in [*without* accidentally rounding them off to a stereotype])

  • Ability to tactfully disagree in a way that arouses curiosity rather than defensiveness
  • Leaving your colleague a line of retreat (i.e. not making them lose face if they change their mind)
  • Socially reward people who change their mind (in general, frequently, so that your colleague trusts that you'll do so for them)
  • Ability to listen (in a way that makes someone feel listened to) so they feel like they got to actually talk, which makes them inclined to listen as well
  • Ability to notice if someone else seems to be in one of the above failure modes (and then, ability to point it out gently)
  • Cultivate empathy and curiosity about other people so the other social skills come more naturally, and so that even if you don't expect them to be right, you can still see it as worthwhile to at least understand their reasoning (fleshing out your model of how other people might think)
  • Ability to communicate in (and to listen to) a variety of styles of conversation, "code switching", learning another person's jargon or explaining yours without getting frustrated
  • Habit of asking clarifying questions that help your partner find the Crux of their beliefs.

Actually Thinking About Things

  • Understanding when and how to apply math, statistics, etc
  • Practice thinking causally
  • Practice various creativity related things that help you brainstorm ideas, notice implications of things, etc
  • Operationalize vague beliefs into concrete predictions

Actually Changing Your Mind

  • Notice when you are confused or surprised and treat this as a red flag that something about your models is wrong (either you have the wrong model or no model)
  • Ability to identify what the actual Cruxes of your beliefs are.
  • Ability to track small bits of evidence that are accumulating (a toy numerical sketch follows this list). If enough bits of evidence have accumulated that you should at least be taking an idea *seriously* (even if not changing your mind yet), go through the motions of thinking through what the implications WOULD be, to help future updates happen more easily.
  • If enough evidence has accumulated that you should change your mind about a thing... like, actually do that. See the list of failure modes above that may prevent this. (That said, if you have a vague nagging sense that something isn't right even if you can't articulate it, try to focus on that and flesh it out rather than trying to steamroll over it)
  • Explore Implications: When you change your mind on a thing, don't just acknowledge, actually think about what other concepts in your worldview should change. Do this
    • because it *should* have other implications, and it's useful to know what they are....
    • because it'll help you actually retain the update (instead of letting it slide away when it becomes socially/politically/emotionally/physically inconvenient to believe it, or just forgetting)
  • If you notice your emotions are not in line with what you now believe the truth to be (on a system-2 level), figure out why that is.
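
A toy numerical sketch of what that evidence-tracking could look like, assuming you score each observation as a likelihood ratio and add up log-odds in bits (the prior and the ratios are made-up illustrations, not anything prescribed above):

    # Toy bookkeeping for "tracking small bits of evidence": each observation
    # contributes log2 of its likelihood ratio (its strength in bits), and the
    # running total tells you when a hypothesis deserves to be taken seriously.
    import math

    prior_prob = 0.10                           # made-up starting credence in H
    likelihood_ratios = [2.0, 1.5, 3.0, 1.2]    # made-up strengths of things noticed

    log_odds_bits = math.log2(prior_prob / (1 - prior_prob))
    for lr in likelihood_ratios:
        log_odds_bits += math.log2(lr)          # accumulate evidence a bit at a time

    posterior_odds = 2 ** log_odds_bits
    posterior_prob = posterior_odds / (1 + posterior_odds)
    print(f"accumulated {log_odds_bits:+.2f} bits -> P(H) is now {posterior_prob:.0%}")

If the running total crosses whatever threshold you care about, that's the cue to go through the motions of thinking through what the implications would be, even before fully changing your mind.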

Noticing Disagreement and Confusion, and then putting in the work to resolve it

  • If you have all the above skills, and your partner does too, and you both trust that this is the case, you can still fail to make progress if you don't actually follow up, and schedule the time to talk through the issues thoroughly. For deep disagreement this can take years. It may or may not be worth it. But if there are longstanding disagreements that continuously cause strife, it may be worthwhile.

answer by Jan Kulveit · 2020-02-17T23:38:27.069Z · LW(p) · GW(p)

Getting oriented fast in complex/messy real world situations in fields in which you are not an expert

  • For example, now, one topic to get oriented in would be COVID; I think for a good thinker, it should be achievable to have a big-picture understanding of the situation comparable to that of a median epidemiologist after a few days of research
      • Where the point isn't to get an accurate forecast of some global variable which is asked on metaculus, but a gears-level model of what's going on / what the current 'critical points' are which will have outsized impact / ...
      • In my impression, compared to some of the 'LessWrong-style rationality', this is more heavily dependent on 'doing bounded rationality well' - that is, finding the most important bits / efficiently ignoring almost all information, in contrast to carefully weighting several hypotheses which you already have

Actually trying to change something in the world where the system you are interacting with has a significant level of complexity & a somewhat fast feedback loop (& it's not super-high-stakes)

  • A few examples of seemingly stupid things of this type I did:
    • filed a lawsuit without the aid of a lawyer (in a low-stakes case)
    • repaired various devices with value much lower than the value of my time
    • tinkering with code in a language I don't know
    • trying to moderate a Wikipedia article on a highly controversial topic about which two groups of editors are fighting

One thing I'm a bit worried about in some versions of LW rationality (& someone should write a post about) is something like 'missing opportunities to actually fight in non-super-high-stakes matters', in the martial arts metaphor.

answer by Ustice · 2020-02-14T23:20:56.024Z · LW(p) · GW(p)

I would add active and empathic listening, and nonviolent communication. By improving our skills at communicating and connecting with others, we improve both our effectiveness in cooperation as well as the quality of our relationships.

comment by romeostevensit · 2020-02-15T11:53:55.373Z · LW(p) · GW(p)

+1 exploring a technical topic with another person involves a lot of soft skills, and this might be one of the anti-correlations that makes dramatic progress rare.

answer by Daniel Kokotajlo · 2020-02-16T14:42:29.796Z · LW(p) · GW(p)

I nominate this thing johnswentworth did [LW · GW]. In addition to the reasons he gives, I'll add that being able to learn on your own, quickly, seems like a good skill to have, and related to (though maybe not the same thing as) rationality.

answer by Davidmanheim · 2020-02-18T19:39:40.891Z · LW(p) · GW(p)

Prediction

Abram pointed out concerns about focusing rationality on prediction. I agree with those concerns, and have said before that many of the skills involved in prediction can be Goodharted well past the point of usefulness more generally. For example, frequently updating, tracking sources of information to quickly capture when a question should have been considered resolved, or tracking the group aggregate are all effective strategies that are minimally helpful for other parts of rationality.

On the other hand, to argue via analogy: while martial arts are clearly too narrowly focused on form, and adapted to constraints rather than optimizing for winning, the best mixed martial artists, and I suspect many of the best hand-to-hand fighters in general, are experts in one or several martial arts. That's because even if the practice of any given martial art has been Goodharted well past the point of usefulness, and wastes time because of that, fighters still need all of the skills that martial arts teach.

Similarly, I think that the best rationalists will need to be really good forecasters. The return will drop as you optimize too much for forecasting, obviously, but I imagine that there's a huge return on being in the top, say, 1% of humans at forecasting. That's not an incredibly high bar, since most people are really bad at this. I'll guess that the top 1% level would probably fall somewhere below the median of Good Judgement Open's active participant rankings - but I think it's worth having people interested in honing their rationality skills participate and improve enough to get to at least that level.

answer by Raemon · 2020-02-15T22:21:39.739Z · LW(p) · GW(p)

The usual caveats of "what do you mean by rationality?" seem likely to crop up immediately here. (i.e. epistemic vs instrumental). "Being able to form accurate beliefs" and "Being able to form good plans in confusing domains" seem like two main things you might want to train.

I think it's plausible that superforecasting (and "forming accurate beliefs" in general) doesn't lead to overwhelmingly good life outcomes, but is still, like, a skill that is worth gaining for the same reason many other skills are: it's valuable to other people, and you might get paid for it (either by being directly economically valuable, or longterm-public-good valuable so philanthropists would subsidize it).

How to Measure Anything seems to lay out one particular set of skills that fit at the intersection of epistemic and instrumental rationality. It doesn't give "exercises" but I think is designed for an environment (i.e. making decisions for organizations) where you have a reasonable stream of actions+feedback loops, albeit on a slower timescale.

The Hammer Time sequence is the obvious LessWrong place to start.

answer by romeostevensit · 2020-02-15T11:39:15.024Z · LW(p) · GW(p)

My light review of the pedagogical literature suggests four things with large effect size: deliberate practice, test taking, elaboration of context (cross-linking knowledge), and teaching the material to others.

I also suspect debate would make the cut if tested. I think there's too little of the good kind of fighting in a lot of discourse and I sort of blame California culture for not being a good influence here. I think the intuition of comparing to sparring is right as a sort of collaborative fight, also that fighting can and should be playful and exploratory. This is less scalable since it requires skill-matched real-time collaboration.

On the object level I'll reiterate that we're still failing to engage with Korzybski's assertion that we'd be radically less confused if we trained up in noticing type errors in language/representation.

More speculatively: most people are excessively tense most of the time. Example: right now check your brow, jaw, throat, shoulders, gut, pelvis. Given the interaction between physiology and mindset, and given the need for exploratory research, this winds up being of deceptive importance. Relaxation is a trainable skill.

17 comments


comment by johnswentworth · 2020-02-15T23:45:20.169Z · LW(p) · GW(p)

I think we're overdue for a general overhaul of "applied epistemic rationality".

Superforecasting and adjacent skills were, in retrospect, the wrong places to put the bulk of the focus. General epistemic hygiene is a necessary foundational element, but predictive power is only one piece of what makes a model useful. It's a necessary condition, not a sufficient one.

Personally, I expect/hope that the next generation of applied rationality will be more explicitly centered around gears-level models [LW · GW]. The goal of epistemic rationality 2.0 will be, not just a predictively-accurate model, but an accurate gears-level understanding.

I've been trying to push in this direction for a few months now. Gears vs Behavior [LW · GW] talked about why we want gears-level models rather than generic predictively-powerful models. Gears-Level Models are Capital Investments [LW · GW] talked more about the tradeoffs involved. And a [LW · GW] bunch [? · GW] of [? · GW] posts [LW · GW] showed how to build gears-level models in various contexts.

Some differences I expect compared to prediction-focused epistemic rationality:

  • Much more focus on the object level. A lot of predictive power comes from general outside-view knowledge about biases and uncertainty; gears-level model-building benefits much more from knowing a whole lot about the gears of a very wide variety of systems in the world.
  • Much more focus on causality, rather than just correlations and extrapolations.
  • Less outsourcing of knowledge/thinking to experts, but much more effort trying to extract [LW · GW] experts' models, and to figure out where the models came from and how reliable the model-sources are.
comment by Eli Tyre (elityre) · 2020-02-17T23:21:04.711Z · LW(p) · GW(p)

In my first CFAR experience (which was an MSFP, fwiw), the phrase "It's not about the exercises!" was kind of a motto. It was explained at the beginning that CFAR teaches exercises not because people learn the exercises and then go out and use the exercises, but rather, going through the exercises a few times changes how you think about things. (The story was that people often go to a CFAR workshop and then improve a bunch of things in their life, but say "but I haven't been doing the exercises!".)

I want to flag that this is not a universal attitude at CFAR.

CFAR workshops have long taught that "The point of the workshop is not the techniques", but I, at least, have long been frustrated that almost no one actually uses the techniques, even though they provide a lot of value. It seems to me that folks sometimes use "the techniques are not the point" or the concept of "five second versions" as an excuse for not sitting down with paper to goal factor, or do IDC, or whatever, even though those mental processes are demonstrably useful for actually solving problems on the object level, and even getting at the mindset requires some dozens (at least) of mindful reps, which few people ever do.

Duncan once taught a mini-workshop where the motto was explicitly "People don't use the tools they have", and I've been trying to push a theme of something like "actually practice" in CFAR's recent instructor training, and with a new class on training regimes, for instance.

comment by Eli Tyre (elityre) · 2020-02-18T00:31:26.888Z · LW(p) · GW(p)

[The following is a bit of a ramble. I'm making a couple of different points, and have differing levels of confidence in each one, so I've segregated them, to avoid everything getting mixed together]

Point 1:

I think there's something that's a little bit missing in how this question is asked. Meditation (to my understanding) is the sort of thing where one can do basically the same activity for many millions of reps, and over time, one will build capacity along some dimension. This kind of training is quantitative: you keep getting a little better at a particular activity. (Another activity that is like this is literal weightlifting.)

But I think that quantitative training is the exception rather than the rule.

I currently think that rationality (and most skills for that matter) is better thought of as qualitative: made up of a bunch of very distinct, discrete micro-skills, or TAPs. Each micro-skill requires a specific training exercise, but after you've robustly ingrained the TAP so that you reliably execute the relevant motion when confronted with the relevant stimuli, you're done. You don't get meaningfully stronger at applying each one. You maybe have to do the exercise again on a review schedule. But once you train the TAPs of (for instance) "notice that my thoughts are feeling fake -> ask what do I actually believe?" or "feel frustrated -> orient on what's happening in this situation" [1] to reliability, it doesn't really make sense to try and get better at that kind of TAP. You just move on to training the next one to reliability.

Point 2:

That said, in order to have this not spin off into worthlessness, it needs to be grounded in some particular real-world "activity" that you do reliably enough to get feedback (in the same way that if you're trying to get better at playing football, mostly you're doing drills, doing deliberate practice on specific low-level skills, but you also want those skills to come together in the activity of "playing football games"). This is closer to a "practice" like meditation.

Some "practices" in that vein, that come to mind:

Point 3:

But all of these (except maybe the first one?) are too narrow to be "rationality practice." Unless you care about any of these in particular, I think the main thing is just trying ambitious things in the world. Your low level skills should "come together" in making more money, or getting better grades, or finding the sort of significant other that you want, or successfully executing on some project, or something.

I think this tension is at the core of why we are not really a community of practice. Every community of practice that I know about, be that the Parkour community, or the Circlers, or your local football club, has some specific activity that one 1) could reasonably spend hours doing, 2) could enjoy doing for its own sake, and 3) can meaningfully get better at. We decided that our thing was "winning", in general, so any particular activity will always be too narrow to capture what we care about.

This dynamic makes me sympathetic to this [LW(p) · GW(p)] comment: I think if you try and have a community whose central activity is "winning", you're going to find that "winning" is not the sort of thing that you can easily set up a practice regime for. But if you make your community about figuring out confusing questions, that is in fact something that you can do many reps of and get a lot better at.


[1] - Note that you have to train the true, sub-verbal, versions of these TAPs, not the words I'm using here.

Replies from: johnswentworth, elityre
comment by johnswentworth · 2020-02-18T07:42:34.736Z · LW(p) · GW(p)

Re: winning, I was recently thinking about how to explain what my own goals are for which rationality is a key tool. One catch phrase I like is: source code access.

Here’s the idea: imagine that our whole world is a video game, and we’re all characters in it. This can mean the physical world, the economic world, the social world, all of the above, etc. My goal is to be able to read and modify the source code of the game.

That formulation makes the role of epistemic rationality quite central: we're all agents embedded in this universe; we already have access to the source code of economic/social/other systems; the problem is that we don't understand the code well enough to know what changes will have what effects.

Replies from: elityre
comment by Eli Tyre (elityre) · 2020-02-18T22:31:26.819Z · LW(p) · GW(p)

I really like this framing, and it resonates a lot with how I personally think about my orientation to the world.

comment by Eli Tyre (elityre) · 2020-02-18T00:31:55.078Z · LW(p) · GW(p)

One way to think about it is that there are at least 3 kinds of "things" that one might want as part of their rationality practice:

1. Specific tools, schemas, frameworks for solving particular classes of problem. These are things like Goal Factoring or Double Crux. You will need to practice them (and maybe practice the individual sub-skills, in order to have facility with them), but the main point of your tools is that you deploy them to solve a particular kind of problem.

2. Discrete training. Many TAPs, with associated practice exercises.

3. Continuous training. Single practices that you can just continue to churn on, for years, and which will continue to pay dividends.

comment by Charlie Steiner · 2020-02-17T07:46:00.928Z · LW(p) · GW(p)

I usefully demonstrated rationality superpowers yesterday by bringing a power strip to a group project with limited power outlets.

Now, you could try to grind this ability by playing improv games with the situations around you, looking for affordances, needs, and solutions. But this is only a sub-skill, and I think most of my utility comes from things that are more like mindset technology.

A personal analogy: If I want to learn the notes of a tune on the flute, it works fine to just play it repeatedly - highly grindable. If I want to make that tune sound better, this is harder to grind but still doable; it involves more skillful listening to others, listening to yourself, talking, trial and error. If I want to improve the skills I use to make tunes sound better, I can make lots of tunes sound better, but less of my skill is coming from grinding now, because that accumulation is slower than other methods of learning. And if I want to improve my ability to learn the skills used in making tunes sound better...

Well, first off, that's a rationality-adjacent skill, innit? But second, grinding that is so slow, and so stochastic, that it's hard to distinguish from just living my life, but just happening to try to learn things, and accepting that I might learn one-time things that obviate a lot of grinding.

So maybe the real grinding was bringing the power strip all along.

comment by Raemon · 2020-02-15T00:24:50.596Z · LW(p) · GW(p)

Another example pointing at what I want here is bewelltuned.com. The content may or may not be right, but the sort of thing seems exactly right to me -- actionable skills you can keep working on regularly after getting simple explanations of how to do it. And furthermore, the presentation seems exactly right. LessWrong has a tendency to focus on wordy explanations of intellectual topics, which is great, but the bewelltuned style seems like an excellent counterbalance.

Somewhat side note, but I think bewelltuned is deeply great, and a minor obstacle I've found is that because the author is dead, and there's no clear steward of the content, and the content is blatantly incomplete, I'm sort of unsure what to do with it. 

One option would be for someone else to rewrite it in their own voice, but honestly in most cases the original writing was quite clear, and rewriting it would just make it worse. I think some parts of the models might turn out to be false and some instructions suboptimal, in which case rewriting makes sense, but for now AFAICT they're just sort of the best version of themselves around.

edit to be clearer/more-actionable: 

I'd like to be able to copy posts over to LessWrong so they can be more tightly integrated into other sequences, but I don't know if that's reasonable, and don't know how to get into an epistemic state where it's either clearly reasonable (and I can do it) or clearly unreasonable (and I can move on). Does anyone know anyone who knew squirrel-in-hell/Maia/Chris Pasek well enough to figure out an answer to that?

Replies from: ChristianKl, cousin_it, Pattern
comment by ChristianKl · 2020-02-15T17:47:20.307Z · LW(p) · GW(p)

They were couchsurfing with me for a few days around the LessWrong Community Weekend, when they were still known as Chris. This means that I have decent insight into them, but I'm not one of the people who lived with them in Crow's nest.

At the time, their morals were that people should interact in a way that maximizes utility if they were timeless-decision-based agents. I think they were vegan for some timeless-decision-theory-based justification. I don't think the person they were back then would raise an objection.

It's my understanding that at the time of their death they were of the opinion that nothing matters. From that point of view there would also be no objection to their work being used by other people.

comment by cousin_it · 2020-02-15T21:28:23.605Z · LW(p) · GW(p)

I'm wary of such mind hacks, because they teach you to treat a person (yourself) as a machine. Most people have an instinct for human connection that refuses to be satisfied by machines, so gradually teaching yourself that you live in a world of machines can lead to isolation and emptiness. That might have contributed to SquirrelInHell's suicide, though I didn't know them in person.

Replies from: Raemon
comment by Raemon · 2020-02-15T22:23:37.952Z · LW(p) · GW(p)

I get that vibe from some of the overall things I've read and heard about re: Pasek. There's an outlook that's kinda excited about tinkering with the mind in ways that seem particularly prone to "dangerous mindhacks", and it does seem like this was relevant to Pasek overall. 

But I don't get that vibe much at all from bewelltuned – the things pitched there feel more like "pretty reasonable skills to develop" (and mostly aren't about connection, they're about inward facing skills that seem like they should be relevant no matter what your goals are)

comment by Pattern · 2020-02-15T19:12:32.399Z · LW(p) · GW(p)

Rather than gathering content here, we could recognize sequences on other sites.

Replies from: Raemon
comment by Raemon · 2020-02-15T19:52:56.984Z · LW(p) · GW(p)

I do think that is also something good to do. In this case I'm actually worried about the site eventually disappearing (I suppose you could link to archive.org versions).

(sequences linking to other sites will also have some drawbacks, like a lack of "read next post"; currently they also won't have hover-previews as you're skimming the sequence list, although I hope we'll build out hover-previews for most external sites at some point) [edit: actually I guess if it's a linkpost the linkpost can still have its own hover-preview text, so maybe that part is fine]

comment by Raemon · 2020-02-17T23:09:39.516Z · LW(p) · GW(p)

Curated. "How do we actually practice rationality" seems like one of the key questions we should be collectively trying to answer. It seems good to both collect/distill our existing resources, as well as to periodically remind people that creating practical exercises is a useful thing to do.

comment by Pattern · 2020-02-15T00:32:53.111Z · LW(p) · GW(p)

The TL:DR comment on this is also the conclusion.


It was a group of rather committed and also individually competent rationalists, but they quickly came to the conclusion that while they could put in the effort to become much better at forecasting, the actual skills they'd learn would be highly specific to the task of winning points in prediction tasks, and they abandoned the project, concluding that it would not meaningfully improve their general capability to accomplish things!!

What you (can) learn from something might not be obvious in advance. While it's possible they were right, it's possible they were wrong.

And if you're right, then doing the thing is a waste, but if you are wrong then it's not.*

*Technically the benefit of something can equal the cost.

U(x) = Benefit - Cost. The first is probabilistic - in the mind, if not in the world. (The second may be as well, but to a lesser extent.)

If this is instead modeled using a binary variable 'really good (RG)', the expected utility of x is roughly:

Outcome_RG*p_RG + Outcome_not*(1-p_RG) - cost

But this supposes that the action is either done or not done, ignoring continuity. Going from present-you to superforecaster-you is a continuum. If this is broken up into intervals of hours, then there may exist numbers of hours x and y such that U(x) - cost > 0 but U(y) - cost < 0. The continuous generalization is the derivative of 'U(x hours) - cost', and it becomes zero where the utility has stopped increasing and started decreasing (or when the reverse holds). This leaves the question of how U(x) is calculated, or estimated. One might imagine that this group could have been right - perhaps the low-hanging fruit of forecasting/planning is Fermi estimates, and they already had that skill/tool.
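
A minimal sketch of that marginal analysis, assuming a made-up diminishing-returns benefit curve and a made-up per-hour cost (purely illustrative; estimating the real U(x) is exactly the hard part):

    # Toy sketch of the hours-of-practice question: benefit(h) is an assumed
    # diminishing-returns curve for h hours of forecasting practice, and
    # cost_per_hour is an assumed opportunity cost. Practice pays for itself
    # up to the point where marginal benefit falls below marginal cost --
    # the discrete analogue of the derivative of 'U(x hours) - cost' hitting zero.
    import math

    def benefit(hours):
        return 10 * math.log1p(hours)   # fast gains early, flat later

    cost_per_hour = 0.5

    def net_utility(hours):
        return benefit(hours) - cost_per_hour * hours

    best_hours = max(range(201), key=net_utility)
    print(best_hours, round(net_utility(best_hours), 2))   # 19 hours under these toy numbers

Under a much flatter benefit curve - which is roughly the group's claim that the learnable skills are too prediction-task-specific - the crossover point drops toward zero hours, and abandoning the project is the right call.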

Forecasting (predicting the future) is all well and good if you can't affect something, but if you can then perhaps planning (creating the desired future) is better. The first counterexample that comes to mind is that if you could predict the stock market in advance, then you might be able to make money off of that. This example seems unlikely, but it suggests a relationship between the two - some information about the future is useful for 'making plans'. However, while part of the information that will/could be important in the future may be obvious, that leaves:

  • how to forecast information about the future that's obviously useful (if the forecast is correct)
  • the information that's not obviously useful, but turns out to be important later (This is usually lumped under 'unknown unknowns', but while Moravec's paradox** can be cast as an unknown unknown, the fact that no one had built a machine/robot that did x yet, could be considered known.)

**Moving is harder than calculating.

Replies from: romeostevensit, Pattern
comment by romeostevensit · 2020-02-15T11:45:26.268Z · LW(p) · GW(p)

Since one can't do most of the things in the world for oneself, expert judgement has to be one of the upstream skills chosen for investment/cultivation.

comment by Pattern · 2020-02-15T00:33:10.752Z · LW(p) · GW(p)

TL:DR;

And while prediction may be a skill, even if a project 'fails' it can still build skills/knowledge. On that note:

What could/should be a part of a 'practice' of rationality?

What skills/tools/etc. will (obviously) be useful in the future? and

What should be done about skills/tools/etc. that aren't obviously useful in the future now, but will be with hindsight?