What are good rationality exercises?

post by Ben Pace (Benito) · 2020-09-27T21:25:24.574Z · 9 comments

This is a question post.

I want to know what good rationality exercises are.

I was just on a call with Liron and PhilH, hanging out after the weekly LessWrong weekend event, and we discussed exercises that could happen on LessWrong.

Here is the list we generated:

Another user on the call (whose name I forget) suggested it could be fun to have a daily Fermi Estimate on LessWrong, where everyone submits their number and the model they used to reach the number. I think this would be quite exciting.

Please write answers with other exercises that you think are or might be great for rationality training, some explanation of why you think it could be good, and a suggestion of how it could be incorporated into LessWrong. I'll probably add some of the above myself.

Answers

answer by DirectedEvolution (AllAmericanBreakfast) · 2020-09-28T01:59:54.457Z

Things that interest me:

  • Let's go exploring. Eliezer took a pretty low-bar activity (fan fic) and created something original (HPMOR). Why don't we pick some notorious areas of the internet where we think a little LW-style overthinking could go a long way?
  • A rational approach to cultivating imagination, creativity, and meditation. We have so many tools here for modeling questions of fact. Can't rationality help us develop the right side of the brain as well as the left?
  • Business ideas we could collaborate on, that hinge primarily on rational thinking, learning how to learn, and conscientiousness.

I would not participate in activities that boil down to arbitrary left-brain problem solving.

answer by Matt Goldenberg (mr-hire) · 2020-09-28T20:52:43.904Z

"Doing impossible things"

  • Get 100 strangers to show up at a specific place at a specific time.
  • Make $5,000 of counterfactual money in a weekend.
  • Be featured in a major print publication in less than a month.
  • etc.

answer by Adam Zerner (adamzerner) · 2020-09-28T23:48:00.791Z

Answer: Writing Your Hypothetical Apostasy

See Write Your Hypothetical Apostasy on Overcoming Bias.

Imagine, if you will, that the world's destruction is at stake and the only way to save it is for you to write a one-pager that convinces a jury that your old cherished view is mistaken or at least seriously incomplete. The more inadequate the jury thinks your old cherished view is, the greater the chances that the world is saved. The catch is that the jury consists of earlier stages of yourself (such as yourself as you were one year ago). Moreover, the jury believes that you have been bribed to write your apostasy; so any assurances of the form "trust me, I am older and know better" will be ineffective. Your only hope of saving the world is by writing an apostasy that will make the jury recognize how flawed/partial/shallow/juvenile/crude/irresponsible/incomplete and generally inadequate your old cherished view is.

I'm not sure exactly how this fits into group rationality practice. I personally am always more motivated to write when it's something I will publish, so having a place where we publish hypothetical apostasies could be useful for motivational reasons. It would also be useful because you'd get feedback on your thought process, although that point could be made for many other exercises.

comment by Ben Pace (Benito) · 2020-09-29T04:21:47.231Z

Oh yeah, this one's great. Thanks for reminding me.

answer by Adam Zerner (adamzerner) · 2020-09-27T23:01:59.231Z

Answer: Check My Understanding

Here's how it'd work. Suppose I want to improve my understanding of Aumann's Agreement Theorem. I would write up my thoughts, doing my best to explain what I know about it. Then other people would comment on what I'm missing and where I went wrong.

This seems useful for a few different reasons:

  • As an author, the comments provide you with personalized feedback and allow you to "fill in the gaps".
  • As an author, the act of doing the initial write-up seems like it'd be very beneficial. Ditto for readers writing out their comments. (I have the Feynman Technique in mind.)
  • As a reader, you may have a decent understanding of Aumann's Agreement Theorem, but seeing it explained by a different author might help some things "click" for you (I have Non-Expert Explanation in mind).

answer by Liron · 2020-09-28T15:30:20.488Z

I was thinking that if the sequences and other LW classics were a high school class, we could make something like an SAT subject test to check understanding of and fluency in the subject. That could then be a badge on the site, and potentially a good credential to have in your career.

The kinds of questions could be like:

1.

If a US citizen has a legal way to save $500/year on their taxes, but it requires spending 1 hour/day filling out boring paperwork, 5 days a week, should they do it?

a. Virtually everyone should do it

b. A significant fraction (10-90%) of the population should do it

c. Virtually no one should do it
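
(For calibration, the arithmetic the question is probing: 1 hour/day × 5 days/week × 52 weeks ≈ 260 hours/year, so the $500 saving works out to roughly $2 per hour of paperwork.)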

2.

With sufficient evidence and a rational deliberation process, is it possible to become sure that the Loch Ness Monster does/doesn't exist?

a. We CAN potentially become sure either way

b. We CAN'T potentially become sure either way

c. We can only potentially become sure that it DOES exist

d. We can only potentially become sure that it DOESN'T exist

comment by Adam Zerner (adamzerner) · 2020-09-28T18:08:44.998Z

I recall reading educational psych stuff about how the act of both 1) creating and 2) answering questions like this is a great way to deepen your understanding.

answer by Adam Zerner (adamzerner) · 2020-09-28T18:03:50.484Z

Answer: Betting With Real Money

From the end of Inadequate Equilibria:

I don’t have good, repeatable exercises for training your skill in this field, and that’s one reason I worry about the results. But I can tell you this much: bet on everything. Bet on everything where you can or will find out the answer. Even if you’re only testing yourself against one other person, it’s a way of calibrating yourself to avoid both overconfidence and underconfidence, which will serve you in good stead emotionally when you try to do inadequacy reasoning. Or so I hope.

Eliezer seems to be referring to real money here. And I recall him talking elsewhere about how it is useful to put real money on the line.

This meshes with my experiences playing poker. It's one thing to study and learn that X is a mistake. It's another thing to make the mistake of X and lose a big pot because of it. There's something about losing real money that cements it in your head. And I'm not just referring to my own experiences. From talking to other poker players, it seems that this is the norm.

However, real money is a touchy subject and I'm not sure how we would actually pull this off. But I figure that there is still value in bringing it up.

comment by mingyuan · 2020-09-29T02:34:35.165Z

Betting with real money is definitely a useful way of probing at your own confidence (I don't do it much at all due to general underconfidence, but it's sure helped me nail down the feeling of being really sure of something), and a lot of my rationalist friends do it on a handshake-agreement basis. However, any way of formalizing this would turn LW (or whatever institution) into a gambling site, which is illegal :/

comment by Adam Zerner (adamzerner) · 2020-09-29T04:20:17.532Z

There may be some creative non-formal solutions though.

  • On one end of the spectrum you could have a token system and leave it up to the users to figure out actually exchanging money themselves (a lot of poker apps do this).
  • Getting less hands-on, you could do away with the tokens and just act as a matchmaker, getting two parties who want to make a bet in touch with each other, and they could handle it from there (see the sketch after this list).
  • Getting even less hands-on, you could just function as a place to discuss bets you may want to make in the real world, e.g. sports betting or stock picking (though I guess there aren't too many examples of this).
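
A minimal sketch of what the matchmaker option might look like as a data structure, in Python (all the names and the matching rule here are hypothetical, just to make the idea concrete):

    from dataclasses import dataclass

    @dataclass
    class BetOffer:
        user: str
        claim: str          # the proposition being bet on
        probability: float  # the offerer's stated probability that the claim is true
        stake: float        # the amount the offerer is willing to risk

    def find_counterparties(offer, open_offers, min_gap=0.2):
        """Surface users whose stated probability differs enough from the
        offerer's that both sides expect to gain from a bet."""
        return [o for o in open_offers
                if o.claim == offer.claim
                and o.user != offer.user
                and abs(o.probability - offer.probability) >= min_gap]

    offers = [BetOffer("alice", "X happens by 2025", 0.8, 20.0),
              BetOffer("bob", "X happens by 2025", 0.3, 20.0)]
    print(find_counterparties(offers[0], offers))  # matches bob: 0.5 probability gap

The site would only surface the match; the two users would settle the stakes however they like, keeping actual money off the platform.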

comment by jacobjacob · 2020-09-30T02:27:27.127Z

There could be ways of making it legal, given that we're a non-profit with somewhat academic interests. (By "making" I mean actually changing the law or getting a No-Action Letter.) Most people who gamble online do it for profit, which is where things get tricky.

answer by Adam Zerner (adamzerner) · 2020-09-27T22:45:45.987Z

Answer: Discussing Updates

See the Updates Thread. Basically, taking note of the belief updates you perform and discussing why you performed them. What did you previously believe, what do you currently believe, and why did the data you observed move you from there to here?
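
(A worked template in odds form: if you start at 1:4 odds that a hypothesis is true and then observe evidence three times as likely under it as under the alternative, you end at 3:4 odds, i.e. you move from 20% to about 43%. Spelling out the prior, the likelihood ratio, and the posterior like this is a compact way to write up such updates.)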

answer by Gunnar_Zarncke · 2020-09-28T14:15:05.718Z

Making bets is good exercise too. If you can't find other people to bet with, you can also make public predictions.
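
One way to make public predictions bite is to score them afterwards. A minimal sketch in Python, using the Brier score on made-up sample data (lower is better; always guessing 50% scores 0.25):

    # Each entry is (stated probability, outcome: 1 if it happened, 0 if not).
    predictions = [
        (0.9, 1),
        (0.7, 0),
        (0.6, 1),
    ]

    # Brier score: mean squared distance between stated probability and outcome.
    brier = sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)
    print(f"Brier score: {brier:.3f}")  # 0.220 for this sample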

answer by Daniel Kokotajlo · 2020-09-28T07:47:04.534Z

When I first read the sequences, I thought "What do I know and how do I think I know it?" was pretty banal and useless -- didn't everyone know that? Philosophy 101, question your beliefs, look for hidden assumptions, etc.

The older I get the more I come to think that no, not everyone knows this, and even the people who know it don't practice it enough. I'm not sure though.

comment by johnswentworth · 2020-09-28T20:36:40.333Z

I think of "What do I know and how do I think I know it?" as the "root cause" of essentially all other epistemic rationality - i.e. if you're sufficiently good at that one skill, all the others will follow naturally from it. Conversely, that suggests it's really difficult to get really good at it: if I'm missing any other epistemic rationality skill, it means I'm not good enough at "What do I know and how do I think I know it?".

I'd say the "obvious" version of the skill involves activities which look like questioning beliefs, looking for hidden assumptions, etc. But these are surface-level activities which don't necessarily trace the whole belief-generating pipeline. The full skill is about modelling the entire physical process which created your map from the territory.

One example I've thought about recently: we've had a bunch of posts lately on simulacrum levels. Personally, I saw most of the ideas in those posts as kind-of-obvious applications of the general principle/habit "when you hear words, don't ask what they literally signify, ask what physical process generated them and what that implies about the world". (Or the HPMOR version: “Professor Quirrell didn't care what your expression looked like, he cared which states of mind made it likely.”) This is a principle/habit which naturally pops out of modelling the physical process which produces your own beliefs, whenever someone's words appear in the belief-production pipeline.

answer by Ben Pace · 2020-09-27T21:29:12.347Z

Answer: Fermi Estimates

Fermi estimates are attempts to answer a quantitative question using order-of-magnitude style reasoning. These are questions like "How many people fly on airplanes each day?" or "How many atoms are in my arm?". In contrast to things like calibration practice, these are much more generative, attempting to tie together parts of your world model to come up with a model that answers a question.
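
As an illustration, here is a back-of-the-envelope sketch in Python for the first question; every input is an order-of-magnitude guess of mine, not a looked-up figure:

    # Fermi estimate: how many people fly on airplanes each day?
    world_population = 8e9             # ~8 billion people
    flights_per_person_per_year = 0.5  # most people rarely fly; frequent flyers pull the average up

    flights_per_year = world_population * flights_per_person_per_year
    flights_per_day = flights_per_year / 365
    print(f"~{flights_per_day:,.0f} passenger-flights per day")  # ~11 million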

On LessWrong, this could be practically implemented by having a set of 100-1000 questions that users can do either in a weekend blitz or spaced out over time. A user who got 100 correct (within a factor of 2x) could have a sign on their profile indicating that they completed this task. It could also be implemented as a daily/weekly question for users to answer and then compare notes on.

9 comments


comment by Adam Zerner (adamzerner) · 2020-09-27T22:37:26.691Z

The CFAR Handbook has a lot of good ones.

comment by Thomas Kwa (thomas-kwa) · 2020-09-27T22:48:07.078Z

According to a vague feeling of a couple of people I know, the CFAR handbook is tricky enough that reading it without doing CFAR could be dangerous.

comment by Adam Zerner (adamzerner) · 2020-09-28T17:51:51.879Z

It seems very plausible that you'd get more value out of them after having gone through CFAR. But it seems implausible that you'd get zero or negative value out of them without having gone through CFAR. At least in terms of expected value.

comment by habryka (habryka4) · 2020-09-28T05:14:22.826Z

Nah, I don't think that's a real concern. Or at least I really don't see much danger in the things in there, and have worked a lot with it in the past.

comment by Adam Zerner (adamzerner) · 2021-12-13T00:39:12.971Z

I think this excerpt from Rationality: From AI to Zombies' preface says it all.

It was a mistake that I didn't write my two years of blog posts with the intention of helping people do better in their everyday lives. I wrote it with the intention of helping people solve big, difficult, important problems, and I chose impressive-sounding, abstract problems as my examples.

In retrospect, this was the second-largest mistake in my approach. It ties in to the first-largest mistake in my writing which was that I didn't realize that the big problem in learning this valuable way of thinking was figuring out how to practice it, not knowing the theory. I didn't realize that part was the priority; and regarding this I can only say "Oops" and "Duh."

Yes, sometimes those big issues really are big and really are important; but that doesn't change the basic truth that to master skills you need to practice them and it's harder to practice on things that are further away. (Today the Center for Applied Rationality is working on repairing this huge mistake of mine in a more systematic fashion.)

comment by Mary Chernyshenko (mary-chernyshenko) · 2020-09-28T16:41:05.701Z

This "incorporated into LW" condition is a tight leash; and it reminds me of why I don't usually... recommend LW to my friends.

Some matters are too personal to talk about on the Internet. Like marital infidelity, which 1) is something outside of many people's experiences, 2) definitely seems to require tons of instrumental rationality even on the best of days, 3) has (ethical) implications which real people often don't take into account despite other real people often expecting them to (but knowing they won't), and 4) unlike acceptable LW material with which it shares the above characteristics, it hurts. And so it is with some other things that actual adults have to deal with.

Unless you speak about something already in the past. Maybe we should have a Cemetery of Failed Things in our City. (Our current Cemetery of Failed Things holds several startups and personal habits, which is, wow, how lucky we are.)

comment by ChristianKl · 2020-09-28T16:11:54.006Z

Basic have-you-read-the-sequences knowledge test (e.g. "Which of the following is an example of 'belief as attire'?")

This might be combined with calibration training.