Rationality Exercises Prize of September 2019 ($1,000)

post by Ben Pace (Benito) · 2019-09-11T00:19:51.488Z · LW · GW · 18 comments

Contents

  Why exercises?
  What does this look like?
  What am I looking for in particular?
  Give me examples of things you think could have exercises?
18 comments

Added: Prizewinners announced in this [LW(p) · GW(p)] comment below.

This post is an announcement of a prize for the best exercises submitted in the next two weeks, on a topic of your choice that's of interest to the LW community. We're planning to distribute $1,000, with $500 of that going to first place.

To submit some exercises, leave a comment here linking to your exercises by midnight PDT (San Francisco time) at the end of Friday, September 20th. You can PM one of us instead if you'd prefer, but we'll be publishing all the entries that win a prize.

Why exercises?

I want to talk about why exercises are valuable, but my thinking is so downstream of reading the book Thinking Physics that I'd rather just let its author (Lewis Carroll Epstein) speak instead. (All formatting is original.)

The best way to use this book is NOT to simply read it or study it, but to read a question and STOP. Even close the book. Even put it away and THINK about the question. Only after you have formed a reasoned opinion should you read the solution. Why torture yourself thinking? Why jog? Why do push-ups?
If you are given a hammer with which to drive nails at the age of three you may think to yourself, "OK, nice." But if you are given a hard rock with which to drive nails at the age of three, and at the age of four you are given a hammer, you think to yourself, "What a marvellous invention!" You see, you can't really appreciate the solution until you first appreciate the problem.
What are the problems of physics? How to calculate things? Yes - but much more. The most important problem in physics is perception, how to conjure mental images, how to separate the non-essentials from the essentials and get to the heart of a problem, HOW TO ASK YOURSELF QUESTIONS. Very often these questions have little to do with calculations and have simple yes or no answers: Does a heavy object dropped at the same time and from the same height as a light object strike the earth first? Does the observed speed of a moving object depend on the observer's speed? Does a particle exist or not? Does a fringe pattern exist or not? These qualitative questions are the most vital questions in physics.
You must guard against letting the quantitative superstructure of physics obscure its qualitative foundation. It has been said by more than one wise old physicist that you really understand a problem when you can intuitively guess the answer before you do the calculation. How can you do that? By developing your physical intuition. How can you do THAT? The same way you develop your physical body - by exercising it.
Let this book, then, be your guide to mental pushups. Think carefully about the questions and their answers before you read the answers offered by the author. You will find many answers don't turn out as you first expect. Does this mean you have no sense for physics? Not at all. Most questions were deliberately chosen to illustrate those aspects of physics which seem contrary to casual surmise. Revising ideas, even in the privacy of your own mind, is not painless work. But in doing so you will revisit some of the problems that haunted the minds of Archimedes, Galileo, Newton, Maxwell, and Einstein. The physics you cover here in hours took them centuries to master. Your hours of thinking will be a rewarding experience. Enjoy!

What does this look like?

Here are exercises we've had on LessWrong in the past.

In my primer on Common Knowledge [LW · GW], I opened with three examples and asked what they had in common. Then, towards the end of the post, I explained my answer in detail. I could've trivially taken those examples out from the start, included all the theory, and then asked the reader to apply the theory to those three as exercises, before explaining my answers. There's a duality between examples and exercises, where they can often be turned into each other.

But this isn't the only or primary type of exercise, and you can see many other types of exercise in the previous section that don't fit this pattern.

What am I looking for in particular?

I'm interested in exercises that help teach any key idea that I can't already buy a great textbook for, although if your exercises are better than those in most textbooks, then I'm open to that too.

Let me add one operational constraint: it should be an exercise that more than 10% of LessWrong commenters can understand after reading the one to three posts you've specified, or after having done your prior exercises. As a rule, I'm generally not looking for highly niche technical problems. (It's fine to require people to read a curated LW sequence.)

I asked Oli for his thoughts on what makes a good exercise, and he said this:

I think a good target is university problem sets, in particular for technical degrees. I've found that almost all of my learning in university came from grappling with the problem sets, and think that I would want many more problem sets I can work through in my study of both rationality and AI Alignment. I also had non-technical classes with excellent essay prompts that didn't have as clear "correct" answers, but that nevertheless helped me deeply understand one topic or another. Both technical problem sets and good essay prompts are valid submissions for this prize, though providing at least suggested solutions is generally encouraged (probably best posted behind spoiler tags).

(What are spoiler tags? Hover over this:)

This is a spoiler tag! To add this to your post, see the instructions in the FAQ that's accessible from the left-hand menu on the frontpage.
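(If I'm remembering the editor right, you can also create one in the markdown editor by starting a paragraph with ">!" - but treat that as my best guess and defer to the FAQ for the current syntax.)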

(Also see this comment section [? · GW] for examples of lots of people using it to cover their solutions to exercises.)

Give me examples of things you think could have exercises?

I think exercises for any curated post or curated sequence on LessWrong are a fine thing. I've taken a look through our curated posts; here are a few I think could really benefit from great exercises (though tractability varies a lot).

I think technical alignment exercises will be especially hard to do well, because many people don't understand much of the work being done in alignment, and the parts that are easy to make exercises for often aren't very valuable or central.

Some of Nick Bostrom's ideas would be cool, like the unilateralist's curse, or the vulnerable world hypothesis, or the Hail Mary approach to the Value Specification Problem.

Feel free to leave a public comment with what sort of thing you might want to try making exercises for, and I will reply with my best guess on whether it can be a good fit for this prize.

18 comments

Comments sorted by top scores.

comment by Ben Pace (Benito) · 2019-10-22T23:23:49.023Z · LW(p) · GW(p)

This comment records the prizewinners of the Rationality Exercises Prize of September 2019 ($1,000), and shares some of my thoughts on the submissions.

Rationality Exercise Prizewinners

A prize of $125 goes to elriggs, for their sequence Arguing Well [LW · GW]. I did the first post, which had 11 simple exercises on the fallacy of ‘proving too much’ (as discussed by Scott Alexander), plus 2 reflection steps.

A key thing that elriggs did (and TurnTrout below) was to not divorce the exercises from the understanding and the explanation - they weren’t added on at the end, but were part of the learning. Elriggs’ sequence reminded me a bit of The Art of Problem Solving, where the act of solving the problems is how you discover the mathematics.

Each post combines a wealth of examples with points where you stop and try to generalise the rule / make your algorithm explicit. The latter part especially helped me resolve some confusions I had. I wrote down my experience and more specific thoughts from doing the exercises in a comment here [LW(p) · GW(p)].

A prize of $125 goes to whales, for Exercises #1 and #4. Exercise #4 was ~75% of the reason I gave whales a prize, and is (roughly) about maintaining an integrated world-model while looking at social science results.

I’m having a hard time saying why I liked whales’ exercises. If I try to point at what I liked about them, I’ll say that I think they were picked to be fairly simple-yet-confusing, and also permitted clear answers at the end - not unlike all the problems in the book Thinking Physics - and they helped me to notice my confusion along the way. Something about them felt very provocative/opinionated in a positive way, which set them apart from the other prizewinning entries. I approached them expecting to get something out of them, and got out value pretty proportional to what I put in. I wrote about my experiences doing the exercises in a comment here [LW(p) · GW(p)].

A prize of $250 goes to lifelonglearner, for their post Calibrating With Cards [LW · GW]. This was primarily a lesson in close-up card magic, but one that used principles of rationality sufficiently well that it helped me learn some general lessons.

As I said in my comment [LW(p) · GW(p)] on lifelonglearner’s post, I really appreciated being guided in what to notice. Normally instructions tell me what to achieve and give steps for how to achieve it, but lifelonglearner also spent a lot of words on where I should focus my attention, which feels like a key insight about how to learn.

A prize of $500 goes to TurnTrout, for his sequence Reframing Impact [? · GW] (which was submitted via private message). These posts were about open problems in AI alignment, where TurnTrout was attempting to explain his solution simply enough that you could derive it for yourself in the course of reading the posts.

The posts had lots of concrete examples that you were explicitly invited to use to form categories. They also had questions and worked examples incorporated into the reading and understanding, and built up to a key test, where you try to solve the problem for yourself before you see the author explain their solution. I read the first three posts in the sequence, did the 15-minute exercise in the Deducing Impact post, and wrote down my thoughts [LW(p) · GW(p)].

As I said above, a key thing that TurnTrout did was make sure the exercises were not divorced from the understanding and the explanation - they weren’t added on at the end or anything, but were part of the learning. One thing TurnTrout’s exercises reminded me of is my experience of CFAR workshops, where I repeatedly get given space to solve a problem myself before an instructor tells me their solution/explanation. Of all the submissions, the ideas and the exercises felt most intertwined in this sequence, which is the main reason it gets the first place prize.

Thoughts on other entries

The main reason I didn’t award the other entries prizes was that I had a hard time either following the explanations or doing the exercises. I will name Mr-Hire’s Exercises for Overcoming Akrasia and Procrastination [LW · GW] as especially good - I found the triggers didn’t quite match my experiences at the time, and didn’t end up finding a good way to practise the recommended actions, but I think the writing was detailed and I expect some others will find it useful. If anyone does them, I hope they write up their experiences in a comment on his post.

comment by Matt Goldenberg (mr-hire) · 2019-09-16T11:55:37.733Z · LW(p) · GW(p)

My submissions are as comments on this post: https://www.lesswrong.com/posts/bbEzsn9b6R64vHX8G/exercises-for-overcoming-akrasia-and-procrastination [LW · GW]

I'll likely add one or two more over the next couple days.

comment by Logan Riggs (elriggs) · 2019-09-15T17:28:31.945Z · LW(p) · GW(p)

I'm submitting my sequence Arguing Well [LW · GW]. As far as the sequence is planned, most of the posts will be exercise-based, and it's planned to be finished by Friday.

comment by Julija Kobrinovich (julija-kobrinovich) · 2019-09-20T17:24:21.457Z · LW(p) · GW(p)

I want to share an exercise [LW · GW] that I have been using for a long time - to integrate ideas and subagents (to use the language of the multi-agent model of mind).

comment by riceissa · 2019-10-22T04:29:46.336Z · LW(p) · GW(p)

Were the winners ever announced? If I'm counting correctly, it has now been over four weeks since September 20, so the winners should have been announced around two weeks ago. (I checked for new posts by Ben, this post, and the comments on this post.)

Replies from: Benito
comment by Ben Pace (Benito) · 2019-10-23T01:22:19.502Z · LW(p) · GW(p)

Now announced - see the relevant top-level comment in this thread. Thanks for checking.

comment by whales · 2019-09-11T00:50:24.076Z · LW(p) · GW(p)

My past occasional blogging included a few exercises that might be of interest. I'm pretty sure #4 is basically an expanded version of something from the Sequences, although I don't recall which post exactly. Others are more open-ended. (Along the lines of #5, I've been casually collecting examples of scientific controversy and speculation with relatively clear-cut resolutions, for the purpose of giving interested laypeople practice evaluating these things, to the extent that's possible. I don't know if I'll ever get around to writing something up, but if anyone has their own examples, I'd love to hear about them.)

Replies from: Benito
comment by Ben Pace (Benito) · 2019-10-08T23:22:21.951Z · LW(p) · GW(p)

I did #4 and #1. Here is what I wrote for each section of #4 (note: this will spoil your ability to do the exercise if you read it).

1. How do you explain these effects?

Seems like a trick question. Like, I have models of the world that feel like they might predict effects 2 and 3, and I can sort of wrangle explanations for 1 and 4, but my split-second reaction is “I’m not sure these are real effects, probably none replicate (though number 2 sounds like it might just be a restatement of a claim I already believe)”.

2. How would you have gone about uncovering them?

As I think about trying to determine whether someone did their diet for ethical reasons, I immediately feel highly skeptical of the result. I think that the things people will tick-box as ‘because I care about animals’ do not necessarily refer to a deep underlying structure of the world that is ‘ethics’, and can refer to one of many things (e.g. exposure to effective guilt-based marketing, reflections on ethical philosophy, the ownership of a dog/cat/pet from an early age, etc). But I guess that just doing a simple questionnaire isn’t of literally zero value.

Number 2 (loyalty) feels like a thing I could design a better measure for, but I worry this is tangled up with me believing it’s true, and thus illusion-of-transparency-style assuming people mean the same thing as me if they check-box ‘loyalty’.

Number 3 seems totally testable and straightforward.

Number 4 seems broadly testable. Creativity could be done with that “list the uses of a brick” test, or some other fun ones.

I notice this makes me more skeptical about the first two ‘results’ and more trusting of the last two ‘results’.

3. These are all reversed, and the actual findings were the opposite of what I said. How do you explain the opposite, correct effects?

Ah, the classic ‘I reversed the experimental findings’ trick. Well, I guess I did fine on it this time. Oh look, I just managed to think of an explanation for number 2, which is that a more discerning audience of less loyal customers increases adversarial pressures among service providers, raising the prices. Interesting. I think I’m mostly noticing how modern psychological research methodology can be quite terrible, and that such a questionnaire, without incorporating a thoughtful model of the environment, will often be useless. Model-free empirical questions can be overdetermined by the implicit model.

4. Actually, none of these results could be replicated. Why and how were non-null effects detected in the first place? Answers using your designs from (2) are preferable.

Okay. Science is awful.

---

More general thoughts: This helped me notice that a simple empirical psychological claim like this shouldn’t, on its own, be used as evidence about anything. That pattern-matches to radical skepticism, but that’s not what I mean. I think I’m mostly saying context-free/theory-free claims are meaningless in psychology/sociology, or something like that.

And #1.

The only thing I can come up with is that the graph doesn’t prove causality in any particular way. (It did take me like 3 whole minutes to come up with noticing that correlation isn’t causation - I was primarily looking for things like axes labelled in unhelpful ways or something.) I can tell a story where these are uncorrelated and everyone is dumb. I can tell a story where decreasing wages is the *explanation* for why debt is growing - it was previously in equilibrium, but now is getting paid off much more slowly. I can tell a story of active prevention, whereby because wages are going down, the government is making students pay less and store more of it as debt so they still have a good quality of life immediately after college.

Again, I’m noticing how simple context-free/theory-free claims do not determine an interpretation.

While the post promised answers in the comments, there were no comments, neither on the post nor on the linked Washington Post article, so I'm not sure what the expected take-away was.

Replies from: whales
comment by whales · 2019-10-09T19:56:54.561Z · LW(p) · GW(p)

Hm, not sure what happened to the Washington Post comments. Sorry about that. Here's my guess as to what I was thinking:

The axes are comparing an average (median income) to a total (student loan debt). This is generally a recipe for uninformative comparisons. Worse, the average is both per person and per year. So by itself this tells you little about the debt burden shouldered by a typical member of a generation. For example, you could easily see growth in total debt while individual debt burden fell, depending on the growth in degrees awarded and the typical time to pay off debt. If you wanted to make claims about how debt burdens individuals, as the blurb does, you'd have to look at what's happening with the typical debt of recent graduates.
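(To make that concrete with some made-up numbers: if 20 million borrowers each owe $20,000, total student debt is $400 billion. If the number of borrowers then grows to 40 million while the typical individual balance falls to $15,000, the total rises to $600 billion even though each individual's burden shrank by a quarter. A rising total, on its own, tells you nothing about what a typical graduate owes.)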

But of course you can't stop there and say, "Ah, Peter Thiel is trying to mislead me, I'm going to disbelieve what I see as his point." Recent-graduate debt has been increasing, just not as much as the graph suggests. And maybe total student loan debt is a significant number in its own right?

(I don't know if I had intended the above as "the answer"; more likely, I just wanted people thinking about it more thoroughly than some of the commentary I had seen at the time. You also make good points.)

Thanks for trying these out. I don't think I ever heard in detail from anyone who did (beyond "this was neat"). If I were writing them today I'd be less coy about it.

comment by Jason Ken · 2019-09-17T20:20:03.479Z · LW(p) · GW(p)

I wouldn't really term this set 'exercises' so much as a continuous course of mental acuity development. Try to use the recommended exercise book as observational material. People's opinions differ, but clues are meant to be studied. You can check it out through this link:

https://www.lesswrong.com/posts/Pr2PcAqKRu9qKBCHd/principles-of-perception [LW · GW]

comment by [deleted] · 2019-09-08T01:56:10.932Z · LW(p) · GW(p)

I recently wrote about three things you can try with cards to see what your internal calibration feels like. They have some question prompts, but the gist of it is something to do, rather than something with a direct answer.

https://www.lesswrong.com/posts/Ktw9L67Nwq8o2kecP/calibrating-with-cards [LW · GW]

comment by romeostevensit · 2019-09-08T00:56:36.368Z · LW(p) · GW(p)

Not an explicit exercise, and likely mentioned or alluded to somewhere in the forecasting stuff, but remember that any time you are about to run any sort of outside view/reference class forecast, you have the opportunity to get some calibration by first trying to answer the question yourself based on what you know, including your confidence bounds on your model. When you subsequently look up the data and get any surprises, you can ask yourself why you are surprised, which helps you figure out what generators the experts have that you don't.

Since you can do this all the time (how many google searches do you do a day?), it gives you a lot more data than one-time exercises.
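(A made-up instance of the move: before googling "what fraction of US adults hold a passport?", write down your own 80% interval - say, 25-60% - then look it up; if the true figure lands outside your interval, ask what model the people who study this have that you don't.)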

It could use a clever anchor phrase for memory purposes. Open to suggestions.

comment by Shmi (shminux) · 2019-09-08T00:05:18.209Z · LW(p) · GW(p)

Not trying to win anything, but maybe my old post can be of some interest: https://www.lesswrong.com/posts/PDFJPxPope2aDtmpQ/a-simple-exercise-in-rationality-rephrase-an-objective [LW · GW]


Replies from: Benito
comment by Ben Pace (Benito) · 2019-09-11T00:40:57.047Z · LW(p) · GW(p)

I like it! If you had a bunch more worked examples and hid your answers behind spoiler tags or rot13, that'd be a solid submission.

comment by Matt Goldenberg (mr-hire) · 2019-09-11T00:37:07.005Z · LW(p) · GW(p)

When you say exercise, do you just mean "a thing you can do to practice a skill"?

It's not clear to me if you mean something more specific here.

Replies from: Benito
comment by Ben Pace (Benito) · 2019-09-11T00:38:54.644Z · LW(p) · GW(p)

Sure, that sounds right. In this case it's often practising using a concept. I am kinda hoping to define it extensionally by pointing to all the examples.

comment by abramdemski · 2019-09-13T03:33:29.135Z · LW(p) · GW(p)

By asking people to leave a comment here linking to their exercises, are you discouraging writing exercises directly as a comment to this post? (Perhaps you're wanting something longer, and so discouraging comments as the arena for listing exercises?)

Replies from: Benito
comment by Ben Pace (Benito) · 2019-09-13T04:15:17.919Z · LW(p) · GW(p)

I didn’t think about it much, just copied what I remembered the Alignment Prize as doing - but yes, writing them here is totally fine, probably the way many people would want to do it, and I’m not discouraging it at all :)

Though even with something like 3 exercises that build on each other, if they have answers, many folks might want to put their answers in comments (like on the fixed point exercises), and that would benefit from the exercises having their own posts.