If there were an interactive software teaching Yudkowskian rationality, what concepts would you want to see it teach?

post by MikkW (mikkel-wilson) · 2020-09-02T05:37:08.758Z · LW · GW · 7 comments

This is a question post.

Contents

  Answers
    7 Raj Thimmiah
    3 algon33
    3 AnthonyC
    1 rain8dome9
    1 Raj Thimmiah
    1 Raj Thimmiah
7 comments

I've been noticing some complaints (such as this post by Richard Ngo [LW(p) · GW(p)]) lately about the quality of the modern LW community's contribution to the big picture of humanity's knowledge.

Ideally, if it were the case that reading something automatically made you internalize deeply everything it said, then just by having a group of people who have read The Sequences, you'd have a superteam of intellectuals. And while I do think LW is a pretty cool group of smart thinkers, that isn't fully the case: just reading The Sequences isn't enough. To really internalize the lessons, one must apply the principles, push against the problem, and see where one's understanding needs improvement and where it is good enough.

The simplest form of this is having a high-quality Anki deck that tests users on the principles, both by testing recall of the stated principle itself and, even more importantly, by giving them test cases where they can apply the principles (in the same vein as Ankifying medium-difficulty multiplication problems). I have seen some rationality-themed Anki decks, but many of the cards are poorly formatted (both aesthetically and in terms of learnability) and poorly curated. Ideally, if there were to be an Anki deck, it would be well formatted, and the cards would be carefully chosen to maximize the quality of the information.
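To make the two card types concrete, here is a minimal sketch; the card text is hypothetical and chosen purely for illustration:

```python
# A minimal sketch of the two card types described above; the card
# text is hypothetical, for illustration only.

recall_card = {
    "front": "What is the conjunction fallacy?",
    "back": "Judging a conjunction (A and B) as more probable than "
            "one of its conjuncts (A) alone.",
}

application_card = {
    # An application card presents a concrete case to judge,
    # rather than asking for a definition.
    "front": "Which is more probable: 'Linda is a bank teller' or "
             "'Linda is a bank teller and is active in the feminist "
             "movement'?",
    "back": "'Linda is a bank teller'; a conjunction can never be "
            "more probable than either of its conjuncts.",
}
```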

Another idea that I've been thinking about is making explorables, à la Nicky Case, that would introduce important rationality concepts. This would have the advantage of providing more flexibility in experience than Anki, but would sacrifice the benefit of Anki's already-implemented SRS.
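For a sense of what "already-implemented SRS" buys you, here is a minimal sketch of SM-2-style scheduling, the family of algorithms Anki's scheduler descends from. This is simplified; real schedulers add learning steps, lapse handling, and interval fuzzing:

```python
def sm2_review(interval_days: int, ease: float, grade: int) -> tuple[int, float]:
    """One review of a simplified SM-2 scheduler.

    grade: self-rated recall quality from 0 to 5 (>= 3 is a pass).
    Returns (next interval in days, updated ease factor).
    """
    if grade < 3:
        return 1, ease  # lapse: show the card again tomorrow
    # Ease update from the original SM-2 formula, floored at 1.3.
    ease = max(1.3, ease + 0.1 - (5 - grade) * (0.08 + (5 - grade) * 0.02))
    if interval_days == 0:   # first successful review
        return 1, ease
    if interval_days == 1:   # second successful review
        return 6, ease
    return round(interval_days * ease), ease

# e.g. a mature card reviewed with grade 4:
# sm2_review(30, 2.5, 4) -> (75, 2.5)
```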

My question is: if there were to be either an Anki deck or an explorable teaching concepts from The Sequences, targeted primarily as an aid for current LW users, but also as an introduction aimed at the public at large, what concepts from The Sequences would you most want to see covered?

Answers

answer by Raj Thimmiah · 2020-09-04T08:32:00.343Z · LW(p) · GW(p)

On seeing the title of this post again, I'm reminded of an obvious answer: teach people how to decide what to learn for themselves. Sort of like the give-a-man-a-fish vs. teach-a-man-to-fish thing.

I don't think there's a more useful meta thing to learn since that's what you need to figure out everything else for yourself.

answer by algon33 · 2020-09-03T18:36:46.053Z · LW(p) · GW(p)

Having an Anki deck is kind of useless in my view, as engaging with the ideas is not the path of least resistance. There's a tendency to just go "oh, that's useful" and do nothing with it, because Anki/SuperMemo are about memorisation. Using them for learning, or creating, is possible with the right mental habits. But for an irrational person, those habits are exactly what you want to instill! No, you need a system which fundamentally encourages those good habits.

Which is why I'm bearish about including cards that tell you to drill certain topics in Anki, since the act of drilling is itself a good mental habit that many lack. Something like a curated selection of problems that require a certain aspect of rationality, spaced out to aid retention, would be a good start.

Unfortunately, there's a trade-off between making the drills thorough and reducing overhead on the designer's part. If you're thinking about an empirically excellent, "no cut corners" implementation of teaching total newbs mental models, I'd suggest DARPA's Digital Tutor [LW · GW]. As to how you'd replicate such a thing, the field of research described here seems a good place to start.

comment by Raj Thimmiah (raj-thimmiah) · 2020-09-03T20:24:54.911Z · LW(p) · GW(p)

Could you rewrite some of the first paragraph? I read it 2-3 times and was still kind of confused.

Funny you linked commoncog, was about to link that too. Great blog.

Replies from: algon33
comment by algon33 · 2020-09-03T21:20:35.481Z · LW(p) · GW(p)

Here's the re-written version, and thanks for the feedback.

Having an Anki deck is kind of useless in my view. When you encounter a new idea, you need to integrate it with the rest of your thoughts. For a technique, you must integrate it with your unconscious. But often, there's a tendency to just go "oh, that's useful" and do nothing with it. Putting it into spaced repetition software to view later won't accomplish anything, since you're basically memorising the teacher's password. Now suppose you take the idea, think about it for a bit, and maybe put it into your own words. Better, both in terms of results and in terms of using Anki as you're supposed to.

But there are two issues here. One, you haven't integrated the Anki cards with the rest of your thoughts. Two, Anki is not designed such that the act of integrating is the natural thing to do. Just memorising is the path of least resistance, which a person with poor instrumental rationality will take. So the problem with using Anki for proper learning is that you are trying to teach instrumental rationality via a method that requires instrumental rationality. Note it's even worse for teaching good research and creative habits, which require yet more instrumental rationality. No, you need a system which fundamentally encourages those good habits. Incremental reading is a little better, if you already have good reading habits which you can use to bootstrap your way to other forms of instrumental rationality.

Now go to paragraph two of the original comment.

P.S. Just be thankful you didn't read the first draft.

Replies from: raj-thimmiah
comment by Raj Thimmiah (raj-thimmiah) · 2020-09-04T08:30:17.304Z · LW(p) · GW(p)

Haha, thanks for the rewrite, makes much more sense now.



Completely agree: too easy to cram mindlessly with Anki, I think in large part because of how much work it takes to make cards yourself.

I'm a bit skeptical of the drilling idea, because cards taking more than 5 seconds to complete tend to become leeches and aren't the kind of thing you could do long-term, especially with Anki's algorithm. Still worth trying, though; I'd be interested to hear if you or anyone you know has gotten much benefit from it.
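(Aside, for readers unfamiliar with the term: a rough sketch of Anki's leech handling, not its real source. By default a card is tagged "leech" and suspended once it has lapsed 8 times; both the threshold and the action are configurable.)

```python
LEECH_THRESHOLD = 8  # Anki's default "leech threshold" (configurable)

def record_review(card: dict, passed: bool) -> None:
    """Rough sketch of Anki's leech handling."""
    if passed:
        return
    card["lapses"] = card.get("lapses", 0) + 1
    if card["lapses"] >= LEECH_THRESHOLD:
        card.setdefault("tags", set()).add("leech")
        card["suspended"] = True  # Anki's default leech action
```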

On the thoroughness vs. designer-overhead trade-off, I think all the options with Anki kind of suck (mainly because I don't think they would work for my level of conscientiousness, at least).

If end users make their own cards, they'll give up (or at least most people will, I think; it's not very fun making cards from scratch).

If you design something for end users (possibly with some of the commoncog tacit knowledge stuff), I think it's somewhat beneficial, but you wouldn't get the same coherence boost as making the cards yourself. It's too easy to learn cards without actually integrating them usably. It also seems like a pain to make.

For declarative knowledge, I think the best balance for learning is curating content really well for incremental reading, alongside (very importantly) either coaching* or more material on the meta-skills of knowledge selection, to prevent people from FOMO-memorizing everything. I think with SuperMemo it wouldn't be hard to make a collection of good material for people to go through in a sane, inferential-distance order. Still a fair bit of work for the makers, but not hellish.


I'm very, very, very curious about the tacit knowledge stuff. I still haven't gotten through all of the commoncog articles on tacit knowledge, though I've been going through them for a while, but in terms of instrumental rationality they seem very pragmatic. (I particularly enjoyed his criticism of rationalists in Chinese Businessmen: Superstition Doesn't Count [by which he means, superstition doesn't mess much with instrumentality].) I still have yet to figure out how to put any of it to use.



*While teaching people how to do IR, I've found that direct feedback while people are trying it works well. It took me ages to be any good at IR (5 months to even start after buying SuperMemo, and then another ~3 to be sort of proficient), while I can get someone to the proficiency it took me 1-2 months to reach in a single ~2-hour session. It works wonders in areas where you can do lots of trial and error with quick feedback.

comment by MikkW (mikkel-wilson) · 2020-09-04T16:36:08.235Z · LW(p) · GW(p)

I think one crux between us is the degree to which "memory is the foundation of cognition", as Michael Nielsen once put it. Coming from the perspective that this is true, it seems to me that a natural consequence of a person memorizing even a simple sentence, and maintaining that memory with SRS, is that the sentence needs to be compressed in the mind to ensure that it has high stability, and can be recalled even after having not been used for many months, or even years.

In order to achieve this compression, it is inevitable that the ideas represented by the sentence will become internalized and integrated deeply with other parts of the mind, which is exactly what is desired. This process is a fundamental part of how the human mind works, and applies even in the mind of a person with low "rationality".
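As a hedged aside (my formalization, borrowed from the SuperMemo literature rather than from anything above): one way to make "stability" precise is the exponential forgetting curve.

```latex
% Retrievability R after time t, for a memory with stability S:
R(t) = e^{-t/S}
```

Each successful, well-spaced review increases S, which is why a compressed, well-integrated memory can still be recalled after months without use.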

Replies from: algon33
comment by algon33 · 2020-09-04T17:06:57.375Z · LW(p) · GW(p)

Recall that memories are pathway-dependent, i.e. you can remember an "idea" when given verbal cues but not visual ones, or when given cues in the form of "complete this sentence" but not "complete this totally different sentence expressing the same concept". If you memorise a sentence and can recall it in any relevant context, I'd say you've basically learnt it. But just putting it into SRS on its own won't do that. Like, that's why SuperMemo has such a long list of rules and heuristics on how to use SRS effectively.

Replies from: raj-thimmiah
comment by Raj Thimmiah (raj-thimmiah) · 2020-09-04T22:29:59.415Z · LW(p) · GW(p)

Agreed on this; memory coherence is pretty important. Cramming leads to results sort of like how you can't combine the trig you learned in high school with some physics knowledge: there aren't good connections between the subjects, leaving them relatively siloed.

It requires both effort and actually wanting to learn a thing for the thing to integrate well. We tend to easily forget the things we don't care about (see: school knowledge).

answer by AnthonyC · 2020-09-02T16:31:58.280Z · LW(p) · GW(p)

Like you said, reading isn't enough. I think two of the key challenges for such software would be limiting inferential distance for any particular user, and giving practice examples/problems that they actually care about. That's much easier with a skilled mentor than with software, but I suspect it would be very helpful to have many different types of contexts and framings for whatever you try to have such software teach.

In my first-semester college physics class, the first homework set was all Fermi problems, just training us to make plausible assumptions and see where they lead. Things like "How many words are there in all the books in the main campus library?" or "How many feathers are there on all the birds in the world?" Even though this was years before the sequences were even written, let alone when I read them, it definitely helped me learn to think more expansively about what kinds of things count as "evidence" and how to use them. It also encourages playfulness with ideas, and counters the sense of learned helplessness a lot of us develop about knowledge in the course of our formal schooling.
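As a hedged illustration of the genre, with made-up round numbers rather than data: a large university library might hold about a million volumes at roughly 300 pages and 300 words per page.

```python
# A hypothetical Fermi estimate for the library question; every
# number below is a rough assumption, not data.
books = 1_000_000       # volumes in a large university library
pages_per_book = 300
words_per_page = 300
total_words = books * pages_per_book * words_per_page
print(f"~{total_words:.0e} words")  # ~9e+10
```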

Actually, beyond specific skills, it might be helpful to think about trying to foster the 12 virtues. Not just exercises, but anecdotes to motivate and show what's possible in interesting and real contexts, games that are fun to experiment with, things like that.

comment by Raj Thimmiah (raj-thimmiah) · 2020-09-03T20:23:14.025Z · LW(p) · GW(p)

Inferential-distance-based knowledge systems would be super cool. There are lots of stats ideas I'd like to engage with, but ordering is too much of a pain.

The mentor thing is also true, I think, for math in particular. Math and physics are the only subjects I'd hesitate to just learn by myself.

answer by rain8dome9 · 2021-03-23T17:55:53.225Z · LW(p) · GW(p)

A Less Wrong deck exists now, though it seems incomplete, missing things like Inferential Distance.

answer by Raj Thimmiah · 2020-09-03T09:31:06.810Z · LW(p) · GW(p)

Aside from memorizing declarative knowledge, the question of how to acquire tacit knowledge is very interesting.

I don't have any current great ideas (other than adding in Hammertime-style practical tests), but I think commoncog's blog is very interesting, especially the stuff about naturalistic decision making: https://commoncog.com/blog/the-tacit-knowledge-series/ (can't link more specifically, on mobile).

answer by Raj Thimmiah · 2020-09-03T09:25:42.695Z · LW(p) · GW(p)

An Anki deck is a bad idea because, as you said: a. formulation, b. poor coherence (when you're stuffing things other people thought were cool into your brain, they won't connect with other things in your brain as well as if you'd made the deck yourself).

I think incremental reading with SuperMemo is a decent option. I've taught a few rat-adjacent people SuperMemo, and the ones that have spent time on the Sequences inside it have said it's useful. I'm not sure how to summarize it well, but basically: Anki lets you memorize stuff algorithmically, while incremental reading lets you learn (algorithmically) and then memorize.
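For anyone unfamiliar, here is a very rough sketch of the incremental reading loop (my simplification, not SuperMemo's actual implementation): articles, extracts, and cloze cards all sit in one prioritized queue, and reading, extracting, and clozing happen inside the scheduler.

```python
import heapq

# One prioritized queue holds articles, extracts, and cloze cards alike.
queue = []  # entries are (priority, kind, text); lower priority = sooner

def add(priority, kind, text):
    heapq.heappush(queue, (priority, kind, text))

def step():
    """Process one item, roughly as an incremental reader would."""
    priority, kind, text = heapq.heappop(queue)
    if kind == "article":
        # Read a portion: extract the interesting fragment and
        # push the remainder back for a later pass.
        add(priority * 0.9, "extract", text[:80])
        add(priority * 1.5, "article", text[80:])
    elif kind == "extract":
        # Extracts eventually become cloze deletions, reviewed
        # like ordinary SRS cards.
        add(priority * 0.9, "cloze", text)
    # "cloze" items would be handed off to the SRS scheduler.

add(1.0, "article", "Full text of a Sequences post would go here ...")
step()  # one pass: yields an extract plus the rescheduled remainder
```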

I'd be surprised if, after say a year of using IR on the Sequences, you weren't at least a fair bit more instrumental.

(If you want to give it a try I’ll gladly teach you. I don’t think there’s any more efficient way to process declarative information)

7 comments

Comments sorted by top scores.

comment by rsaarelm · 2020-09-03T09:13:59.362Z · LW(p) · GW(p)

We're already drowning in inert content; I don't see how adding more would help. We've had a way to get something like the martial art of rationality [LW · GW] since ancient Athens: structured interaction with an actual human mentor who knows how to engage with the surrounding world and can teach and train other people face-to-face. This thing isn't mechanizable the way arithmetic or algebra is, so simple interactive programs are not going to be much better than just a regular book. Nor is it a non-mechanizable but still clearly delimited topic like wood-carving or playing tennis, where you can at least say you're unquestionably doing the thing when going it alone, even though you might do better with some professional training. What you're trying to teach is the human ability to observe an unexpected situation, make sense of it, and respond sensibly to it at a level above baseline adult competency, and the one way we know how to teach that is to have someone competent in the thing you're trying to learn whom you can interact with.

Like, yeah, maybe this will help, but I can't help but feel that people are compulsively eating ice [LW · GW] and this is planning an ice-shavings machine for your kitchen instead of getting an appointment to have your blood work done.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-09-03T17:54:18.734Z · LW(p) · GW(p)

While I agree with you that face-to-face interaction with a skilled mentor is the most effective way to learn complex skills such as rationality, that will always be limited by the supply of humans who are sufficiently skilled in the art, are sufficiently good teachers, and also have nothing better to do with their time.

So we really shouldn't look at this as either/or. We should, on the one hand, make sure there's a good supply of the best opportunity possible (face-to-face time with skilled mentors), but for the vast majority of learners, for whom skilled human guidance isn't feasible, we also need to provide the highest-quality content that can easily be scaled. There are flaws I see in the current best scalable solution (primarily stemming from a lack of interactivity), and I'm currently in a better position to attempt to address that issue than to improve the availability of human mentors.

comment by Raghu Veer S (raghu-veer-s) · 2020-09-02T06:11:16.889Z · LW(p) · GW(p)

Have you used Syntorial, the synth-learning/tutoring software? I think it makes great use of adaptive interactivity (learning), which I feel tools like Brilliant or explorable explanations, although great in terms of UX, lack severely. In fact, I have also found Syntorial to be very effective with memory-related things like remembering patches. I think it has that neat quality of helping with both learning/doing and remembering what you learn. Maybe you could look into it for some inspiration.

Replies from: mikkel-wilson
comment by MikkW (mikkel-wilson) · 2020-09-04T16:50:05.207Z · LW(p) · GW(p)

I agree that Syntorial has better interactivity than many of the "explorables" that have become popular lately, and I agree that high interactivity is vital for maximizing learning.

As for the actual implementation of Syntorial, beyond the fact that it succeeds at having high interactivity, I find that the user experience lacks flow and is fairly unengaging. In particular, the videos slow down the pace; I generally want to skip them but often don't, because I worry about missing important information. That is something I hope to do better at in any software I may produce. I think the game Exapunks, while the system it teaches is a fictional one made up for the game, is a good example of a fairly high-flow, high-interactivity way of teaching skills.

I also think of the edutainment games I played as a kid. It's hard for me to highlight which ones I think are particularly good, since I haven't used them in a very long time, but I know they did a good job of using interactivity to force me to understand the concepts they taught. And I played them voluntarily, so they must have had at least decent flow.

comment by mingyuan · 2020-09-03T02:20:21.482Z · LW(p) · GW(p)

This isn't a direct answer, but seems related.

I've been thinking about how I came to learn the concepts around here, and I realized that the most helpful thing was probably the seminars at Open Phil. I think MOOCs which people work through together (like on some sort of schedule, maybe in groups coordinated through LW) would replicate that experience fairly well. I was specifically thinking of an intro to AI risk course - because even though I'd been in the community for years at that point and had read all of the standard intros (including Superintelligence), I didn't really internalize the arguments or have an understanding of what the field of AI safety looked like until that seminar.

Making a MOOC seems like a lot of work, but there's already a lot of good content out there - obviously there's tons of writing on the topic, and for 'lectures' some of Rob Miles' stuff could probably work (with his consent). So the major remaining hurdle after corralling all of that material into a manageable syllabus would be developing discussion questions and/or setting up small discussion groups that would meet regularly over Zoom.

In any case, I think a lot of people have noticed this problem - one of the main things I hear when I ask people what they'd like to see from LW is some sort of more structured learning thing - and there have already been lots of attempts to solve it, e.g. by developing teaching modules for local groups, writing stuff on the LW wiki, inventing Arbital, etc. etc. etc. (Notably all of these projects have basically been abandoned.) Maybe it's just fine to have a lot of people throwing themselves at the problem from different angles, I'm not sure. I'd love a bigger discussion on this topic.

Replies from: Pattern
comment by Pattern · 2020-09-03T03:02:33.892Z · LW(p) · GW(p)

What's a MOOC? (And do you have any good/representative examples?)

Replies from: Kaj_Sotala