Training Better Rationalists?
post by Davis_Kingsley · 2021-08-05T11:13:16.743Z · LW · GW · 7 comments
This is a question post.
Contents
Answers: mike_hawke (9), ChristianKl (9), River (5), eg (4), Dagon (2)
7 comments
I was recently listening to a podcast discussion that included two people who had been involved in military special operations units -- one in the Navy SEALs and one in the US Army Special Forces. I was struck by their extremely high level of training, dedication, commitment, and overall ability -- but also by how this had in large part been squandered on fighting a destructive and unproductive war in Afghanistan, supporting questionable CIA operations, and so on.
It occurs to me that people in the rationalist community are at least notionally working on much more important causes, but with far less training, commitment, personal skill, etc.
This leads to my question -- what would it look like if similar levels of training effort, funding, selection, etc. went into preparing rationalists to do as much good in the world as possible as currently go into preparing elite military units? (I don't think this would necessarily look much like elite military training, to be clear!)
If this first exercise comes up with anything that seems promising -- are there ways we could 80/20 it and get most of the benefit without the high costs?
(nb: this post is just personal musings and not "official" on behalf of CFAR or any other organization.)
Answers
I think I would be a much better-trained rationalist if I did my basic rationality practices as regularly as I do physical exercise. The practices are:
- I keep a list of things I have changed my mind about. Everything from geopolitics to my personal life.
- I hardly ever go looking outside my echo chamber in search of things that could challenge or correct my beliefs, because I find it very effortful & unpleasant. (You know what else is effortful? Pushups.)
- I sometimes write letters to my future or past selves. I tried giving comprehensive life advice to my 16-year-old self, and ended up learning a lot about advice and spurious counterfactuals...
- I sometimes do the Line of Retreat [LW · GW] negative visualization. For immediate things, I tend to do it out loud while on a walk. For political beliefs, I slowly add to a private document over time and occasionally review it.
- I maintain a list of my disagreements with various public thinkers. Helps me separate tribal thinking from truth-seeking.
- I made an Anki deck for memorizing my defensive epistemology heuristics: "is this explainable by selection effects?", Proving Too Much, "is this claim consistent with their previous claim?", Reversal Test, etc. (A sketch of building such a deck programmatically follows at the end of this answer.)
- I notice I'm at a point where I can make surprisingly good fermi estimates if I spend a couple minutes thinking really hard, usually not otherwise. Feels like there's room for improvement.
- Hard to practice with regularity, but these days I try to restrain myself from jumping into an in-progress debate when I overhear one, and instead sit on the sidelines and patiently wait for openings to point out (double) cruxes [? · GW].
- Prompt myself to answer, "what would a slightly improved version of me do in this situation? What would I think if I were more rested and better hydrated?" It's embarrassing how much mileage I have gotten out of role-playing as myself.
- Privately journaling about my internal conflicts or difficult feelings. Simple but underpracticed (much like sit-ups).
- I wrote down a page of predictions about the state of crypto tech in 2031, aiming for maximum specificity & minimal future embarrassment. Similar for Twitter in this post [LW · GW]. I guess I might call this "conscientious futurism" or just "sticking my neck out".
- Pro/Con lists. They're effortful & time-intensive. But so is half-focused vacillating, which is what I do by default.
So yeah, those are my rationality exercises, and I really wish I practiced them more regularly. It's not exactly high-level SEAL-inspired training, and it's pretty hard to verify [? · GW], but...it feels like it makes me more rational.
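For anyone who wants to try the Anki-deck idea above, here is a minimal sketch using the genanki Python library. The deck and model IDs are arbitrary constants, and the prompt/heuristic pairings are hypothetical illustrations built from the heuristics listed above, not the contents of the original deck.

```python
# Minimal, illustrative sketch: generating an "epistemology heuristics" deck with
# genanki (a Python library for building Anki decks). IDs are arbitrary constants;
# the prompt/heuristic pairings are hypothetical examples, not the original deck.
import genanki

model = genanki.Model(
    1607392319,  # arbitrary model ID
    'Epistemology Heuristic',
    fields=[{'name': 'Prompt'}, {'name': 'Heuristic'}],
    templates=[{
        'name': 'Card 1',
        'qfmt': '{{Prompt}}',
        'afmt': '{{FrontSide}}<hr id="answer">{{Heuristic}}',
    }],
)

deck = genanki.Deck(2059400110, 'Defensive Epistemology')  # arbitrary deck ID

heuristics = [
    ('A surprising pattern shows up in a dataset', 'Is this explainable by selection effects?'),
    ('An argument seems completely airtight', 'Does it prove too much?'),
    ('A pundit makes a confident claim', 'Is this claim consistent with their previous claims?'),
    ('A status-quo option feels obviously right', 'Apply the Reversal Test.'),
]
for prompt, heuristic in heuristics:
    deck.add_note(genanki.Note(model=model, fields=[prompt, heuristic]))

genanki.Package(deck).write_to_file('defensive_epistemology.apkg')
```

Running the script produces defensive_epistemology.apkg, which can be imported directly into Anki.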
The Navy SEALs not only spend a lot of training effort, funding, and selection on training individuals; they also spend a good portion on research into how to train.
One aspect of researching how to train an ability is having a way to measure progress. I think the Navy SEALs put their trainees through tests at the end of training to evaluate whether to grant them full SEAL status, so the training can likely be focused on improving against clear metrics.
If we had clear metrics by which we could measure progress in training rationality, we could put effort into maximizing those.
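As one purely illustrative example of what such a metric could look like (not one this answer specifies), forecasting calibration can be scored with the Brier rule, and tracking that score across a training program is straightforward. A minimal sketch in Python:

```python
# Illustrative sketch of one measurable rationality metric: forecasting calibration
# scored with the Brier rule. Forecasts are probabilities in [0, 1]; outcomes are 0 or 1.
# Lower scores are better, so improvement over a training program is easy to track.

def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and actual outcomes."""
    assert len(forecasts) == len(outcomes) and forecasts
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical month of resolved predictions from one trainee.
predictions = [0.9, 0.7, 0.6, 0.95, 0.3]
results     = [1,   1,   0,   1,    0]

print(f"Brier score: {brier_score(predictions, results):.3f}")
# Always guessing 0.5 scores 0.25; a perfect, maximally confident forecaster scores 0.0.
```

Calibration is only one candidate metric, but anything that can be scored like this is something a training program can explicitly optimize.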
This may be a result of selection - the military is a couple of orders of magnitude bigger than the rationalist community, and you heard from the best of the best that they have.
↑ comment by Davis_Kingsley · 2021-08-05T14:56:10.820Z · LW(p) · GW(p)
True, but the mechanisms that cause people to want to join the military (and elite military units in particular) are in my view in scope for this discussion. What would it look like for the rationalist community to be a thing that many intelligent, highly motivated people aspire to join?
My impression is that SEALs are exceptional as a team, much less so individually. Their main individual skill is extreme team-mindedness.
Maybe if we can identify an enemy who's going to shoot at us, we can select and instill that level of commitment. I suspect it comes from a pre-rational part of human motivation, and is not available to the vast majority of rationalists.
↑ comment by jeronimo196 · 2021-08-20T00:12:23.932Z · LW(p) · GW(p)
After the training begins, something like 80% of the recruits drop out during Hell Week. SEALs are selected for their motivation, which is not available to everyone headed for a warzone.
On the other hand, if you'd really like an existential threat to get you going, you may consider looking into the problem of goal alignment in AGI, or aging.
7 comments
Comments sorted by top scores.
comment by mingyuan · 2021-08-05T17:35:10.443Z · LW(p) · GW(p)
I listened to David Goggins' account of Navy SEAL training last year. They encourage you to push yourself so hard that you are at genuine risk of death or permanent disability. The first two times Goggins tried to get through he failed out because of injuries, even though he was willing to — and did — run many miles on literally broken legs. He only made it through the third time because hell week got cut short due to someone in their cohort DYING (from participating in some kind of swimming exercise while very sick with pneumonia).
I actually found the book incredibly inspiring, though it did not make me think anyone should model themselves after the Navy SEALs in particular. I also don't think someone should run 100 miles in 24 hours with zero training and continue despite the fact that their legs are breaking and they're shitting and pissing blood while they run, which is another thing that Goggins did.
One training exercise in the book that seemed more reasonable to me (more like an exercise and less like abject torture) was an orienteering-type thing (for, I think, the Army Rangers?), where the terrain was treacherous and unfamiliar and the weather dangerously cold at night. I think it's a good test of rationality to put yourself in a genuinely high-stakes situation like that, as long as one of the choices you're allowed to make is to call for help if you genuinely fear for your life. That was an option in the Rangers' orienteering challenge; my point is that what's bad about SEAL hell week is that you're considered a pussy if you quit, even if it's out of genuine and reasonable fear for your life.
The book overall is about the idea that your limits are fake, and humans can accomplish things that seem like they should be physically impossible as long as they just don't give up. I think that's a concept we could work with.
I think there are quite a few rationalists who challenge themselves to do fairly hard things, like founding a successful startup, putting together a large conference on short notice at the age of 18, or publishing a good post on rationality every day for a month, things kind of like that. I think I've challenged myself a lot more than I would have if I weren't in the rationalist community, but I don't think I've ever tried to do something that I felt was impossible. (I think a precious few rationalists have faced the impossible — probably Holden and Eliezer, to name any at all — but they're very much the exception rather than the rule.)
Here are some things that feel impossible:
- Write something as groundbreaking as the sequences, starting in one week (that's your planning period) and posting every day for at least a year
- Cause the public collective consciousness and ~all significant intellectuals in the US to take x-risk (and especially AI x-risk) seriously, within the year
- Note that I very much do not suggest that people throw themselves at this task!
- Make a novel discovery in particle physics (or a similar well-established field that you've never studied before), within six months
- Without piggybacking on any existing space exploration project, put a spacecraft of your own design / owned by you on the moon within five years
- Found a new country that gets recognized by the UN
And here are some things where I can see a path to accomplishing them, but where that path feels incredibly hard and scary — these examples are specific to me:
- Become fluent in Mandarin, both speaking/listening AND reading/writing, in the next three months
- I have a lifetime of failure to learn Mandarin behind me, including one academic year when I really actually tried, also Mandarin is just really fucking hard
- Run a marathon within the next year
- I have a chronic leg injury that makes running essentially impossible, that feels insurmountable but probably in reality is not
- Make a million dollars in the next six months just via investing/betting
- I am a very risk-averse person and was raised to fear the stock market
- Permanently fix my depression and anxiety
- It's probably not impossible but jeeeeeeeeeeeeeeezzzz
- Found and run a company, like, one with actual employees and investors and a goal of growth (not just a one-person LLC, that's cheating)
- This just sounds awful in every way; I hate dealing with people and money and feel super unqualified for all of this
Again, these will be different for different people. I think Eliezer's quest to lose weight qualifies somewhere around here. I think things in this class are probably better candidates for serious rationality training exercises than the first list, though maybe that's wrong.
Anyway the goal is not to teach object-level skills, but to cause people to change their outlook on tasks that seem impossible. I think that's one really important skill for rationalists/EAs to have, though not the only important skill. In any given quest you will probably learn additional useful object-level skills.
So idk those are some thoughts on one aspect of the thing. Didn't properly feel like an answer so here it is as a comment instead.
Replies from: lsusr, ChristianKl
↑ comment by lsusr · 2021-08-05T21:52:07.355Z · LW(p) · GW(p)
Become fluent in Mandarin, both speaking/listening AND reading/writing, in the next three months
I have a lifetime of failure to learn Mandarin behind me, including one academic year when I really actually tried, also Mandarin is just really fucking hard
I wrote software that's designed for this specific application. It's basically homebrew Anki with the brakes removed, hooked up to a tokenizer, a dictionary, machine translation, and a text-to-speech API. The system is unpolished, but it is in a usable state. (I use it every day.) The whole thing is a web app, so it requires no technical knowledge to use. I'm looking for beta users in case anyone wants to try something "incredibly hard".
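As a rough sketch of what the core of such a pipeline might look like (purely illustrative; the actual tool isn't shown in this thread), the example below assumes the jieba tokenizer and a toy in-memory dictionary, with the machine-translation and text-to-speech steps left as notes:

```python
# Illustrative sketch only: tokenize a Mandarin sentence, gloss each unknown word,
# and emit flashcards for aggressive spaced repetition ("Anki with the brakes removed").
# jieba is a real Chinese tokenizer; the dictionary and card format here are toy stand-ins.
import jieba  # pip install jieba

# Toy dictionary; a real system would use something like CC-CEDICT.
DICTIONARY = {
    "我": "I / me",
    "喜欢": "to like",
    "学习": "to study",
    "中文": "Chinese (language)",
}

def study_sentence(sentence, known_words):
    """Segment a sentence into words and return gloss cards for the unknown ones."""
    cards = []
    for word in jieba.cut(sentence):
        if word not in known_words:
            gloss = DICTIONARY.get(word, "(fall back to machine translation here)")
            cards.append({"front": word, "back": gloss, "audio": None})  # audio: TTS API call
    return cards

for card in study_sentence("我喜欢学习中文", known_words={"我"}):
    print(card["front"], "->", card["back"])
```

A full system would also send the sentence to a machine-translation API for a complete gloss, attach text-to-speech audio to each card, and feed the cards into a scheduler that (presumably the "brakes removed" part) doesn't cap how many new cards you see per day.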
Replies from: MSRayne, brendan.furneaux
↑ comment by brendan.furneaux · 2021-08-07T04:36:08.347Z · LW(p) · GW(p)
Specifically for Mandarin, or other languages as well?
Replies from: lsusr
↑ comment by lsusr · 2021-08-07T04:51:10.853Z · LW(p) · GW(p)
Specifically for Mandarin, but I can add additional major languages just by writing a tokenizer for them. I'm working on a new system built around GPT-3 that I hope to launch August 14th. The new system should be able to support any major language right out of the box. (I don't know if I can meet this ship date. The schedule is extremely ambitious. Moreover, OpenAI might reject the use case on the grounds it is too free-form.) It'll also be orders of magnitude more expensive to use. Right now, I'm estimating $6 per hour.
↑ comment by ChristianKl · 2021-08-05T18:14:09.890Z · LW(p) · GW(p)
Found a new country that gets recognized by the UN
Given the crypto technology currently available, I have the impression that there's a window right now for funding states, but I'm uncertain whether talking about the how-to is a good idea, given that it possibly gives AGIs more power.
comment by Gunnar_Zarncke · 2021-08-05T22:30:54.692Z · LW(p) · GW(p)
Related: On the Loss and Preservation of Knowledge [LW · GW]