7 traps that (we think) new alignment researchers often fall into
post by Akash (akash-wasil), Thomas Larsen (thomas-larsen) · 2022-09-27
We've noticed that new alignment researchers (and sometimes experienced alignment researchers) often fall into similar traps.
Here are 7 traps that we often see in new alignment researchers:
1. They think they need to be up-to-date on the literature before they can start contributing their own ideas.
Suggestion: Staying up to date on the literature is useful. But you don’t need to read everything before you contribute your own ideas. You can write naive hypotheses and try to generate new ideas. For many people, it’s easier to come up with (certain types of) new ideas before reading all of the literature. Once you’ve read the literature, it can be harder to ignore the ideas and frames of others. (See also Cached Thoughts.)
2. They end up pursuing proxy goals.
A. They end up starting projects (often projects that take 3+ months) without a clear theory of change. They forget the terminal goal (e.g., reducing x-risk) and end up pursuing a proxy goal that looks like it’s helping (e.g., skilling up in ML). To be clear, many proxy goals (like skilling up in ML) are not inherently bad. But they don’t have a sense of why they’re learning ML, which subproblems they’re hoping to solve with ML, and which specific subskills in ML (and other fields) might be helpful. They don’t have a sense of how long they should spend skilling up, what else they should be learning, or what evidence they should be looking for to tell them to stop (or keep going).
B. They lose sight of the terminal goal. The real goal is not to skill-up in ML. The real goal is not to replicate the results of a paper. The real goal is not even to “solve inner alignment.” The real goal is to not die & not lose the value of the far-future.
Suggestion: Keep the terminal goal in mind. Try to have a clear idea of how your actions are getting you closer to the terminal goal. Sometimes, you won’t have clear answers (and indeed if you rely on having a clear end-to-end impact story before doing anything, you may fall into the trap of doing nothing). But notice when you’re doing things that don’t have a clear path toward the terminal goal. Occasionally ask yourself if there are projects that seem useful but don’t actually matter. Be cautious about spending many months on projects unless they have a justifiable theory of change.
3. They assume that if they don’t understand something, it’s because they are dumb (as opposed to thinking that the writer explained it unclearly, that the writer doesn’t understand it, or that the claim is wrong).
Suggestion: If you don’t understand something, have some probability on “This thing actually makes sense and I simply don’t understand it.” But do not discard hypotheses like “this thing is poorly written”, “this thing is confusing”, “the author doesn’t even understand this thing fully yet”, or “this thing is wrong.” Ask people you respect if they understand the thing. Ask them to explain it to you. Ask them to explain any jargon they use. Ask them to explain it as if they were talking to an intelligent high school student. If there are ideas that consistently fail the “high school student” check, stay open to the possibility that the idea hasn’t been properly understood, explained, or justified. If you like writing, you can also post comments on LessWrong, but I usually find talking to be ~10x better because it is so much higher bandwidth.
4. They rarely challenge the ideas and frames of authority figures.
For example, people hear that there is “inner alignment and outer alignment” or they hear that “inner alignment is the most important problem”. And then they start trying to solve inner alignment and outer alignment. And they don’t realize that this is a model. This is not the truth. This is a model.
Suggestion: Remember that the things you read are claims and the frames you hear are models. Some of these frames are unhelpful. Some of these frames are helpful but suboptimal. Notice when you are relying on the same frames as others.
5. They don’t distinguish between “intuitions” and “hypotheses”.
If Alice and Bob disagree (e.g., about whether or not evolution analogies are useful), it’s easy for them to say “ah, we just have different intuitions about the problem” and then move on. This is unacceptable in other scientific fields and often serves as a curiosity-stopper.
Suggestion: Process intuitions through reasoning and logic to figure out whether they are fallacies or hypotheses. If you realize that you have “intuitions” about a topic, take that as an opportunity to examine those intuitions more closely. Where do they come from? Are there any hypotheses you can make based on them?
6. They end up working on a specific research agenda given by a senior researcher (e.g., Circuits or Infra-Bayesianism), without understanding why this is useful for solving alignment as a whole.
Ending up in this situation a) isn’t helpful for building up your inside views and b) makes your research harder, because you don’t understand the constraints on your solution.
Suggestion: Try to solve the whole alignment problem. In doing this, think about 1) the key barriers that all of your proposed solutions are running into, and 2) the tools that you have. These solutions (probably) won’t be good, but producing them is super useful for building inside views. A useful exercise is to build an ‘alignment game tree’, where you (and maybe a few friends) propose solutions to alignment, then break those solutions, then create patches, iteratively.
7. They spend too much time on the fundamental math and CS behind alignment (e.g., trying to complete all of MIRI's course recommendations or John Wentworth's study guide) or getting degrees in Math/CS.
These normally take multiple years, and yet I claim that you can get near the frontier of alignment knowledge in ~6 months to a year.
Suggestion: Definitely learn linear algebra, multivariable calculus (the differentiation part, integration doesn’t come up often), probability theory, and basic ML very well. Past that, I recommend learning things as they come up in the course of working on alignment.
While we believe these traps are common, and we want more researchers looking out for them, we also encourage you to consider the law of equal and opposite advice. For each of these traps, it is possible to fall too far in the opposite direction (e.g., never spending time learning relevant math/CS concepts).
Comments
comment by Richard_Ngo (ricraz) · 2022-10-04
All of these sound sensible. But for 1-6, when I reverse the advice, it sounds roughly equally sensible, and it feels very hard for me to know whether people are more often erring in one direction or the other. So I'm wary of believing the overall claim that these are major traps for new alignment researchers.
↑ comment by Thomas Larsen (thomas-larsen) · 2022-10-08
I have the intuition (maybe from applause lights) that if negating a point sounds obviously implausible, then the point is obviously true and it is therefore somewhat meaningless to claim it.
My idea in writing this was to identify some traps that I thought were non-obvious (some of which I think I fell into as a new alignment researcher).
↑ comment by Shoshannah Tekofsky (DarkSym) · 2022-10-08
What would the sensible reverse of number 5 be? I can generate them for 1-4 and 6, but I am unsure what the benefit could be of confusing intuitions with testable hypotheses.
↑ comment by Richard_Ngo (ricraz) · 2022-10-08
Reversal: when you have different intuitions about high-level questions, it's often not worth spending a lot of time debating them extensively; instead, move on to doing whatever research your intuitions imply will be valuable.
↑ comment by Shoshannah Tekofsky (DarkSym) · 2022-10-08
Ah, like that. Thank you for explaining. I wouldn't consider that a reversal, because you're then still converting intuitions into testable hypotheses. But the emphasis on discussion versus experimentation is indeed reversed.
comment by Jon Garcia · 2022-09-28
- Also, coming up with your own ideas first can help you better understand what you find in the literature. I've found that students learn more readily when they come to a subject with questions already in mind, having tried to figure things out on their own and realized where they had gaps in their mental framework, rather than just receiving a firehose of new information with no context.
- Perhaps try pursuing a number of proxy goals for short, pre-defined periods, while tracking whether each proxy goal is likely to be instrumental for reaching the terminal goal. Assessing the instrumentality of each proxy should be easier once you've started to get a sense of where each project can lead, and abandoning those that are clearly not going to be fruitful should be easier if you don't plan on going all-in from the start.
- Don't be afraid to ask stupid questions. We often tend to refrain from asking questions that we predict would cause those more experienced to perceive us as idiots. Ignore those predictions. Even when the answer is obvious to everyone else, it will help the writer practice clarifying their ideas from a new perspective, which could even help them understand their own work better. And sometimes everyone else is just afraid to look like idiots, too.
- Try steel-manning the best argument you can come up with against an authority's position. Ideas that can withstand the harshest scrutiny are those worth keeping. Ideas that can be destroyed by the truth should be. Help the intellectual community filter the chaff from the wheat.
- Good hypotheses always entail predictive models. If you can't program it, you don't really understand it.
- I can't think of anything else to add to this one.
- Also, don't wait until you've learned linear algebra, multivariable calculus, probability theory, and machine learning before starting to tackle the alignment problem. It's easier to learn these things once you already know where they will be useful to you. Plus, we may not have enough time to wait on mathematicians to come up with provable guarantees of AI safety.
comment by Stephen McAleese (stephen-mcaleese) · 2022-09-28
I really like this post because it's readable and informative. For the second problem, pursuing proxy goals, I recommend also reading about a related problem called the XY problem.
On point 4: many popular alignment ideas are not models of current systems, but models of future AI systems. Accuracy is then lost not only from modeling the system but also from having to create a prediction about it.
comment by FR_Max · 2022-09-29
Thank you for your post! Another trap that new alignment researchers often fall into is assuming that AI systems can be aligned with human values solely by optimizing for a single metric. Thanks again for the deep insight into the topic and the recommendations.
comment by Noosphere89 (sharmake-farah) · 2022-09-28
B. They lose sight of the terminal goal. The real goal is not to skill-up in ML. The real goal is not to replicate the results of a paper. The real goal is not even to “solve inner alignment.” The real goal is to not die & not lose the value of the far-future.
I'd argue that if they solved inner alignment totally, then the rest of the alignment problems becomes far easier if not trivial to solve.
↑ comment by Thane Ruthenis · 2022-09-28
But solving inner alignment may not be the easiest way to drive down P(doom), and not the best way for a given person specifically to drive down P(doom), so keeping your eyes on the prize and being ready to pivot to a better project is valuable even if your current project's success would save the world.