12 career-related questions that may (or may not) be helpful for people interested in alignment research

post by Akash (akash-wasil) · 2022-12-12

Contents

  On Plans
  On getting feedback
  Misc

Epistemic status: Some people tell me that these kinds of questions are sometimes helpful.

At EAGxBerkeley, people interested in alignment research often asked me for career advice. I am not a technical alignment researcher, but some people say they find my style helpful (my “style” is usually heavy on open-ended questions, reflecting things back at people, and noticing things that people aren’t considering).

I noticed a few questions that came up frequently in my 1-1s. Here are some examples:

On Plans

  1. What are your transformative AI timelines, and to what extent do your plans currently make sense given those timelines?
    1. Example: If you put a ~50% chance of TAI within the next 10-15 years, does your current plan let you contribute in time? If your current plan involves gaining credentials for 5-10 years, how much time does that leave you to contribute? Have you considered alternatives that involve spending a greater proportion of remaining time on direct work? 
  2. Forget about your current plan. Think about the goal you’re trying to achieve. If you work backwards from the goal, what are the most efficient ways to achieve it? What steps will actually make you likely to achieve your goal?
  3. Is there any way you can replace your current plan with one that’s 10X more ambitious? What would a desirable tail outcome look like?
  4. Is there any way you can take your current plan and achieve it 10X faster?
  5. Are there any options outside of the “default path” that you might be neglecting to consider? Are there any options that are unconventional or atypical that you might want to consider more carefully? (e.g., taking a gap year, traveling to AIS hubs)
    1. Note of course that sometimes the “default path” is actually the best option. I ask this question because I think people generally benefit from carefully considering their options, and “weird” options are often the ones people have most neglected to consider.
  6. Have you considered potential downsides of your current plan? Who might be able to help you identify more of them? How might you mitigate them?

On getting feedback

  1. Who would you ideally get mentorship from?
    1. Could you reach out to them and ask, “Is there anything I can do (or produce) to help you evaluate whether or not I’d be a good fit for your mentorship?”
  2. Who would you ideally get feedback from?
    1. Could you reach out to them with a 1-3 page Google Doc that describes how you’re currently thinking about alignment, which threat models you find most plausible, and what your current plans are?
  3. Have you considered posting your current thoughts about X on LessWrong, naming your uncertainties, and seeing if anyone has useful feedback?
  4. What are some ways you could test X hypothesis quickly? How could you spend a few days (or hours) on Y to help you find out if it’s worth spending weeks or months on it? Are there any lean tests you can perform?

Misc

  1. What problems/subproblems do you currently view as most important, and what threat models seem most plausible?
  2. Are there any crucial considerations that you might be missing? What are some things that, if true, might completely change your trajectory?

Disclaimer: Note of course that all of these questions have failure modes, and it’s important not to apply them naively. For example, the fact that someone could imagine a plan that’s 10X more ambitious than their current plan does not automatically mean they should pursue it; the fact that someone can identify a way to contribute faster does not necessarily mean it’s better than an option that involves spending more time skilling up. Nonetheless, I think people should consider these questions more, and I think at least a few people have found them helpful.