How should potential AI alignment researchers gauge whether the field is right for them?

post by TurnTrout · 2020-05-06T12:24:31.022Z · LW · GW · No comments

This is a question post.

Contents

  Answers
    9 adamShimi
    4 DanielFilan
    4 G Gordon Worley III

Answers

answer by adamShimi · 2020-05-06T14:11:37.413Z · LW(p) · GW(p)

(Caveat: as an aspiring AI Safety researcher myself, I'm both qualified and unqualified to answer this. Also, I'll focus on technical AI Safety, because it's the part of the field I'm most interested in.)

As a first approximation, there is the obvious advice: try it first. Many of the papers/blog posts are freely available on the internet (which might not be a good thing, but that's a question for another time), and thus any aspiring researcher can learn what is going on and try to do some research.

Now, to be more specific about AI safety, I see at least two sub-questions here:

  • Am I the right "kind" of researcher for working in AI Safety? Here, my main intuition is that the field needs more "theory-builders" than "problem-solvers", to take the archetypes of Gowers's Two Cultures of Mathematics. By that I mean that AI Safety has not yet crystallized into a field where the main approaches and questions are well understood and known. Almost every researcher has a different perspective on what is fundamental in the field. Therefore, the most useful works will be the ones that clarify, deconfuse, and characterize the fundamental questions and problems in the field.
  • Can I get a job at a research lab in AI Safety? Of course, new researchers can also get funding, from the Long Term Future Fund for example. But every grant write-up that I saw mentioned a recommendation from someone already in the field, so even seeking funding probably requires getting some team interested in you. As for the answer to the question, it really depends on the lab (because they all have different approaches to AI Safety). For example, MIRI is interested in brilliant programmers (if possible in Haskell) who can understand and master complex maths and dependent type theory; CHAI is interested in researchers with (or able to build) an expertise in the theory of deep RL; OpenAI is interested both in good researchers in practical deep RL and in researchers in the theoretical computer science used by Christiano's agenda; and so on. The great thing about most of these labs is that you can find someone to ask what they are looking for.
comment by Gordon Seidoh Worley (gworley) · 2020-05-06T18:02:06.478Z · LW(p) · GW(p)
Am I the right "kind" of researcher for working in AI Safety? Here, my main intuition is that the field needs more "theory-builders" than "problem-solvers", to take the archetypes of Gowers's Two Cultures of Mathematics. By that I mean that AI Safety has not yet crystallized into a field where the main approaches and questions are well understood and known. Almost every researcher has a different perspective on what is fundamental in the field. Therefore, the most useful works will be the ones that clarify, deconfuse, and characterize the fundamental questions and problems in the field.

To add on to this, it also means it's going to be somewhat hard to know whether you're the right kind of researcher, because the feedback cycle is long: you may be doing good work, but it may take months or years to come together in a way that can be easily evaluated by others.

This doesn't mean the whole field looks maximally like this. It's less of an issue for, say, safety research focused on machine learning than for safety research focused on theoretical AI systems we don't know how to build yet, or for safety research focused on turning ideas about what safety looks like into something mathematically precise enough to build.

Thus a corollary of this answer might be something like "you might be the right kind of researcher only if you're okay with long (multi-year) feedback cycles".

Replies from: adamShimi
comment by adamShimi · 2020-05-06T18:54:32.902Z · LW(p) · GW(p)

I agree, but I'm not sure it's really linked to the division between problem-solvers and theory-builders, because you can have very long feedback loops in problem-solving -- think Wiles and Fermat's Last Theorem. That being said, I think the advantage of the problem-solvers is that they tend to attack problems that are already recognized as important, and thus the only uncertainty is in whether they can actually solve them. Whereas deconfusion or theory-building is only "recognized" at the end, when the theory is done, it works, and it captures something interesting.

answer by DanielFilan · 2020-08-13T05:11:14.891Z · LW(p) · GW(p)

I'd say a pretty good way is to try out AI alignment research as best you can, and see if you like it. This is probably best done by being an intern at some research group, but sadly these spots are limited. Perhaps one could factor it into "do I enjoy AI research at all", which is easier to gain experience in, and "am I interested in research questions in AI alignment", which you can hopefully determine through reading AI alignment research papers and introspecting on how much you care about the contents.

answer by Gordon Seidoh Worley (G Gordon Worley III) · 2020-05-06T18:09:27.415Z · LW(p) · GW(p)

In my mind it's something like you need:

  • strong interest in solving AI safety
  • being okay with breaking new ground and having to figure out what "right" means
  • strong mathematical reasoning skills
  • decent communication skills (you can rely less on strong existing publication norms and may have to get more creative to convey your ideas than in other fields)
  • the courage and care to work on something where the stakes are high and if you get it wrong things could go very badly

I think people tend to emphasize the technical skills the most, and I'm sure other answers will offer more specific suggestions there, but I also think there's an important aspect of having the right mindset for this kind of work, such that a person with the right technical skills might not make much progress on AI safety without these other "soft" skills.
