Current AI Safety Roles for Software Engineers
post by ozziegooen · 2018-11-09T20:57:16.159Z · 9 comments
[Note: Please make sure to see the comments for other, newer information]
I've had several conversations over the last few months with engineers who were trying to enter the field of AI safety. It became evident that I was giving pretty much the same advice to all of them, so I finally decided to write it up.
Some more context: late last year it became evident to me that funding was becoming much less of a bottleneck for AI safety work, while engineering expertise was becoming more of one. I decided to leave my job to work in the area. I spent some time consulting for Ought, then eventually came to a point where it seemed more useful to self-study for a while. During that period I spoke to several people in the Bay Area about engineering needs at AI safety organizations.
The hiring situation still seems a bit confusing to me. There are a lot of EA engineers who seem to want to do direct EA work but are not sure what jobs they could get. Most AI safety organizations seem eager to find more good employees (and, for the AI-oriented ones, more engineers), yet they remain fairly selective. I think these organizations have typically been able to be selective, would prefer to remain so when possible, and also have special demands that come from being small, new, and theoretical / EA.
If you are an engineer who wants to work at an EA organization now or in the future, I suggest one of three paths: getting really good at a few skills particularly useful to EA organizations (reinforcement learning, functional programming, ML), getting really good at startup engineering skills, or getting good at non-engineering skills desired by EA organizations. From what I've seen, spending marginal years on "generic medium-large company backend skills" is often not that useful for EA positions, now or in the foreseeable future.
The following list covers the main organizations I've considered for work around AI safety, approaching them as an engineer without particular ML experience. If you are interested in all engineering positions in EA, I recommend 80,000 Hours' job list. They also recently released an interview with two EA-aligned ML engineers, which I recommend if you are interested in more detail.
OpenAI Safety Team
I think the OpenAI safety initiatives may be some of the most visible AI-safety work at the moment. I believe the team has around 4-7 researchers and 2-5 research engineers. They are looking for more of both, but the research engineering position is likely more attainable for people without a good amount of existing expertise. From what I understand, they believe they have many "shovel-ready" ideas that can be handed to research engineers, and could absorb more research engineers for this purpose. They seem to intend to grow considerably in the next few years.
Their team is pretty focused on reinforcement learning, and this is the main distinctive requirement for new recruits. It is very self-learnable; in fact, team members were quite friendly to me in recommending specific ways to self-study (mainly by replicating many of the main papers in reinforcement learning). They just released a project to help people self-educate in deep RL.
This educational effort seems like it would take around 2-8 months of full-time work for an experienced programmer. If you don't have the money to take time off to do this, I personally recommend reaching out to EA community members or grant organizations to ask for it. If you are a good programmer without the time to study RL, you may want to get in contact with them anyway; I imagine they may be willing to take some non-RL people with enough general software experience. Link.
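To give a flavor of what this kind of replication work involves, here is a minimal sketch of a REINFORCE-style policy gradient agent on CartPole, one of the simplest exercises of this type. It is illustrative only: it assumes the classic gym API (reset returning an observation, step returning a 4-tuple) plus numpy, and the hyperparameters are untuned guesses.

```python
# Minimal REINFORCE-style policy gradient on CartPole -- a toy version of
# the paper-replication exercise described above, not a polished
# implementation. Assumes the classic `gym` API (pre-0.26) and `numpy`.
import gym
import numpy as np

env = gym.make("CartPole-v0")
rng = np.random.default_rng(0)
n_obs = env.observation_space.shape[0]   # 4 state features for CartPole
n_act = env.action_space.n               # 2 discrete actions
theta = rng.normal(scale=0.01, size=(n_obs, n_act))  # linear policy weights

def policy(obs):
    """Softmax distribution over actions from a linear score."""
    logits = obs @ theta
    exps = np.exp(logits - logits.max())  # subtract max for stability
    return exps / exps.sum()

for episode in range(500):
    obs = env.reset()
    grads, rewards = [], []
    done = False
    while not done:
        probs = policy(obs)
        action = int(rng.choice(n_act, p=probs))
        # Gradient of log pi(action | obs) for the linear-softmax policy:
        # d/d theta[:, b] = obs * (1[b == action] - probs[b]).
        grad = -np.outer(obs, probs)
        grad[:, action] += obs
        grads.append(grad)
        obs, reward, done, _ = env.step(action)
        rewards.append(reward)
    # Reward-to-go returns, then a plain gradient-ascent step per timestep.
    returns = np.flip(np.cumsum(np.flip(np.array(rewards))))
    for grad, ret in zip(grads, returns):
        theta += 0.01 * grad * ret
    if episode % 50 == 0:
        print(f"episode {episode}: return {sum(rewards):.0f}")
```

A real replication (of, say, DQN or PPO) swaps the linear policy for a neural network and adds the tricks that make training stable, but the overall loop structure is much the same.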
Also note that OpenAI is actively hiring a frontend engineer, who would work partly with the safety team.
Ought
Ought is currently looking for one full-stack web developer with strong fundamentals in computer science and functional programming. One of their main upcoming projects involves building a new system, likely from scratch, that appears to be quite demanding in its sophistication. They are also looking for a COO, with a preference for people with programming experience, so if that sounds interesting to you I suggest reaching out. I'd personally be happy to chat about the organization; it's the one I have the most experience with.
If you are interested in reading about Ought, I recommend starting with their blog and then going through much of the rest of their website. While they are pretty new, they have buy-in from OpenPhil, FHI, and Paul Christiano, and are respected within the main AI safety community. Right now they are relatively small; this could be good for someone who likes getting involved early on, but bad for people who like structure. Link.
MIRI
MIRI seems to be looking for software engineers who are generally very capable. Machine learning experience is nice but definitely not necessary. Similar to Ought, they seem to be looking for people with strong fundamentals and functional programming experience/interest, though with less focus on architecture experience. The work is very secretive, so be prepared to accept that aspect of it. Also, note that the culture is quite specific; the interview process selects strongly for culture fit, and I recommend considering whether it would be a long-term fit for you (for the right people it seems fantastic). I believe they would like to hire several engineers in the next few years. Their bar is high, in my opinion, in part because there are some strong candidates. Of course, that also means that if you do join, you would be working with some pretty smart people. Link.
CHAI
I've had the privilege of spending some time this past summer attending a few CHAI events and similar, and have found the crowd fairly diverse and friendly. The organization is basically made up of several PhD candidates working on a variety of projects around AI safety. It seemed like the youngest of the AI safety groups (in terms of the age of its personnel, not its interest in the subject). They are hiring research engineers to support their work (I think in the beginning they're really looking for one good one to try out); in this role you would basically be assisting a few of them on work that's particularly in need of engineering support. The work may be pretty varied for this reason (a few months on one project, then a few on another), which comes with costs and benefits. I think this position is probably the most overlooked on this list, and as such, it may be the most available to engineers without much specialized experience. The position requires some ML experience, but not as much as I initially feared; I think that with 1-3 online courses, your ML skills may be strong enough to be relevant there for introductory work. They also seem willing to help train candidates who would dedicate enough time afterward to the cause.
DeepMind Safety Team
I don't know very much about DeepMind's safety team, but I have heard that it, too, is trying to grow. One main differentiator is that it's based in London.
General AI work at OpenAI / DeepMind / Google Brain
My general impression is that the direct AI-safety roles above are considered the most valuable, but there are lots of other ML / AI positions that could be good for career building or corporate influence. I have not done much research into these positions.
9 comments
comment by Buck · 2018-11-10T04:40:13.666Z
> I believe they would like to hire several engineers in the next few years.
We would like to hire many more than several engineers; we want to hire as many engineers as possible. This would be dozens if we could, but hiring is hard, so we'll more likely end up hiring more like ten over the next year.
I think that MIRI engineering is a really high-impact opportunity, and I think it's definitely worth the time for EA computer science people to apply or email me (buck@intelligence.org).
comment by AdamGleave · 2018-11-17T00:02:48.333Z
Description of CHAI is pretty accurate. I think it's a particularly good opportunity for people who are considering grad school as a long-term option: we're in an excellent position to help people get into top programs, and you'll also get a sense of what academic research culture is like.
We'd like to hire more than one engineer, and are currently trialling several hires. We have a mixture of work, some of which is more ML oriented and some of which is more infrastructure oriented. So we'd be willing to consider applicants with limited ML experience, but they'd need to have strengths in other areas to compensate.
If anyone is considering any of these roles and is uncertain whether they're a good fit, I'd encourage you to just apply. It doesn't take much time for you to apply or for the organisation to do an initial screening. I've spoken to several people who didn't think they were viable candidates for a particular role, and then turned out to be one of the best applicants we'd received.
↑ comment by Ben Pace (Benito) · 2018-11-17T00:10:12.893Z
+1 on the last paragraph; it has repeatedly been my experience that the best-qualified candidates for a job were not sure that they were and thought this meant they shouldn't apply, which is quite an unfortunate default decision.
comment by catherio · 2018-11-10T07:04:32.800Z
Important updates to your model:
- OpenAI recently hired Chris Olah (and his collaborator Ludwig Schubert), so *interpretability* is going to be a major and increasing focus at that org (not just deep RL). This is an important upcoming shift to have on your radar.
- DeepMind has at least two groups doing safety-related research: the one we know of as "safety" is more properly the "Technical AGI Safety" team, but there is also a "Safe and Robust AI" team that does work more like neural net verification and adversarial examples.
- RE "General AI work in industry" - I've increasingly become aware of a number of somewhat-junior researchers who do work in a safety-relevant area (learning from human preferences, interpretability, robustness, safe exploration, verification, adversarial examples, etc.), and who are indeed long-term-motivated (determined once we say the right shibboleths at each other) but aren't on a "safety team". This gives me more evidence that if you're able to get a job anywhere within Brain or DeepMind (or honestly any other industry research lab), you can probably hill-climb your way to relevant mentorship and start doing relevant stuff.
Less important notes:
- I'm at Google Brain right now, not OpenAI!
- I wrote up a guide which I hope is moderately helpful in terms of what exactly one might do if one is interested in this path: https://80000hours.org/articles/ml-engineering-career-transition-guide/
- Here's a link for the CHAI research engineering post: https://humancompatible.ai/jobs#engineer
↑ comment by ozziegooen · 2018-11-10T11:37:49.419Z
Thanks for the updates. Sorry about getting your organization wrong, I changed that part.
comment by philip_b (crabman) · 2018-12-07T02:12:18.268Z
Could you please elaborate on what kind of culture fit MIRI requires?
↑ comment by ozziegooen · 2019-01-20T22:16:14.256Z
It's kind of hard to describe. In my mind, people who are passionate about advanced mathematics, LessWrong/Eliezer's writing, and AI safety should be a good fit. You could probably tell a lot just by reading about their current team and asking yourself whether you'd feel like you fit in with them.
comment by Mati_Roy (MathieuRoy) · 2018-11-28T20:56:44.395Z
> EA safety community
Lapsus? ^_^
↑ comment by ozziegooen · 2018-11-30T14:47:46.034Z
Good point, fixed. I take the terms "AI safety community" and "EA safety community" to refer to the same thing, and sometimes use them interchangeably.