Do any AI alignment orgs hire remotely?

post by RobertM (T3t) · 2022-02-21T22:33:04.765Z · LW · GW · No comments

This is a question post.


tl;dr: qualified software engineer considering what their next job might be; now thinking about direct work as a serious option.

Previous plan was something like:

SELECT * FROM big_tech_co
WHERE location = 'remote'
ORDER BY team_fit DESC, salary DESC

For a variety of reasons, I'm not a huge fan of this plan anymore.

New plan:

  1. Check the job pages of all the AI alignment orgs I know
  2. Check 80000 Hours jobs board in case I missed something
  3. ???
  4. Post question on LW

I didn't find anything looking at the job pages of the AI alignment orgs that I'm familiar with, and 80000 Hours didn't bring up anything that fit the bill either, so here we are.


Does anyone know of any orgs that I might have missed?

Most of the other orgs I'm familiar with seem to be doing differently-targeted work (e.g. Ought), or work which seems to boil down to "capabilities advancement", but I'm open to arguments here if I've misjudged one or more of them.

Answers

answer by mic (Michael Chen) · 2022-04-28T23:36:28.925Z · LW(p) · GW(p)

The Fund for Alignment Research is a new organization to help AI safety researchers, primarily in academia, pursue high-impact research by hiring contractors. They're a group of researchers affiliated with the Center for Human-Compatible AI at UC Berkeley and other labs like Jacob Steinhardt's at UC Berkeley and David Krueger's at Cambridge. They are hiring for:

  • Research Engineer (20–40 hours/week, remote or in Berkeley, $50–100/hour) – looking for 2–3 individuals with significant software engineering experience or experience applying machine learning methods.
  • Communications Specialist and Senior Communications Specialist (10–40 hours/week, remote or in Berkeley, $30–80/hour) – communicating high-impact AI safety research. This could be via technical writing/editing, graphics design, web design, presentation development, social media management, etc.

If you have any questions about the role, please contact them at hello@alignmentfund.org.

comment by RobertM (T3t) · 2022-04-30T01:38:31.196Z · LW(p) · GW(p)

Appreciate the recommendation.  Around April 1st I decided that the "work remotely for an alignment org" thing probably wouldn't work out the way I wanted it to, and switched to investigating "on-site" options - I'll write up a full post on that when I've either succeeded or failed on that score.

On a mostly unrelated note, every time I see an EA job posting that pays at best something like 40-50% of what qualified candidates would get in industry, I feel it collide with the "we are not funding constrained" messaging.  I understand that there are reasons why EA orgs may not want to advertise themselves as paying top-of-market, but nobody's outright said that's what's going on, and there could be other less-visible bottlenecks that I haven't observed yet.

Replies from: wassname
comment by wassname · 2023-02-11T23:30:24.128Z · LW(p) · GW(p)

For what it's worth, I was in a similar boat: I've long wanted to work on applied alignment, but also to stay in Australia for family reasons. Each time I've changed jobs I've made the same search as you, and ended up just taking a job where I can apply some ML in industry, so that I can at least remain close to the field.

For all the calls for alignment researchers, most orgs seem hesitant to do the obvious thing that would really expand their talent pool: opening up to remote work.

Presumably they find it hard to manage and communicate remotely, which keeps them from accessing a larger and cheaper pool of global talent. But they could accelerate alignment by merely supplementing with remote contractors, or by learning to manage remote work.

Replies from: T3t
comment by RobertM (T3t) · 2023-02-12T07:25:13.732Z · LW(p) · GW(p)

For what it's worth, I've updated somewhat against the viability of remote work here (mostly for contingent reasons - the less "shovel-ready" work is, the more of a penalty I think you end up paying for trying to do it remotely, due to communication overhead).  See here [LW · GW] for the latest update :)

answer by Xodarap · 2022-03-14T23:27:05.425Z · LW(p) · GW(p)

We (the Center for Effective Altruism) are hiring Full-Stack Engineers. We are a remote-first team, and we work on tools which (we hope) better enable others to work on AI alignment, including collaborating with the LessWrong team on the platform you used to ask this question :)

comment by RobertM (T3t) · 2022-03-15T06:52:17.306Z · LW(p) · GW(p)

Interesting, was this recently posted?  Do you mind if I DM you with some questions?

Replies from: Xodarap
comment by Xodarap · 2022-04-01T16:30:31.123Z · LW(p) · GW(p)

Sure, feel free to DM me.

answer by Yonatan Cale · 2022-06-11T11:25:14.840Z · LW(p) · GW(p)

Anthropic will want you to be in their office in California for at least 25% or so of the time (based on one discussion with them; please correct me if you learn otherwise).

answer by Yonatan Cale · 2022-06-11T11:23:46.736Z · LW(p) · GW(p)

Have you considered CEA [EA · GW]? Not a perfect fit, but they're remote-first, and I personally think they help alignment research indirectly by building the EA community and by improving lesswrong.com as well (the two sites share the same codebase). I think it's really important for these places to (1) be inviting, (2) promote good, complicated (non-toxic) discussions, and (3) connect people to relevant orgs and people, including AI safety orgs.

Again, not sure this is what you're looking for. It resonates with me personally.

answer by mic (Michael Chen) · 2022-03-02T22:23:10.378Z · LW(p) · GW(p)

I'm curious why you think Ought doesn't count as "an organization that works either directly on AI alignment, or a 'meta' org that e.g. better enables others to work on AI alignment". More on Ought [LW · GW].

It might be worth a shot to quickly apply to speak with 80,000 Hours and see if they have any suggestions.

Fathom Radiant, an ML hardware supplier, is also hiring remotely. Their plan is apparently to offer differential pricing for ML hardware based on buyers' safety practices, in order to incentivize safer practices and support safety research. I'm not totally sold, but my 80,000 Hours adviser seemed like a fan. You can speak with Fathom Radiant to learn more about their theory of change.

comment by RobertM (T3t) · 2022-03-03T07:17:39.786Z · LW(p) · GW(p)

I'm not particularly sold on how Ought's current focus (Elicit) translates to AI alignment.  I'm particularly pessimistic about the governance angle, but I also don't see how an automated research assistant is moving the needle on AI alignment research (as opposed to research in other domains, where I can much more easily imagine it being helpful).

 

This is possibly a failure of my understanding of their goals, or just of my ability to imagine helpful ways to use an automated research assistant (which won't be as usable for research that advances capabilities?).  I'm certainly open to changing my mind here.  The other problem is that they seem to mostly be looking for front-end/full-stack engineers, and while I'm ok with working on the front-end of an existing codebase, I would not describe that as my particular area of expertise.

 

I also forgot to include Anthropic on my list.

 

Fathom seems interesting, but fundamentally the idea seems to be "apply economic incentives to try to tilt the field in a more safety-conscious direction", which has most of the same problems as "throw money at the problem of AI alignment" but one step removed.  They're also grappling with the fact that their R&D efforts are fundamentally capabilities-focused and their only levers are to say "we'll charge you more" or "we won't sell this to you", which I can't say seems like a huge improvement over "this doesn't actually exist for you to buy".

 

I may very well solicit a consultation with 80000 Hours soon :)
