[Closed] Hiring a mathematician to work on the learning-theoretic AI alignment agenda

post by Vanessa Kosoy (vanessa-kosoy) · 2022-04-19T06:44:18.772Z · LW · GW · 21 comments

Contents

  Requirements
  Job Description
  Terms

UPDATE: The position is now closed. My thanks to everyone who applied, and also to those who spread the word.

The Association for Long Term Existence and Resilience (ALTER) is a new charity for promoting longtermist[1] causes based in Israel. The director is David Manheim, and I am a member of the board. Thanks to a generous grant by the FTX Future Fund Regranting Program, we are recruiting a researcher to join me in working on the learning-theoretic research agenda [AF · GW][2]. The position is remote and suitable for candidates in most locations around the world.

Apply here.

Requirements

Job Description

The researcher is expected to make progress on open problems in the learning-theoretic agenda. They will have the freedom to choose any of those problems to work on, or come up with their own research direction, as long as I deem the latter sufficiently important in terms of the agenda's overarching goals. They are expected to achieve results with minimal or no guidance. They are also expected to write their results for publication in academic venues (and/or informal venues such as the alignment forum), prepare technical presentations et cetera. (That said, we rate researchers according to the estimated impact of their output on reducing AI risk, not according to standard academic publication metrics.)

Here are some open problems from the agenda, described very briefly:

Terms

The position is full-time, and the candidate must be available to start working in 2022. The salary is between 60,000 and 180,000 USD/year, depending on the candidate's prior track record. The work can be done from any location; further details depend on the candidate's country of residence.


  1. Personally, I don't think the long-term future should override every other concern. And I don't consider existential risk from AI especially "long term", since it can plausibly materialize in my own lifetime. Hence, "longtermist" is better understood as "important even if you only care about the long-term future" rather than "important only if you care about the long-term future". ↩︎

  2. The linked article is not very up-to-date in terms of the open problems, but is still a good description of the overall philosophy and toolset. ↩︎

21 comments

Comments sorted by top scores.

comment by rank-biserial · 2022-04-21T14:59:05.479Z · LW(p) · GW(p)

How did you choose the salary range?

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2022-04-24T08:35:25.734Z · LW(p) · GW(p)

The point of reference was salaries of academics in the US, across all ranks.

comment by Algon · 2022-04-19T10:25:35.274Z · LW(p) · GW(p)

If you could choose anyone to work on this, who would you choose?

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2022-04-20T16:43:29.715Z · LW(p) · GW(p)

I dunno, maybe Maria-Florina Balcan or Constantinos Daskalakis?

Replies from: RedMan
comment by RedMan · 2022-04-21T13:23:20.057Z · LW(p) · GW(p)

Assuming this is serious, have you reached out to them?

The salary offer is high enough that any academic would at least take the call. If they're not interested themselves, you might be able to produce an endowment to get their lab working on your problems, or at a bare minimum, get them to refer one or more of their current/former students.

Replies from: LGS
comment by LGS · 2022-04-24T23:33:20.124Z · LW(p) · GW(p)

The salary is not that high. If Costis or Nina earn less than 150,000 USD/year, I will eat my hat; 200k is more likely. Also, their job comes with tenure (and access to the world's top graduate students), and you're unlikely to get them to quit it.

(It is true that they might refer some of the open problems to their graduate students, though.)

Replies from: joel-burget
comment by Joel Burget (joel-burget) · 2022-04-25T14:37:37.585Z · LW(p) · GW(p)

Academics not willing to leave their jobs might still be interested in working on a problem part-time. One could imagine that the right researcher working part-time might be more effective than the wrong researcher full time.

comment by Davidmanheim · 2022-04-19T06:49:20.707Z · LW(p) · GW(p)

Please feel free to repost this elsewhere, and/or tell people about it.

And if there is anyone interested in this type of job who is currently still in school, or for other reasons unable to work full time at present, we encourage them to apply and note the circumstances, as we may be able to find other ways to support their work, or at least collaborate and provide mentorship.

Replies from: noah-topper
comment by Noah Topper (noah-topper) · 2022-04-19T14:53:12.073Z · LW(p) · GW(p)

But even for someone still in school, would you require the background of having proved non-trivial original theorems? All this sounds exactly like the research agenda I'm interested in. I have a BS in math and am working on an MS in computer science. I have a good math background, but not at that level yet. Should I consider applying or no?

Replies from: Davidmanheim
comment by Davidmanheim · 2022-04-24T12:52:45.280Z · LW(p) · GW(p)

For this position, we are looking for people already able to contribute at a very high level. If you're interested in working on the agenda to see whether you'd be able to do this in the future, I'd be interested in chatting separately, looking at whether some form of financial support or upskilling would be useful, and considering where to apply for funding.

Replies from: ViktoriaMalyasova, noah-topper
comment by ViktoriaMalyasova · 2022-04-25T08:05:59.864Z · LW(p) · GW(p)

I have a BS in mathematics and MS in data science, but no publications. I am very interested in working on the agenda and it would be great if you could help me find funding! I sent you a private message.

comment by Noah Topper (noah-topper) · 2022-04-24T18:52:13.569Z · LW(p) · GW(p)

Cool, makes sense. I was planning on making various inquiries along these lines starting in a few weeks, so I may reach out to you then. Would there be a best way to do that?

Replies from: Davidmanheim
comment by Davidmanheim · 2022-04-25T13:51:46.922Z · LW(p) · GW(p)

Nope, find me online, I'm pretty easy to reach.

comment by ViktoriaMalyasova · 2022-04-24T08:44:20.421Z · LW(p) · GW(p)

How does this relate to this job offer [LW · GW]? Is this a second job or the same job with requirements clarified? Should I give up on this job now if I don't have publications? 

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2022-04-24T09:07:35.670Z · LW(p) · GW(p)

It is a completely different job, with different requirements, different responsibilities and even different employers (the other job is at MIRI, this job is at ALTER).

comment by Arthur Conmy (arthur-conmy) · 2022-04-19T15:32:11.781Z · LW(p) · GW(p)

When do applications close?

When are applicants expected to begin work?

How long would such employment last?

Replies from: vanessa-kosoy
comment by Vanessa Kosoy (vanessa-kosoy) · 2022-04-19T16:59:12.435Z · LW(p) · GW(p)

When do applications close?

There is no particular deadline; it will be my judgment call based on the distribution of applications over time and their quality. I expect the position to remain open for no less than 2 weeks and no more than 6 months, but it's hard to say anything more specific at the moment.

When are applicants expected to begin work?

We are flexible about this: if an applicant needs several months to complete other commitments, that is perfectly acceptable.

How long would such employment last?

Until we either solve AI alignment or the AI apocalypse comes :)

(Or, the employment is terminated because one of the parties is unsatisfied, or we run out of funding, hopefully neither will happen.)

comment by shiney · 2022-04-19T18:17:54.758Z · LW(p) · GW(p)

If someone wanted to work out if they might be able to develop the skills to work on this sort of thing in the future, is there anything you would point to?

Replies from: Davidmanheim
comment by Davidmanheim · 2022-04-25T13:51:14.163Z · LW(p) · GW(p)

If you're interested, I'd start here: https://www.alignmentforum.org/posts/YAa4qcMyoucRS2Ykr/basic-inframeasure-theory [AF · GW] and go through the sequence. (If you're not comfortable enough with the math involved, start here first: https://www.lesswrong.com/posts/AttkaMkEGeMiaQnYJ/discuss-how-to-learn-math [LW · GW] )

And if you've gone through the sequence and understand it, I'd suggest helping develop the problem sets mentioned in one of the posts, or reaching out to me.

Replies from: shiney
comment by shiney · 2022-04-27T18:13:28.396Z · LW(p) · GW(p)

Thanks, I'll see how that goes, assuming I get enough free time to try this.

comment by Milli | Martin (Milli) · 2022-05-13T19:40:23.833Z · LW(p) · GW(p)

Application form is closed. Can this be marked in the title?