ARC is hiring!
post by paulfchristiano, Mark Xu (mark-xu) · 2021-12-14T20:09:33.977Z
The Alignment Research Center is hiring researchers; if you are interested, please apply!
(Update: We have wrapped up our hiring "round" for early 2022, but are still accepting researcher applications on a rolling basis here, though it may take us longer to get back to you.)
What is ARC?
ARC is a non-profit organization focused on theoretical research to align future machine learning systems with human interests. We are aiming to develop alignment strategies that would continue to work regardless of how far we scaled up ML or how ML models end up working internally.
Probably the best way to understand our work is to read Eliciting Latent Knowledge, a report describing some recent and upcoming research, which illustrates our general methodology.
We currently have 2 research staff (Paul Christiano and Mark Xu). We’re aiming to hire another 1-2 researchers in early 2022. ARC is a new organization and is hoping to grow significantly over the next few years, so early hires will play a key role in helping define and scale up our research.
Who should apply?
Most of all, you should send in an application if you feel excited about proposing the kinds of algorithms and counterexamples described in our report on ELK.
We’re open to anyone who is excited about working on alignment even if you don't yet have any research background (or your research is in another field). You may be an especially good fit if you:
- Are creative and generative (e.g. you may already have some ideas for potential strategies or counterexamples for ELK, even if they don't work).
- Have experience designing algorithms, proving theorems, or formalizing concepts.
- Have a broad base of knowledge in mathematics and computer science (we often draw test cases and counterexamples from these fields).
- Have thought a lot about the AI alignment problem, especially in the limit of very powerful AI systems.
Hiring will be a priority for us in early 2022 and we don't mind reading a lot of applications, so feel free to err on the side of sending in an application.
Hiring process and details
You can apply by filling out this short form. We will begin reviewing applications and interviewing candidates on January 3, 2022. Our hiring process involves a series of 1-2 hour interviews followed by a paid day-long work sample. Where possible, we also prefer to do a longer trial, although we understand that's not practical for everyone.
We are based in Berkeley, CA and would prefer people who can work from our office, but we’re open to discussing remote arrangements for great candidates.
Salaries are in the $150k-400k range depending on experience.
2 comments
comment by ESRogs · 2021-12-14T20:59:57.303Z
> We are aiming to develop alignment strategies that would continue to work regardless of how far we scaled up ML or how ML models end up working internally.
Is it fair to say that you are assuming that the AI systems are in fact based on ML, and not some other kind of AI (e.g. GOFAI that actually works somehow, or something more exotic)?
Reply from paulfchristiano · 2021-12-14T22:20:31.116Z
I think that "TAI is based on ML" is plausible, and responsible for a significant part of the total risk posed by AI. That said, I think our work is reasonably likely to be useful even in other worlds (since the same basic difficulties seem likely to arise in different forms) and that it's useful to think concretely about something that exists today regardless of whether ML is a central ingredient in future AI systems.
Prosaic AI alignment is still a reasonable representation of my position.