Applications for AI Safety Camp 2022 Now Open!

post by adamShimi · 2021-11-17T21:42:31.672Z · LW · GW · 3 comments

If you’ve read about alignment research and you want to start contributing, the new iteration of the AI Safety Camp is a great opportunity!

It’s a virtual camp running from January to May 2022, where you collaborate with other applicants (1 hour per normal workday, 7 hours per weekend sprint day) on open problems proposed and supervised by mentors like John Wentworth, Beth Barnes, Stuart Armstrong, Daniel Kokotajlo… Around this core of research, the camp also includes talks and discussions about fundamental ideas in the field, how alignment research works, and how and where to get a job or funding.

All in all, the AI Safety Camp is a great opportunity if:

Note that you don’t need advanced maths skills to participate in the camp: some of the projects require no specific skillset, and others call for unusual ones (evolutionary genetics, history...). If you care about alignment and are in this situation, I encourage you to apply for a project without a required skillset and learn what you need as you go along.

All the details on how to apply are available on the website (including the list of open problems).

3 comments

Comments sorted by top scores.

comment by M. Y. Zuo · 2021-11-20T01:42:42.185Z · LW(p) · GW(p)

Seems interesting, I applied. On a logistical note, supplying a pre-formatted Google Sheet for draft answers is a neat innovation.

comment by Quintin Pope (quintin-pope) · 2021-11-18T08:17:02.570Z · LW(p) · GW(p)

This looks very promising. I think I’ll apply.

About the application: the open-ended questions prompt with

“> 5 concise lines”

Does this mean “More than 5 concise lines” or does it mean “Put your 5 concise lines here”? Thanks for the clarification.

Replies from: adamShimi
comment by adamShimi · 2021-11-18T10:48:40.770Z · LW(p) · GW(p)

It means more. :)