Several free CFAR summer programs on rationality and AI safety

Post by AnnaSalamon · 2016-04-14
CFAR will be running several free programs this summer, all of which are currently taking applications. Please apply if you’re interested, and forward the programs to anyone else who may be a good fit!
EuroSPARC — July 19-27, 2016
Free applied rationality program for mathematically talented high school students (ages 16-19), located in Oxford, UK. Intended for students from any country — we have funding to assist with flights from, e.g., South America, Australia, or China. Apply here.
(SPARC itself is also running this summer, and remains awesome, but applications are now closed; it'll run again in 2017, though!)
Workshop on AI Safety Strategy — May 29-June 5 (plus the May 18-23 CFAR workshop if needed)
8 days; 12 participants; located in the SF Bay Area (some travel assistance available). WAISS is aimed at (technical or non-technical) folks who want to better understand AI risk and how they can personally impact it. The workshop will include: (1) forecasting skills, and skills for modeling the AI strategic landscape; (2) further applied rationality skills, building on the standard CFAR workshop; and (3) space in which to brainstorm diverse pathways by which an individual might aim to impact AI risk with their career.
MIRI Summer Fellows Program — June 19-July 4
16 days; 20 participants; SF Bay Area; aimed at folks with strong math backgrounds who are interested in technical AI safety research. MSFP is an intensive workshop that includes both applied rationality content, and practice doing technical research toward AI safety; it is co-run with MIRI technical researchers. (Funded by MIRI.) Apply here.
CFAR for ML Researchers (CMLR) — Aug 30-Sept 4
5 days; 45 participants; SF Bay Area; free, including flights.
CMLR will be an adapted version of our standard 4-day CFAR workshop, aimed at researchers in the field of machine learning or artificial intelligence. This includes graduate & undergraduate students, postdocs, faculty, and researchers from industry. The workshop will include a 5th "bonus day" for discussion of long-term AI impacts. (Funded by a grant from FLI.) Apply here.
We are also still running our by-now-well-honed paid workshops, which remain awesome — and which let you into our increasingly awesome alumni community :) Our next mainline CFAR workshop is May 18-22, 2016 in the Bay Area. We'll hold a CFAR alumni reunion this August 19-22 (or dates very close to those; venue still being pinned down), so this May's workshop is your last chance to be eligible for the 2016 alumni reunion. Sign up here.
How do these AI safety programs fit into CFAR’s mission?
CFAR was created with the mission of “helping to train people who can impact the world’s most important problems, whatever those turn out to be”.
Many of us currently believe that AI safety is among the world’s most important problems, and so we’ll be trying our hand at training people who can help with that (via many of the programs above, and other future programs). It is also much easier to evaluate CFAR’s toolkit if we can see what happens when it is applied to a specific important-seeming problem.
At the same time, we recognize that there is a (possibly quite large) gap between our current beliefs about AI safety and the fact of the matter about what sorts of AI risks humanity is facing and how best to deal with them. Our hope, in running programs on AI safety, is that there will be a community of people thinking about the topic who will develop a better understanding of it than any of us (or anyone else) currently has. Thus, the focus of our programs is to teach general skills for inquiry, and create a space for people to apply them to the topic of AI safety.
We hope you’ll apply to any program you’re interested in, especially if you think you may disagree with some common/vocal viewpoints on some of these topics; conversations need diverse seeds to get to truth.