Looking for AI Safety Experts to Provide High-Level Guidance for RAISE

post by Ofer (ofer) · 2018-05-06T02:06:51.626Z · LW · GW · 5 comments

The Road to AI Safety Excellence [LW · GW] (RAISE) initiative aims to help aspiring AI safety researchers and interested students become familiar with the research landscape effectively, thereby hopefully increasing the number of researchers who contribute to the field. To that end, we (the RAISE team) are building a high-quality online course. You can see our pilot lesson here (under “Corrigibility 1”).

Most of the course segments will be based on distilled summaries of one or more papers. We have already distilled ~9 papers on corrigibility for the first course segments and used those summaries to write draft video scripts.

Our long-term goal is to cover as much of the AI safety research landscape as possible, in the most useful way possible. To do that, we need guidance from experts who are extensively familiar with the literature in one of the broad subfields of AI safety (e.g. the machine learning perspective or the Agent Foundations research agenda, or broad parts thereof). We realize that the time of such experts is a critically scarce resource, so we will ask them only for high-level guidance, including:

1) Their idea of a good structure for a part of the course: a list of sections, and the subsections that might constitute each one.

2) Pointers to papers to base each subsection on.

If an expert expects that contributing further to RAISE would be an effective use of their time, they could also choose to review our lesson scripts and provide feedback before the videos are recorded.

If you expect this role to be an effective use of your time, please contact us at raise@aisafety.camp.

5 comments

Comments sorted by top scores.

comment by Scott Garrabrant · 2018-05-10T18:30:58.119Z · LW(p) · GW(p)

I think that we should schedule a video chat. I might have a lot of content for you. Email me?

Replies from: ofer
comment by Ofer (ofer) · 2018-05-11T05:39:13.671Z · LW(p) · GW(p)

I've sent you an email, thanks!

comment by philip_b (crabman) · 2018-05-06T10:36:49.824Z · LW(p) · GW(p)

What does this have in common with https://www.lesswrong.com/posts/kK67yXhmDYwXLqXoQ/fundamentals-of-formalisation-level-1-basic-logic [LW · GW]?

Replies from: ofer
comment by Ofer (ofer) · 2018-05-06T11:22:11.629Z · LW(p) · GW(p)

Within RAISE there's a team working on a prerequisites track for AI safety: something that people who lack some of the necessary undergraduate-level background could use to (1) identify what material they're missing and (2) learn that material effectively.

The post you linked to is part of that project.

(Note: I might be slightly misrepresenting that project; I'm not on the team that works on it.)

Replies from: None
comment by [deleted] · 2018-05-07T13:05:29.648Z · LW(p) · GW(p)

You represented it well. We're currently doing two things at once. The prerequisites track was too good to pass up.