Announcing new round of "Key Phenomena in AI Risk" Reading Group

post by DusanDNesic, Nora_Ammann · 2023-10-20T07:11:09.360Z · LW · GW · 2 comments

Contents

  What?
  What Changed?
  For Whom?
  When?
  Sign up
  About the application
2 comments

TLDR: “Key Phenomena in AI Risk [LW · GW]” is an 8-week-long, facilitated reading group. It is aimed at people interested in conceptual AI alignment research, in particular from fields such as philosophy, systems research, biology, and the cognitive and social sciences. We ran it once and are now running it again.

The program will run between November 2023 and January 2024. Sign up here by Sunday, October 29th.

What?

The “Key Phenomena in AI risk” reading curriculum provides an extended introduction to some key ideas in AI risk, in particular risks from misdirected optimization or 'consequentialist cognition'. As such, it aims to remain largely agnostic of solution paradigms. Each session includes 90 minutes of facilitated discussion and requires at least 2 hours of reading. It is virtual and free.

See the old post here [LW · GW] for a short overview of the curriculum; here for a more extensive summary; and here for the full curriculum (which will be updated in minor ways in the following weeks). 

What Changed?

Thanks to feedback from participants and facilitators in the last iteration, the program has improved. It is now an 8-week-long program (with one week added at the end for reflection). Readings have been made more focused, and we will be adding more optional technical readings.

For Whom?

The curriculum is primarily aimed at people interested in conceptual research in AI risk and alignment. 

It is designed to be accessible to audiences in, among others, philosophy (of agency, knowledge, power, etc.) and systems research (e.g. biological, cognitive, information-theoretic, social systems, etc.).

When?

The reading groups will take place from November 2023 through January 2024.

We expect to run 6 groups of 4-8 people (including 1 facilitator), each led by a facilitator with substantive knowledge of AI risk.

Sign up

Sign up here by October 29th.

About the application

The application consists of one stage, where we ask you to fill in a short form.

We select people based on our best understanding of their motivation to contribute to AI alignment and how much they would counterfactually benefit from participating in the program. 

If you have any questions, feel free to leave a comment below or contact us at contact@pibbss.ai 

2 comments

Comments sorted by top scores.

comment by Gabriel Staron (gabriel-staron) · 2023-11-04T02:36:37.335Z · LW(p) · GW(p)

Is there any option to participate even if I missed the deadline for registration?

Replies from: DusanDNesic
comment by DusanDNesic · 2023-11-08T14:10:56.424Z · LW(p) · GW(p)

Hello Gabriel! We plan to run this group ~3 times a year, so you should be able to apply for the next round around January/February, which would start in Feb/March (not confirmed, just estimates).