A short course on AGI safety from the GDM Alignment team

post by Vika, Rohin Shah (rohinmshah) · 2025-02-14T15:43:50.903Z · LW · GW · 1 comment

This is a link post for https://deepmindsafetyresearch.medium.com/introducing-our-short-course-on-agi-safety-1072adb7912c


We are excited to release a short course on AGI safety for students, researchers, and professionals interested in this topic. The course offers a concise, accessible introduction to AI alignment, consisting of short recorded talks and exercises (75 minutes total) with an accompanying slide deck and exercise workbook. It covers the alignment problems we can expect as AI capabilities advance, and our current approach to these problems at both the technical and governance levels. If you would like to learn more about AGI safety but have only an hour to spare, this course is for you!

Here are the key topics you will learn about in this course:

Course outline:

Part 0: Introduction (4 minutes)

Part 1: The alignment problem. This part covers risk arguments and technical problems in AI alignment.

  1. We are on a path to superhuman capabilities (5 minutes)
  2. Risks from deliberate planning and instrumental subgoals (7 minutes)
  3. Exercise 1: Instrumental subgoals (3 minutes)
  4. Where can misaligned goals come from? (10 minutes)
  5. Exercise 2: Classification quiz for alignment failures (3 minutes)

Part 2: Our technical approach. The first talk outlines our overall technical approach, and the following talks cover different components of this approach.

  1. Alignment approach (4 minutes)
  2. Amplified oversight (6 minutes)
  3. Robust training & monitoring (4 minutes)
  4. Interpretability (5 minutes)
  5. Safer design patterns (4 minutes)
  6. Alignment stress tests (4 minutes)

Part 3: Our governance approach. This part covers our approach to AI governance, starting from a high-level overview and then going into specific governance practices.

  1. Institutional approaches to AI Safety (7 minutes)
  2. Frontier safety practices (4 minutes)
  3. Dangerous capability evaluations (7 minutes)

If this course gets you excited about AGI safety, you can apply to work with us! Applications for research scientist and research engineer roles are open until Feb 28.

1 comment

Comments sorted by top scores.

comment by plex (ete) · 2025-02-15T15:26:10.584Z · LW(p) · GW(p)

Nice! I'll watch through these then probably add a lot of them to the aisafety.video playlist.