Introducing the ML Safety Scholars Program

post by Dan H (dan-hendrycks), ThomasW (ThomasWoodside), Mantas Mazeika, ozhang (oliver-zhang), Sidney Hough (Sidney), Kevin Liu (Pneumaticat) · 2022-05-04T16:01:51.575Z · LW · GW · 3 comments

Contents

  Program Overview
  Why have this program?
  Time Commitment
  Preliminary Content & Schedule
    ML Safety
  Who is eligible?
  Questions
  Acknowledgement
  Application
3 comments

Program Overview

The Machine Learning Safety Scholars program is a paid, 9-week summer program designed to help undergraduate students gain skills in machine learning with the aim of using those skills for empirical AI safety research in the future. Apply for the program here by May 31st.

The course will have three main parts: machine learning fundamentals, deep learning, and ML safety.

The first two sections are based on public materials, and we plan to make the ML safety course publicly available soon as well. The purpose of this program is not to provide proprietary lessons but to better facilitate learning.

MLSS will be fully remote, so participants will be able to do it from wherever they’re located. 

Why have this program?

Much of AI safety research currently focuses on existing machine learning systems, so understanding the fundamentals of machine learning is necessary to make contributions. While many students learn these fundamentals in their university courses, some might prefer to learn them on their own, perhaps because they have time over the summer or their university courses are badly timed. In addition, we don’t think any university currently devotes multiple weeks of coursework to AI safety.

There are already sources of funding for upskilling within EA, such as the Long Term Future Fund. Our program focuses specifically on ML, and we are therefore able to provide a curriculum and support to Scholars in addition to funding, so they can focus on learning the content.

Our hope is that this program can contribute to producing knowledgeable and motivated undergraduates who can then use their skills to contribute to the most pressing research problems within AI safety.

Time Commitment

The program will last 9 weeks, beginning on Monday, June 20th, and ending on August 19th. We expect each week of the program to cover the equivalent of about 3 weeks of the university lectures we are drawing our curriculum from. As a result, the program will likely take roughly 30-40 hours per week, depending on a Scholar’s speed and prior knowledge.

Preliminary Content & Schedule

Machine Learning (content from the MIT open course)

Week 1 - Basics, Perceptrons, Features

Week 2 - Features continued, Margin Maximization (logistic regression and gradient descent), Regression

Deep Learning (content from a University of Michigan course as well as an NYU course)

Week 3 - Introduction, Image Classification, Linear Classifiers, Optimization, Neural Networks. ML Assignments due.

Week 4 - Backpropagation, CNNs, CNN Architectures, Hardware and Software, Training Neural Nets I & II. DL Assignment 1 due.

Week 5 - RNNs, Attention, NLP (from NYU), Hugging Face tutorial (parts 1-3), RL overview. DL Assignment 2 due.

ML Safety

Week 6 - Risk Management Background (e.g., accident models), Robustness (e.g., optimization pressure). DL Assignment 3 due.

Week 7 - Monitoring (e.g., emergent capabilities), Alignment (e.g., honesty). Project proposal due.

Week 8 - Systemic Safety (e.g., improved epistemics), Additional X-Risk Discussion (e.g., deceptive alignment). All ML Safety assignments due.

Week 9 - Final Project (edit May 5th: If students have a conflict in the last week of the program, they can choose not to complete the final project. Students who do this will receive a stipend of $4000 rather than $4500.)

Who is eligible?

The program is designed for motivated undergraduates who are interested in doing empirical AI safety research in the future. We will accept Scholars who will be enrolled undergraduate students after the conclusion of the program (this includes graduating or recently graduated high school students about to enroll in their first year of undergraduate study).

Prerequisites:

We don’t assume any ML knowledge, though we expect the course could be helpful even for people who already have some ML background (e.g., from fast.ai or Andrew Ng’s Coursera course).

Questions

Questions about the program should be posted as comments on this post. If the question is only relevant to you, it can be addressed to Thomas Woodside ([firstname].[lastname]@gmail.com).

Acknowledgement

We would like to thank the FTX Future Fund regranting program for providing the funding for the program.

Application

You can apply for the program here. Admission is rolling, but you must apply by May 31st to be considered for the program. All decisions will be released by June 7th.

3 comments


comment by Cornelis Dirk Haupt · 2022-05-11T21:04:40.012Z · LW(p) · GW(p)

Do you have any online math course recommendations for alumni who haven't touched math in a while and want to brush up? i.e. for differential calculus, linear algebra, and statistics

Replies from: ThomasWoodside
comment by ThomasW (ThomasWoodside) · 2022-05-17T16:46:02.749Z · LW(p) · GW(p)

Yes, we're working on making this list right now!

comment by Brandon Chan (brandon-chan-multibrandonhd) · 2023-04-19T04:34:10.249Z · LW(p) · GW(p)

Hello, this looks like a fantastic program. Will the ML Safety Scholars Program be offered again this year?