AI Safety Prerequisites Course: Revamp and New Lessons

post by philip_b (crabman) · 2019-02-03T21:04:16.213Z · LW · GW · 5 comments


Previous post: Fundamentals of Formalisation Level 7: Equivalence Relations and Orderings [LW · GW]. First post: Fundamentals of Formalisation level 1: Basic Logic [LW · GW].

Nine months ago, we at RAISE started creating a Math Prerequisites for AI Safety online course. It mostly covers subjects related to MIRI's research: set theory, computability theory, and logic, though we want to add machine learning subjects in the future. For four months we added new lessons and announced them on LessWrong. Then we stopped, looked back, and decided to improve their usability. That is what we have been busy with since August.


Diagram of levels in the prerequisites course

News since the last post

  1. A big update to the 7 previously published levels, which you can see in the picture above. The lessons are based on textbooks, which you will need in order to follow along. Previously, lessons looked like "read that section; now solve problems 1.2, 1.3, and 1.4c from the textbook; now solve these additional problems we came up with". Our lessons still say "read that section", but the problems (and their solutions, unlike many textbooks, which don't provide solutions) are now included in the lessons themselves. Additional problems are now optional; we recommend that students skip them by default and do them only if they need more practice. New levels in the Logic, Set Theory, and Computability tracks will follow the same format.
  2. Level 1 was very long: it consisted of 45 pages of reading and could take 10 hours for someone unfamiliar with logic. We have split it into smaller parts.
  3. Two new levels: Level 8.1 (Proof by Induction) and Level 8.2 (Abacus Computability).

If you study using our course, please give us feedback: leave a comment here, email us at raise@aisafety.camp, or use the contact form. Do you have an idea about which prerequisites are most important for AI Safety research? Do you know an optimal way to learn them? Tell us through the same channels, or collaborate with us.

Can you check whether a mathematical proof is correct? Do you know how to make proofs understandable and easy to remember? Would you like to help create the prerequisites course? If so, consider volunteering.

5 comments

Comments sorted by top scores.

comment by Charlie Steiner · 2019-02-05T23:17:06.617Z · LW(p) · GW(p)

Seems interesting, thanks!

I definitely think machine learning topics are useful. Given that there's so much stuff out there and you can only cover a small fraction of it, maybe recent machine learning topics are even a point of comparative advantage. The best textbook on set theory is probably pretty good already.

Another service that could take advantage of pre-existing textbooks is short summaries, designed to give people just enough of a taste to make an informed decision about reading said good textbook. Probably easier than developing a course on algorithmic information theory, or circuit complexity, or whatever.

Replies from: crabman
comment by philip_b (crabman) · 2019-02-06T11:13:45.569Z · LW(p) · GW(p)

maybe recent machine learning topics are a point of comparative advantage

Do you mean recent ML topics related to AI safety, or just recent ML topics?

RAISE is already working on the former; it is another course, which we internally call the "main track". Right now it has the following umbrella topics: Inverse Reinforcement Learning; Iterated Distillation and Amplification; Corrigibility. See https://www.aisafety.info/online-course

comment by Maybe_a · 2019-03-16T10:12:40.663Z · LW(p) · GW(p)

Consider not wasting your readers' time by making them register on Grasple only to be presented with a 34-euro paywall.

Replies from: Davide_Zagami
comment by Davide_Zagami · 2019-03-16T13:28:54.758Z · LW(p) · GW(p)

Registration and access to the lessons are completely free. Where do you see a paywall?

Replies from: Maybe_a, crabman
comment by Maybe_a · 2019-03-18T16:16:24.103Z · LW(p) · GW(p)

Oh, sorry. JavaScript shenanigans seem to have sent me into another course; it works fine in a clean browser.