What to include in a guest lecture on existential risks from AI?

post by Aryeh Englander (alenglander) · 2022-04-13T17:03:40.927Z · LW · GW · 1 comment

This is a question post.


A professor I'm friendly with has been teaching a course on AI ethics this semester, and he asked me if I could come give a guest lecture on "AI apocalypse" scenarios. What should I include in the lecture?

Details:

If anybody has relevant material I could use, such as slides or activities, that would be great! Also, if anybody wants to help develop the material for this class, please message me (preferably at my work email - Aryeh.Englander@jhuapl.edu).

As a bonus, I expect that material for a class of this sort may turn out to be useful for plenty of other people on this and related forums, either for themselves or as a tool they can use when presenting the same topic to others.

[Note: I am posting this here with permission from the professor.]

Answers

answer by James_Miller · 2022-04-13T18:19:56.076Z · LW(p) · GW(p)

Here are the readings/videos I assign when I teach about AI risks to my undergraduates at Smith College:

https://mindstalk.net/vinge/vinge-sing.html

https://iopscience.iop.org/article/10.1088/0031-8949/90/1/018001

answer by Viktor Rehnberg · 2022-04-13T20:04:35.593Z · LW(p) · GW(p)

Olle Häggström gave three two-hour lectures on AI safety earlier this spring. The original description was:

This six-hour lecture series will treat basics and recent developments in AI risk and long-term AI safety. The lectures are meant to be of interest to Ph.D. students and researchers in AI-related fields, but no particular prerequisites will be assumed.

Lecture 1, Lecture 2, and Lecture 3. Perhaps you can find something there; I expect he would be happy to help if you reach out to him.

comment by the gears to ascension (lahwran) · 2022-04-14T03:47:37.831Z · LW(p) · GW(p)

Any chance you have contact with the people who uploaded that? I suspect the reason I hadn't seen it is that it's marked as made for kids; because of that, I can't add it to a playlist. I'm also going to attempt to contact them directly about this.

comment by Viktor Rehnberg (viktor.rehnberg) · 2022-04-20T07:32:09.808Z · LW(p) · GW(p)

Oh, I hadn't noticed that. I've got some connections to them and can reach out.

comment by Aryeh Englander (alenglander) · 2022-04-13T22:31:03.916Z · LW(p) · GW(p)

Thanks, looks useful!

answer by adamShimi (Adam Shimi) · 2022-04-13T18:11:21.650Z · LW(p) · GW(p)

I have a framing of AI risk scenarios that I think is more general and more powerful than most available online, and it might be a good frame to present before going into examples. It's not posted yet (I'm finishing the sequence now), but I could send some things to you if you're interested. ;)

1 comment


comment by the gears to ascension (lahwran) · 2022-04-14T03:46:33.709Z · LW(p) · GW(p)

I would love to add the YouTube video of this class to my database of safety-relevant videos once it's out.

Copying and pasting channel reviews I originally wrote in my short form. This is too much content to include in a single talk, but I share it in the hope that it will be useful to link to, and perhaps the students would like to see this question itself and the discussion around it (I'm a big fan of old-fashioned link-web surfing):

  • CPAIOR has a number of interesting videos on formal verification, how it works, and some that apply it to machine learning, e.g. "Safety in AI Systems - SMT-Based Verification of Deep Neural Networks"; "Formal Reasoning Methods in Machine Learning Explainability"; "Reasoning About the Probabilistic Behavior of Classifiers"; "Certified Artificial Intelligence"; "Explaining Machine Learning Predictions"; and a few others (see the toy SMT sketch after this list for a flavor of the verification idea). https://www.youtube.com/channel/UCUBpU4mSYdIn-QzhORFHcHQ/videos

  • The Schwartz Reisman Institute runs a multi-agent safety discussion group and is one of the very best AI safety sources I've seen anywhere. A few interesting videos: "An antidote to Universal Darwinism" (https://www.youtube.com/watch?v=ENpdhwYoF5g), as well as this kickass video on "whose intelligence, whose ethics" (https://www.youtube.com/watch?v=ReSbgRSJ4WY). Channel: https://www.youtube.com/channel/UCSq8_q4SCU3rYFwnA2bDxyQ

  • I would also encourage directly mentioning recent work from Anthropic, such as this paper from this month: "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" (https://arxiv.org/abs/2204.05862).

  • The Simons Institute for the Theory of Computing at UC Berkeley is a contender for my #1 recommendation from this whole list. Banger talk after banger talk after banger talk there, including several recent workshops with a kickass AI safety focus. https://www.youtube.com/user/SimonsInstitute

A notable recent workshop is "learning in the presence of strategic behavior": https://www.youtube.com/watch?v=6Uq1VeB4h3w&list=PLgKuh-lKre101UQlQu5mKDjXDmH7uQ_4T

Another fun one is "learning and games": https://www.youtube.com/watch?v=hkh23K3-EKw&list=PLgKuh-lKre13FSdUuEerIxW9zgzsa9GK9

They also have a number of "boot camp" lessons that appear to be meant for an advanced interdisciplinary audience. The current focus of talks is on causality and games, and they have some banger talks on "how not to run a forecasting competition", "the invisible hand of prediction", "communicating with anecdotes", "the challenge of understanding what users want", and my personal favorite, due to its fundamental reframing of what game theory even is, "in praise of game dynamics": https://www.youtube.com/watch?v=lCDy7XcZsSI
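
As promised above, here's a toy sketch of the SMT-based verification idea from the CPAIOR talks. This is my own illustration, not taken from any of the talks: it uses the z3-solver Python package on a hypothetical one-neuron ReLU network with made-up weights. We encode the network and the negation of a safety property as constraints; if the solver answers unsat, no counterexample exists, so the property holds for every input in the region. Dedicated tools like Reluplex and Marabou scale this same idea to real networks.

```python
# Toy sketch of SMT-based neural network verification with Z3.
# Hypothetical one-neuron network: y = w2 * relu(w1 * x + b1) + b2
from z3 import Real, If, Solver, And, sat

w1, b1, w2, b2 = 2.0, -1.0, 1.5, 0.5  # made-up weights for illustration

x = Real("x")
h = w1 * x + b1
relu_h = If(h > 0, h, 0)  # ReLU encoded as an if-then-else term
y = w2 * relu_h + b2

# Safety property: for all x in [0, 1], y < 2.5.
# Assert the input region plus the NEGATION of the property,
# then ask Z3 for a counterexample.
s = Solver()
s.add(And(x >= 0, x <= 1))  # input region
s.add(y >= 2.5)             # negation of the safety property
if s.check() == sat:
    print("counterexample found:", s.model())
else:  # unsat: no counterexample exists, so the property is proved
    print("property holds for all x in [0, 1]")
```

On these made-up weights the maximum output on [0, 1] is 2.0, so Z3 reports unsat and the bound is verified; tightening the asserted violation to y >= 1.5 would instead yield a concrete counterexample input.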

In general I have a higher error rate than some folks on LessWrong, and my recommendations should be considered weaker and more exploratory. But here you go: those are my exploratory recommendations, and I have lots and lots more suggestions for more capability-focused stuff on my short form.