Best introductory overviews of AGI safety?
post by JakubK (jskatt) · 2022-12-13T19:01:37.887Z · LW · GW · 3 comments
This is a link post for https://forum.effectivealtruism.org/posts/aa6wwy3zmLxn7wLNb/best-introductory-overviews-of-agi-safety
This is a question post.
Contents
- Answers: Thomas Larsen, Michael Tontchev, Tor Økland Barstad
- 3 comments
I'm interested in what people think are the best overviews of AI risk for various types of people. Below I've listed as many good overviews as I could find (excluding some drafts), split into "good for a popular audience" and "good for AI researchers." I'd also like to hear whether people think some of these intros are better than others (i.e., how to prioritize between them). I'd be interested to hear about podcasts and videos as well.
I am maintaining a list at this Google doc to incorporate people's suggestions.
Popular audience:
- Vox (Kelsey Piper)
- AI alignment (Wikipedia -- perhaps the most important to get right!)
- Why alignment could be hard with modern DL (Ajeya Cotra)
- Stampy wiki and aisafety.info (Stampy team)
- The most important century blog post series summary, AI could defeat all of us combined, and Why would AI "aim" to defeat humanity? (Holden Karnofsky)
- Current work in AI alignment [EA · GW] (Paul Christiano)
- Future of Life Institute (Ariel Conn)
- Why worry about future AI? (Gavin Leech)
- 80k full profile (Benjamin Hilton)
- AGI Ruin: A list of lethalities [LW · GW] (Eliezer Yudkowsky)
- Extinction Risk from Artificial Intelligence (Michael Cohen)
- Set Sail For Fail? On AI risk (Nintil)
- A shift in arguments for AI risk (Tom Adamczewski)
- Is power-seeking AI an x-risk (Joseph Carlsmith)
- More is different for AI (Jacob Steinhardt)
- The Basic AI Drives (Steve Omohundro)
- AI Risk Intro 1: Advanced AI Might Be Very Bad [LW · GW] (TheMcDouglas and LRudL)
- Four Background Claims (Nate Soares)
AI researchers:
- CHAI bibliography
- Unsolved Problems in ML Safety (Dan Hendrycks)
- AI safety from first principles [? · GW] (Richard Ngo)
- The alignment problem from a DL perspective (Richard Ngo)
- X-risk analysis for AI research (Dan Hendrycks and Mantas Mazeika)
- Why I think more NLP researchers should engage with AIS concerns (Sam Bowman)
- More thoughts on outreach to researchers: Marius Hobbhahn [LW · GW], AISFB [EA · GW] and Resources I sent to AI researchers about AIS [EA · GW] (Vael Gates)
Alignment landscape:
- My Overview of the AI Alignment Landscape: A Bird's Eye View [LW · GW] (Neel Nanda)
- My understanding of what everyone in technical alignment is doing and why [LW · GW] (Thomas Larsen and Eli Lifland) and Alignment org cheat sheet [LW · GW] (Thomas Larsen and Akash Wasil)
Podcasts and videos:
- Intro to AI Safety, Remastered (Rob Miles)
- Researcher Perceptions of Current and Future AI (Vael Gates)
- AI Fire Alarm (Connor Leahy)
- Eliezer Yudkowsky interview with Sam Harris -- full audio is still only available to Sam Harris subscribers [LW · GW]
- Richard Ngo and Paul Christiano on AXRP
- Ensuring smarter-than-human intelligence has a positive outcome (Nate Soares)
- AI Alignment: Why It's Hard, and Where to Start (Eliezer Yudkowsky)
- Brian Christian and Ben Garfinkel on the 80,000 Hours Podcast
- Ajeya Cotra and Rohin Shah on the Future of Life Institute Podcast
- Some audio recordings of the readings above (e.g. Cold Takes audio, reading of 80k intro, EA Forum posts [EA · GW], EA Radio, Astral Codex Ten Podcast, Less Wrong Curated Podcast [LW · GW], Nonlinear Library [EA · GW])
- "Recommended EA talks/videos for university groups" includes some generally good channels to look for more videos from in the future
Answers
Answer by Thomas Larsen
My favorite for AI researchers is Ajeya's Without specific countermeasures [LW · GW], because I think it does a really good job of being concrete about a training setup that leads to deceptive alignment. It is also sufficiently non-technical that a motivated person unfamiliar with AI could understand the key points.
↑ comment by JakubK (jskatt) · 2022-12-13T22:04:50.107Z · LW(p) · GW(p)
Forgot to include this. It's sort of a more opinionated and ML-focused version of Carlsmith's report and has a corresponding video/talk (as does Carlsmith [EA · GW]).
Answer by Michael Tontchev
Want to add this one:
https://www.lesswrong.com/posts/B8Djo44WtZK6kK4K5/outreach-success-intro-to-ai-risk-that-has-been-successful [LW · GW]
This is the note I wrote internally at Meta. It has received over 300 reactions, and people have reached out to me saying it convinced them to switch to working on alignment.
↑ comment by JakubK (jskatt) · 2023-06-27T05:43:00.358Z · LW(p) · GW(p)
Thanks for writing and sharing this. I've added it to the doc.
Answer by Tor Økland Barstad
Good initiative.
Regarding introductions for a popular audience, I feel Tim Urban wrote an intro that is also worth mentioning: Part 1 - Part 2 - Reply from Luke Muehlhauser
Another one is A Response to Steven Pinker on AI (Rob Miles)
Btw, I sometimes recommend Superintelligence by Nick Bostrom (but that's an entire book)
It will be interesting to see what kinds of introductions are available a year or a few years from now. Some people have created good introductions, but I do feel there is room for improvement.
Btw, I think Rob Miles is working on a collaborative FAQ: https://stampy.ai/wiki/Main_Page (which he talks about here)
↑ comment by JakubK (jskatt) · 2022-12-13T22:30:36.798Z · LW(p) · GW(p)
Yeah, Tim Urban's is perhaps the most enjoyable/fun read. But I worry that skeptics won't take it seriously.
3 comments
comment by Ebenezer Dukakis (valley9) · 2023-02-16T07:12:01.865Z · LW(p) · GW(p)
Just saw this: https://www.lesswrong.com/posts/5rsa37pBjo4Cf9fkE/a-newcomer-s-guide-to-the-technical-ai-safety-field [LW · GW]
↑ comment by JakubK (jskatt) · 2023-02-16T19:13:03.133Z · LW(p) · GW(p)
Thanks, I added it to the doc.
comment by the gears to ascension (lahwran) · 2022-12-13T20:49:50.714Z · LW(p) · GW(p)
I appreciate already having a big list of candidates, so I can't comment with one!