Announcing the Technical AI Safety Podcast

post by Quinn (quinn-dougherty) · 2020-12-07T18:51:58.257Z · LW · GW · 6 comments

This is a link post for https://technical-ai-safety.libsyn.com/

Contents

  Episode 0 script
6 comments

Please comment if you use an obscure podcast app and I'll try to get the feed set up there. I'll have at least Spotify, Apple Podcasts, and Stitcher up shortly. EDIT: Spotify, Pocketcasts, Stitcher, and Apple Podcasts are live. Google Podcasts is simply time-gated at this point unless something goes wrong.

Episode one should go up in a few weeks. We're doing two episodes on shielding in RL to get started.

Feedback form: https://forms.gle/4YFCJ83seNwsoLnH6

Request an episode: https://forms.gle/AA3J7SeDsmADLkgK9

Episode 0 script

The Technical AI Safety Podcast is supported by the Center for Enabling Effective Altruist Learning and Research, or CEEALAR. CEEALAR, known to some as the EA Hotel, is a nonprofit focused on alleviating bottlenecks to desk work in the effective altruist community. Learn more at ceealar.org.

Hello, and welcome to the Technical AI Safety Podcast. Episode 0: Announcement.

This is the announcement episode, briefly outlining who I am, what you can expect from me, and why I'm doing this.

First, a little about me. My name is Quinn Dougherty. I'm no one in particular: not a grad student, not a high-karma contributor on LessWrong, nor even really an independent researcher. I only began studying math and CS in 2016, and I haven't even been laser-focused on AI Safety for most of the time since. However, I eventually came to think there's a reasonable chance AGI poses an existential threat to the flourishing of sentient life, and I think it's nearly guaranteed that it poses a global catastrophic threat to the flourishing of sentient life. I recently quit my job and decided to focus my efforts in this area. My favorite area of computer science is formal verification, but I think I'm literate enough in machine learning to get away with a project like this. We'll have to see; ultimately, you the listeners will be the judges of that.

Second, what can you expect from me? My plan is to read the Alignment Newsletter (produced by Rohin Shah) every week, cold-email authors of papers I think are interesting, and ask them to do interviews about their papers. I'm forecasting 1-2 episodes per month, each interview running 45-120 minutes, and there's already a Google Form you can use to request episodes (just link me to a paper you're interested in) as well as a general feedback form. Just look in the show notes.

Finally, why am I doing this? You might ask, don't 80,000 Hours and the Future of Life Institute cover AI Safety in their podcasts? My claim to you is: not exactly. While 80k and FLI produce a mean podcast, they're interdisciplinary. As I see it, theirs are podcasts for computer scientists to come together with policy wonks and philosophers. But as far as I know, there's a gap in the podcast market: there isn't yet a podcast just for computer scientists on the topic of AI Safety. This is the gap I'm hoping to fill. So with me, you can expect jargon, and you can expect a modest barrier to entry, so that we can go on deep dives into the papers we cover. We will not be discussing the broader context of why AI safety is important. We will not cover the distinction between existential and catastrophic threats, and we will not look at policy or philosophy; if that's what you want, you can find plenty of it elsewhere. But we will, only on occasion, make explicit the potential for the results we cover to solve a piece of the AI safety puzzle.

6 comments

Comments sorted by top scores.

comment by Rohin Shah (rohinmshah) · 2021-05-01T17:49:09.735Z · LW(p) · GW(p)

FYI, I personally dislike audio as a means of communicating information, and so I probably won't be summarizing these for the Alignment Newsletter unless they have transcripts.

(This is not a request for transcripts -- I usually don't get that much out of podcasts like this, because I've usually already spent a bunch of time understanding the papers they're based on. Treat it more like an external constraint of the world, that the Alignment Newsletter happens to have a strong bias against audio- or video-only content. This is also not a guarantee that I will summarize it if it does have a transcript.)

Replies from: quinn-dougherty
comment by Quinn (quinn-dougherty) · 2021-05-01T22:00:55.731Z · LW(p) · GW(p)

Thanks for reaching out! Alex had passed the note about transcripts on to me; I hope to get to it (including the backlog of already released episodes) in the next few months.

comment by Neel Nanda (neel-nanda-1) · 2020-12-08T21:53:34.705Z · LW(p) · GW(p)

This seems like an awesome project! I'm excited to see where this goes.

comment by adamShimi · 2020-12-07T21:47:55.556Z · LW(p) · GW(p)

I like the idea! I'll listen to the first few episodes to see if I get anything from them.

comment by Ollie Sayeed (ollie-sayeed-1) · 2020-12-08T15:07:21.225Z · LW(p) · GW(p)

Looking forward to it! Will it be on Pocketcasts?

Replies from: quinn-dougherty
comment by Quinn (quinn-dougherty) · 2020-12-08T19:02:02.930Z · LW(p) · GW(p)

When I submitted to Pocketcasts, it said we were already on it :) https://pca.st/9froevor