Eliezer on The Lunar Society podcast

post by Max H (Maxc) · 2023-04-06T16:18:47.316Z · LW · GW · 5 comments

This is a link post for https://www.dwarkeshpatel.com/p/eliezer-yudkowsky#details


Another podcast appearance. Great to see Eliezer on a real media blitz tour.

For 4 hours, I tried to come up with reasons why AI might not kill us all, and Eliezer Yudkowsky explained why I was wrong.

We also discuss his call to halt AI, why LLMs make alignment harder, what it would take to save humanity, his millions of words of sci-fi, and much more.

If you want to get to the crux of the conversation, fast forward to 2:35:00 through 3:43:54. Here we go through and debate the main reasons I still think doom is unlikely.

Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.

 


Full transcript: https://www.dwarkeshpatel.com/p/eliezer-yudkowsky#details

5 comments


comment by Portia (Making_Philosophy_Better) · 2023-04-07T01:36:49.218Z · LW(p) · GW(p)

I really wish he had used the time to put this into a coherent, complete paper, with references, definitions, and open peer review: something citable and attackable.

I just tried going through the transcript looking for novel arguments, and... gah. Because it is transcribed audio, it does not even have correct grammar.

Replies from: Viliam
comment by Viliam · 2023-04-07T13:06:18.983Z · LW(p) · GW(p)

Another option is to hire someone else to convert ideas into properly written papers. Pay them with money, co-authorship, or both.

comment by AprilSR · 2023-04-08T05:56:52.148Z · LW(p) · GW(p)

I really like the "when you don't have a good detailed model, you need to figure out what space you should have the maximum entropy distribution over" framing.
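To make the framing concrete, here is a minimal sketch of my own (not something worked through in the podcast): with no further constraints, the maximum-entropy distribution over a finite outcome space is uniform, so the probability you end up assigning to any one outcome, such as "doom", depends entirely on which space you chose to be maximally uncertain over. The outcome labels below are purely illustrative.

```python
# Hypothetical illustration: the max-entropy (uniform) probability of "doom"
# changes depending on how the outcome space is carved up.

coarse_space = ["doom", "not doom"]
fine_space = ["doom", "muddle through", "aligned by default", "pause works"]

p_doom_coarse = 1 / len(coarse_space)  # 0.5
p_doom_fine = 1 / len(fine_space)      # 0.25

print(p_doom_coarse, p_doom_fine)
```

The point of the framing, as I read it, is that the disagreement is less about the arithmetic and more about which carving of outcome space deserves the maximum-entropy prior in the first place.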

comment by wachichornia · 2023-04-19T10:47:54.479Z · LW(p) · GW(p)

If I understood correctly, he mentions augmenting humans as a way out of existential risk; at least, I took him to have more faith in that than in making AI do our alignment homework. What does he mean by that? Increasing productivity? New drug development? Helping us get insights into new technology to develop? All of the above? I'd love to understand the ideas around that possible way out.

comment by Casey B. (Zahima) · 2023-04-07T00:59:21.570Z · LW(p) · GW(p)

Really extremely happy with this podcast, but I feel like it also contributed to a major concern I have about how this PR campaign is being conducted [LW · GW].