Linkpost for Accursed Farms Discussion / debate with AI expert Eliezer Yudkowsky

post by gilch · 2023-05-05T18:20:20.004Z · LW · GW · 2 comments

This is a link post for https://www.youtube.com/watch?v=hxsAuxswOvM

Eliezer has been on a podcast tour of late. This appears to be the latest one on YouTube.

I feel like this one didn't go so well. It's also over two hours long, so maybe skip it if you have better things to do, but for those who did watch, I thought I'd make a space for discussion.

Most of it was Eliezer using Socratic questioning to get to the bottom of the host's intuition that AI isn't "really" intelligent, or only imitates intelligence in fragments, and is therefore not dangerous. I'm not sure I understood the host's position; it didn't seem to be expressed very well.

Eliezer's goal in doing the interview seems to have been to figure out how to talk to a different audience. People in the know ask very different questions from those who are just learning about this.

2 comments

Comments sorted by top scores.

comment by niplav · 2023-05-06T13:56:53.797Z · LW(p) · GW(p)

I predict these interviews would be more successful if Eliezer were more polite to the interviewers. That would include things like not interrupting the interviewer and not becoming visibly exasperated with him.

On the plus side, he summarized the interviewer's position back to him at the end, which felt collegial and cooperative.

I think this comes out of Eliezer trying to debug the interviewer's thinking while conducting the interview (a thing I can empathise with).

comment by WychanSieht · 2023-05-09T01:53:44.732Z · LW(p) · GW(p)

I managed about an hour of the discussion... Very painful -- like Communist Re-Education Camp painful. From what I saw, EY was much more patient with the interviewer than I would have been... Maybe I'm old school, but generally, the interviewer tries to maximize the guest's time & engagement. His rambling & near-incoherent excursions... Ouch!!

I'd hoped his first question to EY would be, "Why are you so convinced that an AGI will Kill Us All?" Yes, the response would be 'old hat' on this forum, but probably not for this podcast's audience...