A long reply to Ben Garfinkel on Scrutinizing Classic AI Risk Arguments

post by Søren Elverlin (soren-elverlin-1) · 2020-09-27T17:51:31.712Z · LW · GW · 6 comments


In July, Ben Garfinkel scrutinized the classic AI Risk arguments in a 158-minute interview with 80,000 Hours, which I strongly recommend.

I have formulated a reply and recorded 80 minutes of video as part of two presentations in the AISafety.com Reading Group:

196. Ben Garfinkel on Scrutinizing Classic AI Risk Arguments

197. Ben Garfinkel on Scrutinizing Classic AI Risk Arguments 2

I strongly recommend turning subtitles on. Also consider increasing the playback speed.


"I have made this longer than usual because I have not had time to make it shorter."
-Blaise Pascal

The podcast/interview format is less well suited for critical text analysis than a formal article or a LessWrong post, for three reasons:

  1. Lack of precision. Placing each qualifier carefully and deliberately while speaking is a difficult skill, and at several points I was uncertain whether I was parsing Ben's sentences correctly.

  2. Lack of references. The "Classic AI Risk Arguments" are expansive, and critical text analysis requires clear pointers to the specific arguments being criticized.

  3. Expansiveness. There are a lot of arguments presented, and many of them deserve formal answers. Unfortunately, this is a large task, and I hope you will forgive me for replying in the form of a video.

tl;dw: A number of the arguments Ben Garfinkel criticizes are in fact not present in either "Superintelligence" or "The AI Foom Debate". (This summary is incomplete.)

6 comments


comment by Howie Lempel (howie-lempel) · 2020-09-28T12:21:33.036Z · LW(p) · GW(p)

> The podcast/interview format is less well suited for critical text analysis than a formal article or a LessWrong post, for three reasons:
>
> Lack of precision. Placing each qualifier carefully and deliberately while speaking is a difficult skill, and at several points I was uncertain whether I was parsing Ben's sentences correctly.
>
> Lack of references. The "Classic AI Risk Arguments" are expansive, and critical text analysis requires clear pointers to the specific arguments being criticized.
>
> Expansiveness. There are a lot of arguments presented, and many of them deserve formal answers. Unfortunately, this is a large task, and I hope you will forgive me for replying in the form of a video.
>
> tl;dw: A number of the arguments Ben Garfinkel criticizes are in fact not present in either "Superintelligence" or "The AI Foom Debate". (This summary is incomplete.)

Hi Soren,

I agree that podcasts/interviews have some major disadvantages, though they also have several advantages. 

Just wanted to link to Ben's written versions of some (but not all) of these arguments in case you haven't seen them. I don't know whether they address the specific things you're concerned about. We linked to these in the show notes, and if we didn't explicitly flag during the episode that these existed, we should have.[1]

  1. On Classic Arguments for AI Discontinuities
  2. Imagining the Future of AI: Some Incomplete but Overlong Notes
  3. Slide deck: Unpacking Classic AI Risk Arguments
  4. Slide deck: Potential Existential Risks from Artificial Intelligence

(1) and (3) are most relevant to the things we talked about on the podcast. My memory's hazy but I think (2) and (4) also have some relevant sections. 

Unfortunately, I probably won't have time to watch your videos, though I'd really like to.[2] If you happen to have any easy-to-write-down thoughts on how I could've made the interview better (including, for example, parts of the interview where I should've pushed back more), I'd find that helpful.

[1] JTBC, I think we should expect that most listeners are going to absorb whatever's said on the show and not do any additional reading.

[2] ETA: Oh - I just noticed that YouTube now has an 'open transcript' feature, which makes it possible I'll be able to get to this.

comment by Søren Elverlin (soren-elverlin-1) · 2020-09-29T04:27:41.424Z · LW(p) · GW(p)

Hi Howie,

Thank you for reminding me of these four documents. I had seen them, but I dismissed them early in the process. That might have been a mistake, and I'll read them carefully now.

I think you did a great job with the interview. I describe one place where you could have pushed back more here: https://youtu.be/_kNvExbheNA?t=1376. You asked, "...Assume that among the things that these narrow AIs are really good at doing, one of them is programming AI...", and Ben Garfinkel gave a broad answer about "doing science".

comment by Howie Lempel (howie-lempel) · 2020-09-29T10:21:51.996Z · LW(p) · GW(p)

On the documents:

Unfortunately, I read them nearly a year ago, so my memory's hazy. But (3) goes over most of the main arguments we talked about in the podcast step by step, though it's just slides, so you may have similar complaints about the lack of close analysis of the original texts.

(1) is a pretty detailed write up of Ben's thoughts on discontinuities, sudden emergence, and explosive aftermath. To the extent that you were concerned about those bits in particular, I'd guess you'll find what you're looking for there.

comment by Howie Lempel (howie-lempel) · 2020-09-29T10:16:52.921Z · LW(p) · GW(p)

Thanks! Agree that it would've been useful to push on that point some more.

I know Ben was writing up some additional parts of his argument at some point, but I don't know whether finishing that up is still something he's working on.

comment by habryka (habryka4) · 2020-09-26T18:39:47.774Z · LW(p) · GW(p)

Sorry for the unfortunate timing of this post and Petrov Day! When the frontpage goes back up tomorrow, I will bump this post to make sure it gets some proper time on the frontpage.
