Robin Hanson asks "Why Not Wait On AI Risk?"

post by Gunnar_Zarncke · 2022-06-26T23:32:19.436Z · LW · GW · 4 comments

This is a link post for https://www.overcomingbias.com/2022/06/why-not-wait-on-ai-risk.html

UPDATE: This is an accidental duplicate. Please go to the first posting here [LW · GW].

Since the AI-Foom Debate [? · GW], Robin Hanson has been quiet for a while, but now:

I feel I should periodically try again to explain my reasoning, and ask others to please help show me what I’m missing.

He analyzes multiple widely expressed AI concerns from an economic angle and comes to the conclusion:

Similar concerns have often been expressed about allowing too many foreigners to immigrate into a society, or allowing the next youthful generation too much freedom to question and change inherited traditions. Or [many other comparable concerns].

Many have deep and reasonable fears regarding big long-term changes. And some seek to design AI so that it won’t allow excessive change. But this issue seems to me much more about change in general than about AI in particular.

There is much more in the post, but take this as a TL;DR.

4 comments

Comments sorted by top scores.

comment by Adam Zerner (adamzerner) · 2022-06-26T23:42:09.436Z · LW(p) · GW(p)

Note: This was posted recently as well: https://www.lesswrong.com/posts/H2N6eWw8JgxwKHPM3/linkpost-robin-hanson-why-not-wait-on-ai-risk [LW · GW].

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2022-06-27T06:20:53.653Z · LW(p) · GW(p)

Arg. I did check whether that was the case using the LW search for "Robin Hanson", but it didn't turn up in the first pages of the results.

comment by shminux · 2022-06-27T01:57:01.579Z · LW(p) · GW(p)

One of Hanson's points is 

As I’ve previously explained at length, that seems to me to postulate a quite unusual lumpiness relative to the history we’ve seen for innovation in general, and more particularly for tools, computers, AI, and even machine learning. And this seems to postulate much more of a lumpy conceptual essence to “betterness” than I find plausible. Recent machine learning systems today seem relatively close to each other in their abilities, are gradually improving, and none seem remotely inclined to mount a coup.

which seems to be exactly what Eliezer is saying, so the crux of the disagreement lies here: Hanson calls it a "postulate", while Eliezer claims to derive it from rather general principles.

I have no firm view on the topic, and the expert opinions seem to differ quite a bit. 

comment by JBlack · 2022-06-27T06:22:37.795Z · LW(p) · GW(p)

The main thesis seems to be that the only way AI can go wrong is by ceasing to do what we want, just as humans might. Views attributed to Eliezer and the AI alignment community as a whole are strawmen of the most flagrant variety.

This article is trash.