Current AI harms are also sci-fi

post by Christopher King (christopher-king) · 2023-06-08T17:49:59.054Z · LW · GW · 3 comments

A common argument I've seen for focusing on current AI harms over future ones is that the future ones are sci-fi (i.e. the ideas originated in the science fiction genre). This argument is fallacious, though, because many (perhaps all?) current AI harms are also sci-fi. It is an isolated demand for rigor [LW · GW].

3 comments

comment by gjm · 2023-06-08T18:50:58.957Z · LW(p) · GW(p)

I don't think people who refer to extinction threats from AI as "science-fictional" are making an implicit argument along the lines of "there have been science fiction stories about this, therefore it will not happen".

Their argument is more like "there is no clear sign of this happening in the immediate future, nor an obvious path by which it can happen using only well understood and verified phenomena; so far things like it are found only in science fiction and not in reality, and for it to become real will require other things that so far are found only in science fiction -- e.g., nanotechnology, "backdoors" in the human mind -- to become real first; so if it seems like an imminent threat to you, the most likely explanation is that you are taking science fiction as a reliable guide to future reality, and you shouldn't".

That may not be a good argument! But it's an argument that isn't weakened much by giving examples of things that science fiction predicted and are now either real or imminent.

It is weakened a bit by that, because "X is found, so far, only in science fiction" is less evidence for "we should not worry too much about X" if things found only in science fiction very commonly become reality. But there are plenty of common science fiction tropes that fairly clearly aren't close to happening in the near future -- e.g., faster-than-light spacecraft, teleportation, antigravity -- so if someone finds something credible largely because they've encountered it in SF, they're probably making a mistake.

Replies from: christopher-king
comment by Christopher King (christopher-king) · 2023-06-09T01:30:36.047Z · LW(p) · GW(p)

I do agree that it provides some evidence against the idea [LW · GW], but I've read some people trying to dismiss AI risk in its entirety with the argument that it's sci-fi. This is obviously much too strong a conclusion to reach, because the same argument would have prevented you from accepting the current harms.

comment by tjaffee · 2023-06-09T15:26:29.193Z · LW(p) · GW(p)

I wouldn't consider AI art to be an "AI harm" - I think it's a tremendous net benefit for artists, just like the digital camera or Photoshop.