Three Paths to Existential Risk from AI
post by harsimony · 2021-06-16T01:37:02.456Z · LW · GW · 2 comments
This is a link post for https://harsimony.wordpress.com/2021/06/09/three-paths-to-existential-risk-from-ai/
comment by Charlie Steiner · 2021-06-17T00:49:57.287Z · LW(p) · GW(p)
Description? Also, none of your scenarios seems to involve a big intelligence or multitasking advantage. It's certainly harder to imagine humans getting outwitted in many different ways in rapid sequence, culminating in an extremely efficient gain of power for the AI, but that actually seems more realistic to me for a fast takeoff (the other option being something like Paul's "gradual loss of control" slow takeoff).
Replies from: harsimony
↑ comment by harsimony · 2021-06-17T17:03:37.980Z · LW(p) · GW(p)
Good points. I would imagine that all of these scenarios are made possible by an intelligence advantage, but I did not make that explicit here.
Your point about multitasking (if I understood it correctly) is important too. We can imagine an unfriendly AI pursuing all three paths to existential catastrophe simultaneously. The question becomes: are there prevention strategies for combinations of these paths that work better than trying to prevent each path individually? I have to think on that more.