How likely do you think worse-than-extinction type fates to be?
post by span1 · 2023-03-24T21:03:12.184Z · LW · GW · 4 comments
The risks of unaligned AGI are usually couched in terms of existential risk, where the outcome is explicitly or implicitly human extinction. However, given the nature of what an AGI would be able to do, it seems as though there are plausible scenarios worse than human extinction. Just as on an individual level it is possible to imagine fates to which we would prefer death (e.g. prolonged terminal illness or imprisonment), it is also possible to extend this to humanity as a whole, ending up in hell-type fates. These could arise from a technical error, wilful design by the initial creators (say, religious extremists), or some other unforeseen mishap.
How prominently do these concerns feature for anyone? How likely do you think worse-than-extinction scenarios are?
4 comments
Comments sorted by top scores.
comment by RHollerith (rhollerith_dot_com) · 2023-03-26T19:51:50.783Z · LW(p) · GW(p)
Someone concerned about this possibility has posted to this site and used the term "s-risk".
It is approximately as difficult to create an AI that wants people to suffer as it is to create one that wants people to flourish, and humanity is IMO very far from being able to do the latter, so my main worry is that an AGI will kill us all painlessly.
Replies from: awg
comment by awg · 2023-03-26T20:33:18.731Z · LW(p) · GW(p)
Some advanced intelligence that takes over doesn't have to be directed toward human suffering for s-risk to happen. Suffering could arise simply as a byproduct of whatever unimaginable things the advanced intelligence wants or does as it goes about its own business, completely heedless of us. In that case we would be suffering in the same way that some nameless species in some niche of the world suffers because humans, unaware that the species even exists, are encroaching on and destroying its natural domain while going about our own comparatively unimaginable business.
Replies from: JBlack
comment by JBlack · 2023-03-26T23:42:16.349Z · LW(p) · GW(p)
I think there are two confusions here. This comment appears to be conflating the "suffering" of a species with suffering of individuals within it, and also temporary suffering of the dying with suffering that is protracted indefinitely.
The term s-risk usually refers to indefinitely extended suffering on a scale much greater than has been normal in human history. Centrally, it refers to scenarios in which most people would prefer to die but can't.