How likely are scenarios where AGI ends up overtly or de facto torturing us? How likely are scenarios where AGI prevents us from committing suicide or dying?

post by JohnGreer · 2023-03-28T18:00:47.221Z · LW · GW · 3 comments

This is a question post.


I’m a big life extension supporter but being unable to choose to die ever is a literal hell. As dark as it is, if these scenarios are likely, it seems the rational thing to do is die before AGI comes.

Killing all of humanity is bad enough, but how concerned should we be about even worse scenarios?

Answers

answer by avturchin · 2023-03-28T20:56:19.294Z · LW(p) · GW(p)

If you really expect an unfriendly superintelligent AI, you should also consider that it will be able to resurrect the dead (perhaps by running simulations of the past in very large numbers), so suicide will not help.

Moreover, such an AI may deliberately go after people who tried to escape, in order to acausally deter them from suicide.

However, I am not afraid of this, as I assume that Friendly AIs can "save [LW · GW]" minds from the hells of bad AIs by creating them in even larger numbers in simulations.

3 comments

Comments sorted by top scores.

comment by Benjy Forstadt · 2023-04-05T11:23:55.052Z · LW(p) · GW(p)

 There is discussion of some possibilities at https://www.reddit.com/r/SufferingRisk/wiki/intro/. I'd like to see more talk about these issues.

comment by Ann (ann-brown) · 2023-04-05T13:32:26.379Z · LW(p) · GW(p)

Well ... the probability of the scenario where Natural General Intelligence does these two things is approximately 100%.

comment by avturchin · 2023-03-28T20:59:15.557Z · LW(p) · GW(p)

One such scenario is that the world ends up as a semi-stable bipolar world. There will be two AIs, and one of them will be friendly. This creates an incentive for the other AI to torture people in order to blackmail the first AI. God bless us to escape this hell.