If AI starts to end the world, is suicide a good idea?

post by IlluminateReality · 2024-07-09T21:53:23.470Z · LW · GW · No comments

This is a question post.


For a while I’ve thought to myself that if AI starts to obviously end the world, I would just commit suicide, mainly to avoid any potential s-risk. But recently I’ve become far less certain that this would be a good idea.

Between the possibility of resurrection, quantum immortality, and weird acausal shenanigans, I’m not sure what my plan is for if the nano-factories start popping up and whatnot.

This uncertainty about what I should do in such a situation causes me more discomfort than having a plan would, even a grim one.

What do you think is the right thing to do if it becomes obvious that the world is ending due to AI?

Answers

answer by davekasten · 2024-07-10T04:28:06.242Z · LW(p) · GW(p)

I would politely but urgently suggest that if you're thinking a lot about scenarios where you could justify suicide, you might not be as interested in the scenarios as in the permission you think they might give you. And you might not realize that! Motivated reasoning is a powerful force for folks who are going through some mental troubles.

This is the sort of thing where checking in with a loved one about how they perceive your general affect and mood is a really good idea. I urge you to do that. You're probably fine and just playing with some abstract ideas, but why not check in with a loved one just in case?

comment by IlluminateReality · 2024-07-11T09:30:45.270Z · LW(p) · GW(p)

I appreciate the concern. I’m actually very averse to committing suicide, and any motivated reasoning on my part will be on the side of trying to justify staying alive. (To be clear, I think I have ample reasons to stay alive, at least for the time being.) My concern is that there might be good reasons to commit suicide (at some point), in which case I would rather know of them than be ignorant.

answer by mishka · 2024-07-10T01:38:27.462Z · LW(p) · GW(p)

I have a (vague) intuition that this is a bad idea in expectation.

And I have a (vague) intuition that one should focus on working to make things better (in particular, in a situation of inevitable technological singularity, to focus on trying to make it better and on trying to make one's own trajectory in it better).

The epistemological weakness of my position here is that I am transferring my intuition from more conventional situations to something which is very much outside our distribution, so we might find ourselves not really prepared to think about this adequately.


Let me try again, from a more abstract epistemological position.

If it is a bad singularity and it really wants to hurt you, it probably can resurrect you and such, so it's not clear how much of a defense this is.

But if it is a good singularity, it would probably respect your wish to die, and then you'll miss all the potential of ending up in a good singularity (the chances of that are, of course, subject to a rather intense debate these days, but even Eliezer does not put those chances at exactly zero).

This does seem to point towards dying being negative in expectation. (But in truth it might depend on your estimates of chances of various scenarios. My estimates of chances for positive singularity might be higher than your estimates, and this might color my judgement.)

(It is such an unusual situation that it is difficult to be sure. Speaking for myself, my own curiosity is sufficiently insatiable that if it is going to happen I want to see it and to experience it in any case, and to put effort into making it better, if that's at all possible.)

comment by IlluminateReality · 2024-07-11T13:54:22.144Z · LW(p) · GW(p)

From the point of view of reducing personal s-risk, trying to improve the world’s prospects seems like a way to convince yourself you’re doing something helpful without meaningfully reducing personal s-risk. I have significant uncertainty about even the order of magnitude by which I could reduce personal s-risk through activism, research, etc., but I’d imagine it would be less than 1%. To be clear, this does not mean that I think doing these things is a waste of time; in fact, it’s probably some of the highest expected-utility work anyone can do. But it’s not a particularly effective way to reduce personal s-risk. However, this plausibly changes if you factor in that being someone who helped make the singularity go well could put you in a favourable position post-singularity.

Regarding resurrection, do you know what the LessWrong consensus is on the position that continuation of consciousness is what makes someone the same person as 5 minutes ago? My impression is that this idea doesn’t really make sense, but it’s an intuitive one and a cause of some of my uncertainty about the feasibility of resurrection.

I’m surprised you think that a good singularity would let me stay dead if I had decided to commit suicide out of fear of s-risk. Presumably the benevolent AI/s would know that I would want to live, no?

Also, just a reminder that my post was about what to do conditional on the world starting to end (think nanofactories and geoengineering, and the AI/s being obviously not aligned). This means that the obvious paths to utopia are already ruled out by this point, although perhaps we could still get a slice of the lightcone for acausal trade/decision-theoretic reasons.

Also yeah, whether suicide is rational or not in this situation obviously comes down to your personal probabilities of various things.

Replies from: mishka
comment by mishka · 2024-07-11T16:12:06.829Z · LW(p) · GW(p)

Regarding resurrection, do you know what the LessWrong consensus is on the position that continuation of consciousness is what makes someone the same person as 5 minutes ago? My impression is that this idea doesn’t really make sense, but it’s an intuitive one and a cause of some of my uncertainty about the feasibility of resurrection.

Yes, we don't really know how reality works; that's one of the problems. We don't even know if we are in a simulation. So, it's difficult to be certain.

I’m surprised you think that a good singularity would let me stay dead if I had decided to commit suicide out of fear of s-risk. Presumably the benevolent AI/s would know that I would want to live, no?

It did occur to me that they would try to "wake you up" once (if that's feasible at all) and ask if you really meant to stay dead (while respecting your free will and refraining from manipulation).

And it did occur to me that it's not clear if resurrection is possible, or if a bad singularity would bother to resurrect you even if it is.

So, in reality, one needs to have a better idea about all kinds of probabilities, because the actual "tree of possible scenarios" is really complicated (and we know next to nothing about those).

So, I ended up noting that

my estimates of chances for positive singularity might be higher than your estimates, and this might color my judgement.

This does reflect my uncertainty about all this...


conditional on the world starting to end

Ah, I had not realized that you were talking not just about the transformation being sufficiently radical ("end of the world known to us"), but about it specifically being bad...

My typical approach to all that is to consider non-anthropocentric points of view (this allows one to take a step back and to think in a more "invariant way"). In this sense, I suspect that "universal X-risk" (that is, the X-risk which threatens to destroy everything including the AIs themselves) dominates (I am occasionally trying to scribble something about that: https://www.lesswrong.com/posts/WJuASYDnhZ8hs5CnD/exploring-non-anthropocentric-aspects-of-ai-existential [LW · GW]).

In this sense, while it is possible to have scenarios where huge suffering is inflicted but the "universal X-risk" is somehow avoided, this does not seem, to my (unsubstantiated) intuition, to be too likely. The need to control the "universal X-risk" and to protect the interests of individual members of the AI ecosystem requires a degree of "social harmony" of some sort within the AI ecosystem.

I doubt that anthropocentric approaches to AI alignment are likely to fare well, but I think that a harmonious AI ecosystem where all individuals (including AIs, humans, and so on) are sufficiently respected and protected might be feasible; e.g., I tried to scribble something to that effect here: https://www.lesswrong.com/posts/5Dz3ZrwBzzMfaucrH/ai-57-all-the-ai-news-that-s-fit-to-print?commentId=ckYsqx2Kp6HTAR22b [LW(p) · GW(p)]

I think that if an AI ecosystem permits massive suffering to be inflicted within itself, this would increase the risks to all members, and to the ecosystem as a whole. It's difficult to imagine something like this going on for a long time without a catastrophic blow-up. (Although, of course, what do I know...)

answer by Tomás B. · 2024-07-09T22:11:56.974Z · LW(p) · GW(p)

I'm not convinced you can get any utility from measure-reducing actions unless you can parlay the advantage they give you into making more copies of yourself in the branch in which you survive. I am not happy about the situation, but it seems I will be forced to endure whatever comes and there will never, ever be any escape.

comment by IlluminateReality · 2024-07-11T14:06:34.666Z · LW(p) · GW(p)

Are you implying that all of the copies of yourself should meaningfully be thought of as the same person? Why would making more copies of yourself increase your utility?

Also, I take it from “never, ever be any escape” that you believe quantum immortality is true?

Replies from: Bjartur Tómas
comment by Tomás B. (Bjartur Tómas) · 2024-07-11T14:27:32.796Z · LW(p) · GW(p)

I think about anticipated future experiences. All future slices of me have the same claim to being me.
