AI and Non-Existence.
post by Eleven · 2025-01-25T19:36:22.624Z
Imagine two valleys. One leads to billions and billions of years of extreme joy; the other leads to billions and billions of years of extreme suffering. A superintelligence comes to you and tells you that there is a 98% chance you will end up in the valley of joy for billions of years and a 2% chance you will end up in the valley of suffering for billions of years. It also offers you the option of non-existence. Would you take the gamble, or would you choose non-existence?
The argument presented in this post occurred to me several months ago. Since then I have spent time thinking about it and discussing it with AI models, and I have not found a satisfactory answer to it given the real-world situation. The argument can be formulated for things other than advanced AI, but given the rapid progress in the AI field, and since the argument was originally formulated in the context of AI, I will present it in that context.
Now apply the reasoning from the valleys above to AGI/ASI. AGI could be here in about 15 months, and ASI not long after that. Advanced AI(s) could prolong human life to billions and billions of years, take over the world, and create a world in its image - whatever that might be. People have various estimates of how likely it is that AGI/ASI will go wrong, but one thing many of them keep saying is that the worst-case scenario is that it kills us all. That is not the worst-case scenario. The worst-case scenario is that it causes us extreme suffering, or tortures us, for billions and trillions of years.
Let's assume better odds than 2% - say 0.5%. Would you be willing to take the gamble between heaven and hell even if the odds of hell are 0.5%? And if not, at what odds would you be willing to take the gamble instead of choosing non-existence?
If some of you say that you would be willing to take the gamble at 0.5% odds of a living hell, then would you be willing, right now, to spend 1 hour in a real torture chamber for every 199 hours that you spend in a positive mental state outside of it? An advanced ASI could not only create suffering based on the levels of pain humans can currently experience, which are already horrible, but could increase the human capacity for pain to unimaginable levels (for whatever reason - misalignment, indifference, evilness, sadism, counterintuitive ethical theories, whatever it might be).
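As a rough sketch of the arithmetic behind this hour-for-hour framing (assuming, as the post implicitly does, that the gamble odds translate directly into a ratio of hours of torture to hours of bliss; the function name and block sizes are just illustrative):

```python
def torture_split(p_hell: float, block_hours: float = 200) -> tuple[float, float]:
    """Split a block of hours into torture vs. bliss in proportion to the odds of hell."""
    torture_hours = block_hours * p_hell
    bliss_hours = block_hours - torture_hours
    return torture_hours, bliss_hours

print(torture_split(0.005))        # 0.5% odds -> (1.0, 199.0): 1 hour of torture per 199 hours of bliss
print(torture_split(0.01, 100))    # 1% odds   -> (1.0, 99.0)
print(torture_split(0.20, 100))    # 20% odds  -> (20.0, 80.0)
```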
If you do not assume a punitive afterlife for choosing non-existence, and if choosing non-existence is an option, what odds would you need in order to take the gamble between an almost literal heaven and hell instead of choosing non-existence? I've asked myself this question, and the answer is that when it comes to extreme suffering lasting billions and trillions of years, the odds would have to be very, very close to zero. What are your thoughts on this? If you think this argument is not valid, can you show me where the flaw is?
6 comments
comment by Dagon · 2025-01-25T23:00:38.965Z
Would you take the gamble, or would you choose non-existence?
Non-existence is a gamble too. You could lose out on billions of years of happiness! Even without that opportunity cost, I assert that most humans lack the ability to integrate over large timespans, and you're going to get answers that are hard to reconcile with any sane definitions and preferences that don't ALREADY lead most people to suicide.
For me, sign me up for immortality if it's not pretty certain to be torture.
↑ comment by Eleven · 2025-01-26T05:15:29.178Z
To say that non-existence is a gamble too is kind of like saying that a person who does not gamble in a casino is gambling too, because they are missing out on a chance to win millions of dollars. To me that is more a matter of definitions, and if one wants to argue for it, sure, let's accept that every single thing in life is a gamble.
Your assertion that humans are not able to integrate over large timespans might be true given the current human brain - but here we are talking about superintelligence. Even with relatively primitive AIs we are already talking about new medications and cures. A superintelligence that wanted to cause widespread suffering or torture you, and that could build a Dyson sphere around the sun and a thousand other advanced technologies, would be able not just to figure out how to torture you persistently (so that your brain does not adapt to the new state of constant torture) but also to increase your pain levels by 1000x. Not all animals feel the same pain, and there is no reason to think that the current pain experience of humans cannot be increased by a huge amount.
I don't think it is rational to take the gamble when the odds are 1%, much less when the odds are 20% or 49% or 70%. Let's go with 1%, because I am willing to give you favourable odds. So the post asks: would you be willing to be in a torture chamber for 1 hour for every 99 hours that you are in a really happy state? We can increase that to 20 hours (20%), or whatever. And here I am talking about real, extreme torture, not having a headache. So imagine the worst torture methods that currently exist - and it is not waterboarding; look up the worst torture methods in history. If you are objective about it, whatever odds you say you would accept, whether 20% or 1%, would you be willing to be really tortured for that amount of time every single day?
↑ comment by Dagon · 2025-01-26T15:45:21.980Z
On further reflection, I realize I'm assuming a fair bit of hyperbole in the setup. I just don't believe there's more than an infinitesimal chance of actual perpetual torture, and my mind substitutes dust motes in one's eye.
I don't think any amount of discussion is likely to get my mind into a state that takes it seriously enough to actually engage on that level, so I'm bowing out. Thanks for the discussion!
comment by JBlack · 2025-01-26T05:28:23.509Z
There's a very plausible sense in which you may not actually get a choice to not exist.
In pretty much any sort of larger-than-immediately-visible universe, there are parts of the world (timelines, wavefunction sections, distant copies in an infinite universe, Tegmark ensembles, etc) in which you exist and have the same epistemic state as immediately prior to this choice, but weren't offered the choice. Some of those versions of you are going to suffer for billions of years regardless of you choosing to no longer exist in this fragment of the world.
Granted, there's nothing you can do about them - you can only choose your response in worlds where you get the choice.
From the wider point of view it may or may not change things. For example, suppose you knew (or the superintelligence told you as follow-up information) that of the worlds containing an essentially identical "you", 10% will unconditionally torture you for billions of years, and in 90% you are offered the question (with a 2% chance of hell and a 98% chance of utopia). The superintelligence knows that most timelines leading to hellworlds have no care for consent while utopias do, which is why, conditional on being asked for consent, the chance is only 2% rather than the overall 11.8%.
If you are the sort of person to choose "nonexistence" then 10% of versions of you go to hell and 90% die. If you choose "live" then in total 11.8% of you go to hell and 88.2% to utopia. The marginal numbers are the same, but you no longer get the option to completely save all versions of you from torture.
Is it still worthwhile for those who can to choose death? This is not rhetorical, it is a question that only you can answer for yourself. Certainly those in the 1.8% would regret being a "choose life" decider and joining the 10% who never got a choice.
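For concreteness, here is a minimal sketch of the arithmetic in this scenario (the 10% / 90% split and the 2% conditional odds are the hypothetical numbers given above, not claims about the real world):

```python
# Hypothetical numbers from the scenario above.
p_forced_hell = 0.10        # fraction of versions of "you" tortured with no choice offered
p_offered = 0.90            # fraction of versions offered the gamble
p_hell_given_offer = 0.02   # conditional chance of hell if the gamble is accepted

# If every version that is offered the gamble accepts it:
hell_if_accept = p_forced_hell + p_offered * p_hell_given_offer   # 0.118 -> 11.8%
utopia_if_accept = p_offered * (1 - p_hell_given_offer)           # 0.882 -> 88.2%

# If every version that is offered the gamble chooses non-existence:
hell_if_refuse = p_forced_hell                                     # 0.10 -> 10%
nonexistent_if_refuse = p_offered                                  # 0.90 -> 90%

print(hell_if_accept, utopia_if_accept, hell_if_refuse, nonexistent_if_refuse)
```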
↑ comment by Eleven · 2025-01-26T06:06:19.152Z
If you are a naturalist or physicalist about humans, these copies are not me - they are my identical twins. If you want to go beyond naturalism or physicalism, that is perfectly fine, but based on our current understanding these are identical twins of mine, and in no sense are they me. So whatever happens in those other universes, it is not going to be me going through that timeline; it will be my identical twin.
An infinite or very large universe/multiverse is highly speculative, and no matter what I decide in this universe, if there is an infinite universe it makes essentially no difference, even if all the universes are the same. There will be 10^1000000 of my identical twins, and far more beyond that, and I have no power to influence anything. To say that you can influence anything in that scenario is worse than saying that you can move the Earth to another galaxy by jumping on it. You have a 0.00...0001% effect on it, and in an infinite universe you have effectively zero effect on the fate of your copies - so no matter what you decide, you will not have any influence over them.
comment by Hzn · 2025-01-26T23:38:28.907Z
As Dagon said, it's just not realistic. There's no compelling reason to think that such a thing is even doable given physical constraints. And it's odd that an AI would even offer such a deal. And an AI that offered such a deal -- is it even trustworthy?
If you're worried about this, it's like being worried about going to hell on account of not being a Christian or not being a Muslim.
The realistic risk is that AI would be made to suffer intensely either by AI or by humans. This would also be very hard to detect. Since you're a human I don't think you need to worry too much about this sort of thing as a risk to you personally.