‘Maximum’ level of suffering?
post by Anirandis · 2020-06-20T14:05:14.423Z · LW · GW · 1 comment
This is a question post.
Contents
Answers
  4  Nebu
  4  Kenny
  2  G Gordon Worley III
  2  Gurkenglas
  0  avturchin
  -3  t3rtius
1 comment
Is there a maximum level of suffering that could intentionally be achieved? Say there were a malevolent superintelligence in this universe that planned to create Matrioshka brains & maximise humanity’s suffering.
Could it rapidly expand one’s pain centres, with the effect of increasing their perception of pain, without ever hitting a wall? Would adding extra compute towards torturing someone lead to eventual diminishing returns? Do we have reason to think that there’d be some form of ‘limit’ in the pain it could feasibly cause, or would it be near-infinite?
I know this is quite a depressing and speculative topic, but I’m wondering what people think.
Answers
Answer by Nebu

For something to experience pain, some information needs to exist (e.g. in the mind of the sufferer, informing them that they are experiencing pain). There are known information limits, e.g. https://en.wikipedia.org/wiki/Bekenstein_bound or https://en.wikipedia.org/wiki/Landauer%27s_principle
These limits are related to entropy, space, energy, etc., so if you further assume the universe is finite (or perhaps equivalently, that the malicious agent can only access a finite portion of the universe due to e.g. speed-of-light limits), then there is an upper bound on the information possible, which implies an upper bound on the pain possible.
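As a rough sketch of how this kind of bound plays out numerically, here is a back-of-the-envelope Bekenstein-bound calculation; the brain-sized radius and mass are purely illustrative assumptions, not figures from the answer above:

```python
import math

# Bekenstein bound: I <= 2 * pi * R * E / (hbar * c * ln 2) bits, for a
# system of radius R (metres) and total energy E (joules).
HBAR = 1.054571817e-34  # reduced Planck constant, J*s
C = 2.99792458e8        # speed of light, m/s

def bekenstein_bound_bits(radius_m: float, mass_kg: float) -> float:
    """Upper bound on the information a sphere of this radius and mass can
    hold, taking E as the rest-mass energy E = m * c**2."""
    energy_j = mass_kg * C ** 2
    return 2 * math.pi * radius_m * energy_j / (HBAR * C * math.log(2))

# Illustrative human-brain-sized system: ~1.4 kg within a ~0.1 m radius.
print(f"{bekenstein_bound_bits(0.1, 1.4):.2e} bits")  # on the order of 10^42
```

The number is enormous, but finite, which is all the argument above needs.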
Answer by Kenny

Presumably pain works in some specific way (or some relatively narrow distribution of ways), so there probably is a maximum amount of pain that could be experienced in any circumstance. Real-life animals can and do die of shock, which seems like it might be some maximum 'pain' threshold being exceeded.
But suffering seems much more general than pain. Creating (e.g. simulating) a consciousness or mind and torturing it increases global suffering. Creating multiple minds and torturing them would increase suffering further.
What seems to be different about suffering (to at least some degree – real-life beings also seem to suffer sympathetic pain) is that additional suffering can be, in effect, created simply by informing other minds of suffering of which they were not previously aware. Some suffering is created by knowledge or belief, i.e. spread by information. (This post [? · GW] has a good perspective one can adopt to avoid being 'mugged' by this kind of information.)
The creation or simulation of minds is presumably bounded by physical constraints, thus there probably is some maximum amount of suffering possible.
Are there possible minds that can experience an infinite amount of pain or suffering? I think not. At a 'gears' level, it doesn't seem like pain or suffering could literally ever be infinite, even over an infinite span of time, though I admit that intuition rests on further assumptions, e.g. that there's a finite amount of matter in the universe, and that minds cannot exist for an infinite amount of time (e.g. because of the eventual heat death of the universe).
But even assuming minds can exist for an infinite amount of time, or could be arbitrarily 'large', I'd expect the amount of pain or suffering that any one mind could experience to be finite. Under those same assumptions (or similar ones), though, the total amount of pain or suffering experienced could be infinite.
↑ comment by Anirandis · 2020-06-20T19:40:56.328Z · LW(p) · GW(p)
Real-life animals can and do die of shock, which seems *like* it might be some maximum 'pain' threshold being exceeded.
In theory, would it not be possible for, say, a malevolent superintelligence to "override" any possibility of a "shock" reaction, and prevent the brain from shutting down? Wouldn't that allow for ridiculous amounts of agony?
It seems plausible to me that a sufficiently powerful agent could create some form of ever-growing agony by expanding subjects' pain centres to maximise pain; and the idea that the only limit is the point where most of the matter in the universe is part of someone's pain centre seems incredibly scary. I sincerely hope there's good reason to believe that a hypothetical "evil" superintelligence would get diminishing returns quite quickly.
Replies from: Kenny
↑ comment by Kenny · 2020-06-20T23:25:55.327Z · LW(p) · GW(p)
(You need a space between the > and the text being quoted to format it as a quote in Markdown.)
Sure, we can assume a malevolent super-intelligence could prevent people from going into shock and thus cause much more pain than otherwise.
But it's not clear how (or even whether) we can quantify pain (or suffering). From the perspective of information processing (or something similar), it seems like there would probably be a maximum amount of non-disabling pain, i.e. a 'maximum priority override' to focus all energy and other resources on escaping that pain as quickly as possible. It also seems unclear why evolution would result in creatures able to experience pain more intensely than such a maximum.
Let's assume pain has no maximum – I'd still expect a reasonable utility function to cap the (dis)utility of pain. If it didn't, the (possible) torture of just one creature capable of experiencing arbitrary amounts/degrees/levels of pain would effectively be 'Pascal's hostage' (something like a utility monster under the control of a malevolent super-intelligence).
But yes, a malevolent super-intelligence, or even just one that's not perfectly 'friendly', would be terrible and the possibility is incredibly scary to me too!
Replies from: Anirandis
↑ comment by Anirandis · 2020-06-20T23:49:02.503Z · LW(p) · GW(p)
I'd still expect a reasonable utility function to *cap* the (dis)utility of pain. If it didn't, the (possible) torture of just one creature capable of experiencing arbitrary amounts/degrees/levels of pain would effectively be 'Pascal's hostage'
I suppose I never thought about that, but I'm not entirely sure how it'd work in practice. Since the AGI could never be 100% certain that the pain it's causing is at its maximum, it might further increase pain levels, just to *make sure* that it's hitting the maximum level of disutility.
It also seems unclear why evolution would result in creatures able to experience pain more intensely than such a maximum.
I think part of what worries me is that, even if we had a "maximum" amount of pain, it'd be hypothetically possible for humans to be re-wired to remove that maximum. I'd think that I'd still be the same person experiencing the same consciousness *after* being rewired, which is somewhat troubling.
If the pain a superintelligence can cause scales linearly or better with computational power, then the thought is even more terrifying.
Overall, you make some solid points that I wouldn't have considered otherwise.
Replies from: Kenny
↑ comment by Kenny · 2020-06-22T16:52:44.956Z · LW(p) · GW(p)
My point about 'capping' the (dis)utility of pain was that one – a person or mind that isn't a malevolent (super-)intelligence – wouldn't want to be able to be 'held hostage' were something like a malevolent super-intelligence in control of some other mind that could experience 'infinite pain'. You probably wouldn't want to sacrifice everything for a tiny chance at preventing the torture of a single being, even if that being was capable of experiencing infinite pain.
I don't think it's possible, or even makes sense, for a mind to experience an infinite amount/level/degree of pain (or suffering). Infinite pain might be possible over an infinite amount of time, but that seems (at least somewhat) implausible, e.g. given that the universe doesn't seem to be infinite, seems to contain a finite amount of matter and energy, and seems likely to die of an eventual heat death (and thus not able to support life or computation indefinitely).
Even assuming that a super-intelligence could rewire human minds to just increase the amount of pain they can experience, a reasonable generalization is to a super-intelligence creating (e.g. simulating) minds (human or otherwise). That seems to me to be the same (general) moral/ethical catastrophe as your hypothetical(s).
But I don't think these hypotheticals really alter the moral/ethical calculus with respect to our decisions, i.e. the possibility of the torture of minds that can experience infinite pain doesn't automatically imply that we should avoid developing AGI or super-intelligences entirely. (For one, if infinite pain is possible, so might infinite joy/happiness/satisfaction.)
Answer by G Gordon Worley III

One possibility is "a lot", in that humans seem to interpret pain on a logarithmic scale [EA · GW] such that 2/10 pain is 10x worse than 1/10 pain, etc. However, there is likely some physiological limit to how much sensor data the human brain can process as pain and still register it as pain and suffer from it. This leaves out the possibility of modifying humans in ways that would allow them to experience greater pain.
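To make the logarithmic-scale point concrete, here is a toy sketch; the base-10 ratio between adjacent rating levels is an assumption chosen purely for illustration, not something the linked post commits to:

```python
# Toy model: if a self-reported 0-10 pain rating behaves like a logarithmic
# scale, the underlying intensity grows exponentially with the rating.
BASE = 10  # assumed ratio between adjacent rating levels

def underlying_intensity(rating: float) -> float:
    """Map a 0-10 pain rating to a (unitless) underlying intensity."""
    return BASE ** rating

print(underlying_intensity(2) / underlying_intensity(1))   # 10.0  (each extra point is 10x worse)
print(underlying_intensity(10) / underlying_intensity(1))  # 1e9   (the scale spans nine orders of magnitude)
```

Under this toy mapping, the open question is where the brain's ceiling sits on the intensity axis, which is exactly the physiological limit the answer points at.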
Note that I also think this question is exactly symmetrical to asking "what's the maximum level of pleasure", and so likely the answer to one is the answer to the other.
↑ comment by Anirandis · 2020-06-20T20:19:56.410Z · LW(p) · GW(p)
I think it's the modifying humans to experience pain part that's the most terrifying, to be honest.
Replies from: Dagon
↑ comment by Dagon · 2020-06-21T04:05:53.958Z · LW(p) · GW(p)
Interesting intuition. How do you feel about modifying humans (or yourself) to experience more pleasure? If they're not symmetrical, why not?
Replies from: Anirandis
↑ comment by Anirandis · 2020-06-21T13:07:28.434Z · LW(p) · GW(p)
I do agree that they’re symmetrical. I just find it worrying that I could potentially experience such enormous amounts of pain, even when the opposite is also a possibility.
Replies from: Dagon
↑ comment by Dagon · 2020-06-21T21:59:11.196Z · LW(p) · GW(p)
Worrying that you might experience such pain/sorrow/disutility, but not worrying that you might miss out on orders of magnitude more pleasure/satisfaction/utility than humans currently expect is one asymmetry to explore. The other is worrying that you might experience it, more than worrying that trillions (or 3^^^3) ems might experience it.
Having a reasoned explanation for your intuitions to be so lopsided regarding risk and reward, and regarding self and aggregate, will very much help you calculate the best actions to navigate between the extremes.
Replies from: Anirandis
↑ comment by Anirandis · 2020-06-22T08:28:58.559Z · LW(p) · GW(p)
It’s more a selfish worry, tbh. I don’t buy that pleasure being unlimited can cancel it out though - even if I were promised a 99.9% chance of Heaven and 0.1% chance of Hell, I still wouldn’t want both pleasure and pain to be potentially boundless.
Answer by Gurkenglas

Presumably, you are asking because you want to calculate the worst-case disutility of the universe, in order to decide whether making sure that it doesn't come about is more important than pretty much anything else.
I would say that this question cannot be properly answered through physical examination, because the meaning of such human words as suffering becomes too fuzzy in edge cases.
The proper approach to deciding on actions in the face of uncertainty of the utility function is utility aggregation. The only way I've found to not run into Pascal's Wager problems, and the way that humans seem to naturally use, is to normalize each utility function before combining them.
So let's say we are 50/50 uncertain between two utility functions: one on which no state of existence is worse than nonexistence, and one on which we should cast aside all other concerns to avert hell. Then after normalization and combination, the exact details will depend on what method of aggregation we use (which should depend on the method we use to turn utility functions into decisions), but as far as I can see the utility function would come out to one that tells us to exert quite an effort to avert hell, but still care about other concerns.
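Here is a minimal sketch of what that could look like, assuming range normalization and a simple 50/50 mixture; the outcomes and utility numbers are invented purely for illustration:

```python
# Two candidate utility functions over a handful of outcomes. u_hell_averse
# assigns hell an enormous raw disutility; u_mundane does not.
outcomes = ["hell", "status quo", "mundane improvement", "utopia"]

u_mundane = {"hell": 0.0, "status quo": 1.0, "mundane improvement": 2.0, "utopia": 3.0}
u_hell_averse = {"hell": -1e9, "status quo": 0.0, "mundane improvement": 0.0, "utopia": 1.0}

def normalize(u):
    """Rescale a utility function so its values span [0, 1]."""
    lo, hi = min(u.values()), max(u.values())
    return {o: (v - lo) / (hi - lo) for o, v in u.items()}

def combine(utilities, credences):
    """Credence-weighted mixture of already-normalized utility functions."""
    return {o: sum(c * u[o] for u, c in zip(utilities, credences)) for o in outcomes}

combined = combine([normalize(u_mundane), normalize(u_hell_averse)], [0.5, 0.5])
for o in outcomes:
    print(o, round(combined[o], 3))
# hell 0.0, status quo ~0.667, mundane improvement ~0.833, utopia 1.0:
# averting hell still gets a large share of the action, but it no longer
# swamps every other concern the way the raw -1e9 term would have.
```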
Answer by avturchin

"When pain is unbearable it destroys us; when it does not it is bearable." (Marcus Aurelius)
The goal of increasing suffering conflicts with the need to preserve an individual who experiences the pain as the same person, which may be a natural limit on its intensity.
Answer by t3rtius

This is quite reductionist and I admit this, but I'm guessing that "suffering" in itself is not something that anyone can quantify so as to know when there is "more" or "less". My guess is that one has to look each time at the physiological effects, and by doing that it is easier to answer whether there is too much or there's room for more.
The same "stimulus" (e.g. grief -- probably one of the most common) will result in varied answers from the psyche and the physical, so I would say that a quantitative appreciation must start from the effects, and not the causes. And once effects are considered, any quasi-precise appreciation will end up in the physical.
Nevertheless, I would really enjoy reading more well-informed answers, as it is a subject that interests me greatly.
1 comment
comment by Dagon · 2020-06-20T17:44:35.685Z · LW(p) · GW(p)
This is an excellent question to be exploring: what are the bounds of utility, and how does it behave at the extremes? Along with how to aggregate (when is two beings' suffering worse than one being suffering more intensely?).
But you can explore just as easily on the other side, and that's both more pleasant and more likely to help you notice ways to improve things by a small amount. What's the maximum level of satisfaction? Why isn't wireheading a good solution?