Greg Linster on the beauty of death
post by Jonathan_Graehl · 2011-10-20T04:47:24.711Z · LW · GW · Legacy · 67 comments
Without death we cannot truly have life. As such, what a travesty of life it would be to achieve a machine-like immortality!
Gray writes the following chilling lines: “If you understand that in wanting to live for ever you are trying to preserve a lifeless image of yourself, you may not want to be resurrected or to survive in a post-mortem paradise. What could be more deadly than being unable to die?” (my emphasis)
via.
Sounds like sour grapes. I'd heard of people holding such sentiments; this is the first time I've actually seen them expressed myself.
67 comments
Comments sorted by top scores.
comment by shokwave · 2011-10-20T04:53:21.753Z · LW(p) · GW(p)
What could be more deadly than being unable to die?
Anything. Quite literally, anything at all. All of the things are more deadly than being unable to die.
↑ comment by Logos01 · 2011-10-20T07:08:01.944Z · LW(p) · GW(p)
Well... I am firmly in the pro-immortality camp. However, I have given thought to the quandary that, for any given immortal being, the probability of becoming eternally trapped at any given moment in an "And I Must Scream" scenario is non-zero (though admittedly vanishingly small). An infinitesimally likely result will occur at least once in an infinite number of trials, however, so... that's a troublesome one.
Better to be immortal with the option of suicide, than to be literally incapable of dying.
↑ comment by shokwave · 2011-10-20T07:37:51.540Z · LW(p) · GW(p)
Better to be immortal with the option of suicide, than to be literally incapable of dying.
Sure, but Linster asked what is more deadly, not what is better. Being immortal with the option of suicide is clearly more deadly than being literally incapable of dying.
↑ comment by pedanterrific · 2011-10-21T01:09:56.684Z · LW(p) · GW(p)
You're thinking about this the wrong way. If you took away a Terminator's ability to die, would it become less deadly? I argue that it would, in fact, become more deadly.
↑ comment by Logos01 · 2011-10-20T08:25:14.317Z · LW(p) · GW(p)
Sure, but Linster asked what is more deadly, not what is better. Being immortal with the option of suicide is clearly more deadly than being literally incapable of dying.
Uncontested. I wasn't writing a rebuttal to his words, but rather elaborating on the thoughts that came to my mind upon reading his words. An extension of the dialogue rather than an argument.
↑ comment by TheOtherDave · 2011-10-20T14:05:29.137Z · LW(p) · GW(p)
Surely the net value of having the option depends on the magnitude of the chance that I will, given the option, choose suicide in situations where the expected value of my remaining span of life is positive? For example, if I have the option of suicide that has only a one-in-a-billion chance every minute of being triggered in such a situation (due to transient depression or accidentally pressing the wrong button or whatever), that will kill me on average in two thousand years. Is that better or worse than my odds of falling into an AIMS scenario?
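A quick sanity check of that figure (a minimal sketch; the one-in-a-billion-per-minute chance is the hypothetical number above, not a measured one):

```python
# With a constant per-minute probability p of a wrongful trigger, the waiting
# time until the first such trigger is geometric, with mean 1/p minutes.
p = 1e-9                                # hypothetical per-minute mistrigger chance
mean_minutes = 1 / p                    # expected minutes until the switch fires by mistake
minutes_per_year = 60 * 24 * 365.25
print(mean_minutes / minutes_per_year)  # ~1900 years, i.e. "on average in two thousand years"
```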
That aside... I think AIMS scenarios are a distraction here. It is certainly true that the longer I live, the more suffering I will experience (including, but not limited to, a vanishingly small chance of extended periods of suffering, as you say). What I conclude from that has a lot to do with my relative valuations of suffering and nonexistence.
To put this a different way: what is the value of an additional observer-moment given an N% chance that it will involve suffering? I would certainly agree that the value increases as N decreases, all else being equal, but I think N has to get awfully large before that value even approximates zero, let alone goes negative (which presumably it would have to before death is the more valuable option).
↑ comment by Logos01 · 2011-10-20T15:13:40.750Z · LW(p) · GW(p)
Surely the net value of having the option depends on the magnitude of the chance that I will, given the option, choose suicide in situations where the expected value of my remaining span of life is positive?
Well -- here's the thing. These sorts of scenarios are relatively easy to safeguard against. For example (addressing the 'wrong button' but also the transient emotional state): require the suicide mechanism to take two thousand years to implement from initiation to irreversibility. Another example -- tailored the other way -- since we're already talking about altering/substituting physiology drastically, given that psychological states depend on physiological states, it would stand to reason that any being capable of engineering immortality could also engineer cognitive states.
I would find the notion of pressing the wrong button accidentally for two thousand years to be ... hardly possible at all, given human timeframes. Especially if the suicide mechanism is tied to intentionality. Could you even imagine actively desiring to die for that long a period, short of something akin to an AIMS scenario?
Also, please note that I later described "AIMS" as "total anti-utility that endures for a duration outlasting the remaining lifespan of the universe". So what I'm talking about is a situation where you have a justified true belief that the remaining span of your existence will not merely not have positive value but rather specifically will consist solely of negative value.
(For me, the concept of "absolute suffering" -- that is, a suffering greater than which I cannot conceive (that's got echoes of the Ontological Argument, but in this case it's not fallacious, since I'm using it only as a definition rather than as an argument for its instantiation) -- is not sufficient to induce zero value/utility, let alone negative value/utility. Suffering that "serves a purpose" is acceptable to me; so even an 'eternity' of 'absolute suffering' wouldn't necessarily be AIMS for me, under these terms.)
The point is: such a scenario -- total anti-utility from a given point forward -- has a negligible but non-zero chance of occurring. And over a period of 10^100 years, that raises a negligible per-instance probability to an accumulated significant one. IF we humans manage to manipulate M-Theory to bypass the closed-system state of the universe and thereby stave off the heat death of the universe indefinitely, then... well, I know that my mind cannot properly grok the sort of scenario we'd then be discussing.
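(A minimal sketch of how that accumulation works; the per-year probability below is an arbitrary illustrative assumption, not a real estimate:)

```python
import math

# If an AIMS-style lock-in has probability p per year, independently each year,
# the chance of at least one occurrence within n years is 1 - (1 - p)**n.
def p_at_least_once(p_per_year, years):
    # log1p/expm1 keep the arithmetic accurate when p is tiny
    return -math.expm1(years * math.log1p(-p_per_year))

print(p_at_least_once(1e-12, 1e6))    # ~1e-6: still negligible after a million years
print(p_at_least_once(1e-12, 1e100))  # ~1.0: effectively certain over 10^100 years
```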
Still -- given the scenario-options of possibly fucking it up and dying early, or being unable to escape a scenario of unending anti-utility... I choose the former. Then again, if I were given the choice of doubling my g and IQ (or, rather, whatever actual cognitive functions those scores are meant to model) for ten years at the price of dying after those ten... I think, right now, I would take it.
So I guess that says something about my underlying beliefs in this discussion, as well.
↑ comment by pedanterrific · 2011-10-20T20:28:00.477Z · LW(p) · GW(p)
For me, it's the irreversibility that's the real issue. For any situation that would warrant pressing the suicide switch, would it not be preferable to press the analgesia switch?
↑ comment by TheOtherDave · 2011-10-21T15:23:30.234Z · LW(p) · GW(p)
Not necessarily. If I believed that my continued survival would cause the destruction of everything I valued, suicide would be a value-preserving option and analgesia would not be. More generally: if my values include anything beyond avoiding pain, analgesia isn't necessarily my best value-preserving option.
But, agreed, irreversibility of the sort we're discussing here is highly implausible. But we're discussing low-probability scenarios to begin with.
↑ comment by pedanterrific · 2011-10-22T02:58:41.411Z · LW(p) · GW(p)
my continued survival would cause the destruction of everything I valued
This is a situation I hadn't thought of, and I agree that in this case, suicide would be preferable. But I hadn't got the impression that's what was being discussed - for one thing, if this were a real worry it would also argue against a two-thousand-year safety interval. I feel like the "Omega threatening to torture your loved ones to compel your suicide" scenario should be separated from the "I have no mouth and I must scream" scenario.
More generally: if my values include anything beyond avoiding pain, analgesia isn't necessarily my best value-preserving option.
True, but the problem with pain is that its importance in your hierarchy of values tends to increase with intensity. Now I'm thinking of a sort of dead-man's-switch where outside sensory information requires voluntary opting-in, and the suicide switch can only be accessed from the baseline mental state of total sensory deprivation, or an imaginary field of flowers, or whatever.
But, agreed, irreversibility of the sort we're discussing here is highly implausible. But we're discussing low-probability scenarios to begin with.
I was mostly talking about the irreversibility of suicide, actually. In an AIMS scenario, where I have every reason to expect my whole future to consist of total, mind-crushing suffering, I would still prefer "spend the remaining lifetime of the universe building castles in my head, checking back in occasionally to make sure the suffering hasn't stopped" to "cease to exist, permanently".
Of course, this is all rather ignoring the unlikelihood of the existence of an entity that can impose effectively infinite, total suffering on you but can't hack your mind and remove the suicide switch.
↑ comment by TheOtherDave · 2011-10-21T15:15:43.153Z · LW(p) · GW(p)
For convenience I refer hereafter to a the-future-is-solely-comprised-of-negative-value scenario as an "AIMS" scenario and to an I-trigger-my-suicide-switch-when-the-future-has-net-positive-expected-value scenario as an "OOPS" scenario.
Things I more-or-less agree with:
I don't really see why positing "solely of negative value" is necessary for AIMS scenarios. If I'm confident that my future unavoidably contains net negative value, that should be enough to conclude that opting out of that future is a more valuable choice than signing up for it. But since it seems to be an important part of your definition, I accept it for the sake of discussion.
I agree that suffering is not the same thing as negative value, and therefore that we aren't necessarily talking about suffering here.
I agree that an AIMS scenario has a negligible but non-zero chance of occurring. I personally can't imagine one, but that's just a limit of my imagination and not terribly relevant.
I agree that it's possible to put safeguards on a suicide switch such that an OOPS scenario has a negligible chance of occurring.
Things I disagree with:
You seem to be suggesting either that it's possible to make the OOPS scenario likelihood not just negligible, but zero. I see no reason to believe that's true. (Perhaps you aren't actually saying this.)
Alternatively, you might be suggesting that the OOPS scenario is negligible, non-zero, and not worthy of attention, whereas the AIMS scenario is negligible, non-zero, and worthy of attention. I certainly agree that IF that's true, then it trivially follows that giving people a suicide switch creates more expected value than not doing so in all scenarios worthy of attention. But I see no reason to believe that's true either.
↑ comment by Logos01 · 2011-10-22T21:23:28.168Z · LW(p) · GW(p)
You seem to be suggesting either that it's possible to make the OOPS scenario likelihood not just negligible, but zero.
Specific versions of "OOPS". I don't intend to categorize all of them that way.
Alternatively, you might be suggesting that the OOPS scenario is negligible, non-zero, and not worthy of attention, whereas the AIMS scenario is negligible, non-zero, and worthy of attention.
Well, no. It has more to do with the expected cost of failing to account for either variety at least in principle. An OOPS scenario being fulfilled means the end of potential and the cessation of gained utility. An AIMS scenario being averted means the aversion of constantly negative utility. (We can drop the "solely" so long as the 'net' is kept.)
↑ comment by dbaupp · 2011-10-20T07:33:26.568Z · LW(p) · GW(p)
This is a version of Pascal's mugging.
And also, given what we know of the universe, I don't think there is a method of becoming trapped with zero chance of escape. Trapped for a very long period, maybe, but not eternally.
↑ comment by Incorrect · 2011-10-20T14:13:36.577Z · LW(p) · GW(p)
What if you get trapped in a loop of mind-states, each horrible state leading to the next until you are back where you started?
↑ comment by Dallas · 2011-10-20T16:18:14.825Z · LW(p) · GW(p)
You probably could/would subjectively end up in a non-looping state. After all, you had to have multiple possible entries into the loop to begin with. Besides, it's meaningless to say that you go through the loop more than once (remember, your mind can't distinguish which loop it is in, because it has to loop back around to an initial state).
↑ comment by Incorrect · 2011-10-20T16:40:32.660Z · LW(p) · GW(p)
Whether you have multiple possible entries into the loop is irrelevant, what is important is whether you have possible exits.
As to your second point, does that mean it is ethical to run a simulation of someone being tortured as long as that simulation has already been run sometime in the past?
↑ comment by Dallas · 2011-10-20T17:53:38.070Z · LW(p) · GW(p)
Possible exits could emerge from whatever the loop gets embedded in. (see 3 below)
Assuming a Tegmarkian multiverse, if it is mathematically possible to describe an environment with someone being tortured, in a sense it "has happened". Whether or not a simulation which happens to have someone being tortured is ethical to compute is hard to judge. I'm currently basing my hypothetical utility function on the following guidelines:
1. If your universe is causally necessary to describe theirs, you are probably responsible for the moral consequences in their universe.
2. If your universe is not causally necessary to describe theirs, you are essentially observing events which are independent of anything you could do. Merely creating an instance of their universe is ethically neutral.
3. One could take information from a causally independent universe and put it towards good ends; e.g. someone could run a simulation of our universe and "upload" conscious entities before they become information-theoretically dead.
Of course, these guidelines depend on a rigorous definition of causal necessity that I currently don't have, but I don't plan to run any non-trivial simulations until I do.
↑ comment by Logos01 · 2011-10-20T08:40:08.826Z · LW(p) · GW(p)
This is a version of Pascal's mugging.
How do you figure? I can see the relationship -- the discussion of vanishingly small probabilities. The difference, however, is that Pascal's Mugging attempts to apply those small probabilities to deciding a specific action.
There is a categorical difference, I feel, between stating that a thing could occur and stating that a thing is occurring. After all, if there were an infinite number of Muggings, at least one of the muggers conceivably could actually be telling the truth.
And also, given what we know of the universe, I don't think there is a method of becoming trapped with zero chance of escape. Trapped for a very long period, maybe, but not eternally.
s/eternally/"the remaining history of the universe"/, then. The problem remains equivalent as a thought-experiment. The point being -- as middle-aged suicides themselves demonstrate: there is, at some level, in every conscious agent's decision-making processes, a continuously ongoing decision as to whether it would be better to continue to exist, or to cease existing.
Being denied that capacity for choice, it seems highly plausible that over a long enough timeline, nearly anyone should eventually have a problem with this state of affairs.
↑ comment by Nisan · 2011-10-20T09:04:25.691Z · LW(p) · GW(p)
An infinitessimally likely result will occur at least once in an infinite number of trials
Actually, it's not guaranteed that an unlikely thing will happen at all, if the instantaneous probability of that thing diminishes quickly enough over time.
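(A concrete illustration of Nisan's point, under an assumed schedule of shrinking probabilities:)

```python
import math

# If the event's probability at step n is p_n = 0.01 * 2**-n, then the chance it
# *never* happens is the product of (1 - p_n), which converges to a positive number.
log_p_never = sum(math.log1p(-0.01 * 2 ** -n) for n in range(1, 200))
print(math.exp(log_p_never))  # ~0.990: about a 99% chance the event never occurs, even given unlimited time
```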
↑ comment by lessdazed · 2011-10-20T07:27:09.922Z · LW(p) · GW(p)
trapped at any given moment
Doesn't associating your identity with (anything similar enough to) a set of physical parts rather than (pure magic or) a pattern of parts going through time imply a belief in non-local physics?
I think a suspended (I mean not just frozen, but not just vitrified either, any stasis will do, and I mean my argument to hold up against more physically neutral forms of stasis than those) version of my body that actually will not be revived is as much me as a grapefruit is. If physical parts do not vary in their relationship to each other, how is there supposed to be experience of any kind?
Alternatively, if one is made magically invincible, one could be entombed in concrete for thousands of years - forever, even, after the heat death of (the rest of) the universe, if one is sent off into deep space in a casing of concrete. Is that what you say has a non-zero probability? Or are you talking about a multi-galaxy civilization devoted to keeping one individual under torture for as long as physically possible until the heat death of the universe?
I will add that I am strongly in the anti-"immortality" camp, as that word should not be used. I am in the anti-mortality camp, that's how I'd put it.
↑ comment by Logos01 · 2011-10-20T08:26:58.515Z · LW(p) · GW(p)
Doesn't associating your identity with (anything similar enough to) a set of physical parts rather than (pure magic or) a pattern of parts going through time imply a belief in non-local physics?
As your individual identity can only be associated with a specific set of physical parts at any given time, I'm pretty sure I don't follow your meaning. I also find myself confused by the concept of "non-local physics". Elaborate?
↑ comment by lessdazed · 2011-10-22T02:45:17.516Z · LW(p) · GW(p)
If the time it normally takes for a signal to go from your toe to your brain is t, and we consider your experience over one half t, your lower leg is irrelevant. Your experiences during that time slice of feeling something in your foot are due to signals propagated before one half t ago. Similarly, if we consider your experience over t, but double the time every function of your body takes, your lower (entire? I'm not a biologist) leg would be irrelevant. That is, your lower leg could be replaced with a grapefruit tree and you wouldn't feel the difference (assume you're not looking at it).
The limit of that is stopping signals from traveling entirely, at which point your entire body is irrelevant. I think someone frozen in time would not have any experience rather than be eternally trapped in a moment. If someone is at a point where their experiences would be the same were they replaced by a tree, they're not having any.
There's no reason it would be logically impossible to harness the resources of galaxies towards keeping you alive and in pain, but eventually the second law of thermodynamics saves you.
↑ comment by Logos01 · 2011-10-22T21:29:18.400Z · LW(p) · GW(p)
If the time it normally takes for a signal to go from your toe to your brain is t, and we consider your experience over one half t, your lower leg is irrelevant. [...]
I don't follow the meaning of what it is you are trying to convey here. Furthermore; how does any of that lead to "non-local physics"? I sincerely am not following whatever it is you are trying to say.
There's no reason it would be logically impossible to harness the resources of galaxies towards keeping you alive and in pain, but eventually the second law of thermodynamics saves you.
There is a fine art to the linguistic tool called the segue. This is a poor example of it.
That being said -- the second law of thermodynamics is only applicable to closed systems. We assume the universe is a closed system because we have no evidence to the contrary as yet. It remains conceivable, however, that future societies might devise a means of bypassing this particular problem.
↑ comment by lessdazed · 2011-10-23T15:04:10.563Z · LW(p) · GW(p)
There is a fine art to the linguistic tool called the segue. This is a poor example of it.
I can see how "There's no reason it would be logically impossible to harness the resources of galaxies towards keeping you alive and in pain, but eventually the second law of thermodynamics saves you," looks random. I was contrasting the logically possible worst case scenario of "eternal unable to scream horror" with what I think is the physically possible worst case scenario, where you might think the logically and physically possible are the same here.
If there is a source of infinite energy, I agree one could be tortured forever - but even still it couldn't be a frozen single moment of torture. The torturers would have to cycle one through states.
how does any of that lead to "non-local physics"?
It applies because of the speed-limit issue. It's just saying the nerve speed-limit analogy can't be circumvented by doing things infinitely fast, rather than at nerve speed (or light speed). But the analogy is the central thing.
I will try again. What would it be like if all of your brain's operations took twice the time they normally do? It would look like everything was happening quickly; one would experience a year as six months. The limit of that is that at infinite slowness, one would experience infinitely little.
↑ comment by Logos01 · 2011-10-23T15:16:19.475Z · LW(p) · GW(p)
If there is a source of infinite energy, I agree one could be tortured forever - but even still it couldn't be a frozen single moment of torture. The torturers would have to cycle one through states.
I fail to recognize any reason why this would be relevant or interesting to this conversation. It's trivially true, and addresses no point of mine, that's for certain.
I will try again. What would it be like is all of your brain's operations took twice the time they normally do? It would look like everything was happening quickly, one would experience a year as six months. The limit of that is at infinite slowness, one would experience infinitely little.
... what in the blazes is this supposed to be communicating? It's a trivially true statement, and informs absolutely nothing that I can detect towards the cause of explaining where or how "non-local physics" comes into the picture.
Would you care to take a stab at actually expounding on your claim of the necessity of believing in this "non-local physics" you speak of, and furthermore explaining what this "non-local physics" even is? So far, you keep going in many different directions, none of which even remotely address that issue.
↑ comment by lessdazed · 2011-10-23T15:21:28.552Z · LW(p) · GW(p)
Sure. It means that one can't construct a brain by having signals go infinitely fast; the "local" means that to get from somewhere to somewhere else, one has to go through the intermediary space. It was a caveat I introduced because I thought it might be needed, but it really wasn't. My main point is that I don't think a person could be infinitely tortured by being frozen in torture, which leads to the interesting point that people shouldn't be identified with objects in single moments of time, such as bodies, but with their bodies/experience going through time.
↑ comment by Logos01 · 2011-10-20T08:23:27.479Z · LW(p) · GW(p)
Is that what you say has a non-zero probability?
I don't care to constrain the particulars of how "And I Must Scream" is defined, save that it be total anti-utility and that it be without escape. Whatever particulars you care to imagine in order to aid in understanding this notion are sufficient.
I will add that I am strongly in the anti-"immortality" camp, as that word should not be used. I am in the anti-mortality camp, that's how I'd put it.
Please elaborate on your feelings regarding the problems of the word "immortality". I am agnostic as to your perceptions and have no internal clues to fill in that ignorance.
↑ comment by lessdazed · 2011-10-22T13:29:14.160Z · LW(p) · GW(p)
Just what Robin Hanson said.
↑ comment by Logos01 · 2011-10-22T21:45:03.669Z · LW(p) · GW(p)
Took me a couple of readings to get the gist of that article. Frankly, it's rather... well, I find myself reacting poorly to it.
After all -- is not "giving as many years as we can" the same, quantitatively, as saying that the goal is clinical immortality?
comment by wedrifid · 2011-10-20T05:10:33.628Z · LW(p) · GW(p)
Gray writes the following chilling lines: “If you understand that in wanting to live for ever you are trying to preserve a lifeless image of yourself, you may not want to be resurrected or to survive in a post-mortem paradise. What could be more deadly than being unable to die?” (my emphasis)
Gray... is not good with the thinking.
Why are we hearing quotes from an economics grad student who is clearly a fool? Aren't there higher status, better qualified people who say silly things about the value of future life that we can quote and ridicule? (Robin Hanson springs to mind.)
comment by billswift · 2011-10-20T05:03:22.389Z · LW(p) · GW(p)
For anyone that hasn't seen it yet, here's Nick Bostrom's Fable of the Dragon-Tyrant.
comment by buybuydandavis · 2011-10-20T10:08:13.402Z · LW(p) · GW(p)
It's promising that people have to resort to such gibberish to argue against death.
Without death we cannot truly have life.
What wooo. Is this supposed to mean anything?
What could be more deadly than being unable to die?
More wooo. Perhaps being unable to die would be undesirable, but I think most of us are aiming more at being able to die when we choose to, and not before.
comment by byrnema · 2011-10-20T16:02:56.870Z · LW(p) · GW(p)
So far, there is uniform agreement in the comments of this post. What a silly thing to say!
Have you really thought about it though? You have to understand a position before you can dismiss it. I think it would be interesting to see comments from the other point of view, to then see the counter-arguments.
↑ comment by [deleted] · 2011-10-20T19:20:48.975Z · LW(p) · GW(p)
Since you seem to want to have people attempt to comment as advocates for death, and generate possible counter arguments to provoke new angles of thought, I accept your challenge and will try to come up with a plausible-sounding argument for death and a self-contained counter argument that you can consider.
Beginning Death's advocate:
I'll start off with a quote from Hanson: http://www.overcomingbias.com/2011/10/limits-of-imagination.html
Our finite universe simply cannot continue our exponential growth rates for a million years. For trillions of years thereafter, possibilities will be known and fixed, and for each person rather limited.
The article goes into its own details, but the general point is: exponential growth eventually ends unless we simply have infinite space/resources. Hanson has several examples, and he has run a variety of numbers.
So clearly, we simply can't have our immortal population producing new immortal babies at a constant rate, unless we first gain such unlikely powers as generating new universes filled with such useful things as empty space and atoms out of thin air.
I'm willing to accept immortality as physically possible out to billions of years, but it seems stagnant. Let's call this "Immortal Steady Population." And I'm willing to accept 1% of the population forming new babies for billions of years, without immortality. Let's call this "Death, but Babies."
But I'm not willing to accept BOTH as physically possible. We run into Hanson's exponential limits far before we reach billions of years. We can have, at best, "Immortal Steady Population" or "Death, but Babies." But we can't have "Immortals with Immortal Babies" -- at least, not for any substantial amount of time, without infinite resources.
Our very existence revolves around spreading our genes and making new babies. I can't accept not having that.
Therefore, to fulfill our life's purpose as baby makers, we must allow ourselves to die at some point. Without death, life has no purpose.
Ending Death's advocate.
Starting possible counter argument. I'll call it "The Fusion Argument":
Imagine if two immortals could fuse together in the future. You would take two unique individuals, and have them combine together into a single individual, who only needed one person's worth of resources, and was still immortal. You then combine their genetic material, and make a fresh immortal baby, without the accompanying memories. So I and a fusion partner could accept being only "half people" by fusing into one being to diminish our resource use, and as a single "parent entity" we can have a new baby. The overall population doesn't change, but no one individual actually dies. Instead, the "death" if it even makes sense to call it that anymore, is shared between the two people who want the new baby, who are forced to become one person to make up for what would otherwise be an untenable resource demand.
Also you could rearrange this further: 3 parents form 2 people and 1 new baby? That's fine as well. 3 parents form 1 existing person and 2 new babies? If that's what they want. Two people fuse into one, but have the baby also take genetic material from a third person who is not personally diminished? Needless to say, there are a lot of possible combinations here.
Ending possible counter argument.
So given the choices I mentioned:
1: "Immortals with Immortal Babies." (Impractical for long periods of time as per Hanson, without infinite resources)
2: "Immortal Steady Population" (No new Babies! Doesn't appeal to people who feel the need to make new life.)
3: "Death, but Babies" (A standard argument for the way things are, but doesn't allow for immortality.)
4: "The Fusion Argument" (A demonstration that there are other ways of handling things like death and new life.)
The Fusion Argument sounds like a pretty good deal to me. I gain all the benefits of immortality to begin with.
And if I want, I CAN have babies to make new life, provided I can find someone else to help and we can work out an appropriate division of our diminished cognitive/biological resources.
There might even be further possible states, although none immediately come to mind.
Do you find the argument and counter argument as presented interesting?
↑ comment by byrnema · 2011-10-20T20:53:37.442Z · LW(p) · GW(p)
Do you find the argument and counter argument as presented interesting?
I really do. I was fairly convinced by Death's Advocate, but then the counter argument blew it out of the water. Fusing to form babies sounds like a fun thing to look forward to, and it mediates the problem of how everyone who wants to can create a completely new consciousness without exponential growth.
Generalizing, I imagine that any 'resource-type' problem could be solved with some ingenuity. That's not the problem I anticipated, however.
The problem that I anticipate is one of goals and values. If we should be immortal -- truly invincible, for example through a medium that is indestructible or as information that is easily stored and copied and thus disposable -- then what goals could we have? I wonder then if there would be anything to do?
Most of our current goals regard self-preservation or self-continuation in one way or another. I suppose another set of goals regards aesthetics, creating and observing art or something like that. Still, this activates associations in my brain with building towers to nowhere and wireheading.
I'm concerned that this 'argument' is just nihilism popping its head out where it spots an opportunity, due to the extreme 'far-ness' of the idea of immortality, but nevertheless, I think this is closer to what Gray was driving at with his claim that "immortality is lifeless".
↑ comment by Dreaded_Anomaly · 2011-10-21T00:15:11.138Z · LW(p) · GW(p)
If we should be immortal -- truly invincible, for example through a medium that is indestructible or as information that is easily stored and copied and thus disposable -- then what goals could we have? I wonder then if there would be anything to do?
From HP:MoR, Chapter 39, Pretending to be Wise, Pt. 1:
"I have lived a hundred and ten years," the old wizard said quietly (taking his beard out of the bowl, and jiggling it to shake out the color). "I have seen and done a great many things, too many of which I wish I had never seen or done. And yet I do not regret being alive, for watching my students grow is a joy that has not begun to wear on me. But I would not wish to live so long that it does! What would you do with eternity, Harry?"
Harry took a deep breath. "Meet all the interesting people in the world, read all the good books and then write something even better, celebrate my first grandchild's tenth birthday party on the Moon, celebrate my first great-great-great grandchild's hundredth birthday party around the Rings of Saturn, learn the deepest and final rules of Nature, understand the nature of consciousness, find out why anything exists in the first place, visit other stars, discover aliens, create aliens, rendezvous with everyone for a party on the other side of the Milky Way once we've explored the whole thing, meet up with everyone else who was born on Old Earth to watch the Sun finally go out, and I used to worry about finding a way to escape this universe before it ran out of negentropy but I'm a lot more hopeful now that I've discovered the so-called laws of physics are just optional guidelines."
↑ comment by Desrtopa · 2011-10-20T22:22:37.293Z · LW(p) · GW(p)
The problem that I anticipate is one of goals and values. If we should be immortal -- truly invincible, for example through a medium that is indestructible or as information that is easily stored and copied and thus disposable -- then what goals could we have? I wonder then if there would be anything to do?
Personally, I can't think of a single goal of mine that is strictly contingent on my dying at any point in the future.
↑ comment by byrnema · 2011-10-20T23:25:21.803Z · LW(p) · GW(p)
I guess I meant more long-term goals.
What would be the purpose of having and controlling resources if you were already going to live forever?
↑ comment by pedanterrific · 2011-10-21T01:21:28.712Z · LW(p) · GW(p)
I suspect that this sort of approach may come from imagining "immortality" as a story that you're reading - where's the suspense, the conflict, if there's no real danger? But if you instead imagine it as being your life, except it goes on for a longer time, the question becomes - do you actually enjoy being confronted with mortal danger?
Or, put another way: I'd lay odds that most people would have more fun playing Dungeons and Dragons than Russian Roulette.
↑ comment by Desrtopa · 2011-10-21T00:33:19.061Z · LW(p) · GW(p)
Improving your quality of life?
Having and controlling resources isn't a terminal goal for me, and I suspect that anyone who treats it as one has lost track of what they really want.
↑ comment by Nornagest · 2011-10-20T22:31:48.656Z · LW(p) · GW(p)
If we should be immortal -- truly invincible, for example through a medium that is indestructible or as information that is easily stored and copied and thus disposable -- then what goals could we have? [...] Most of our current goals regard self-preservation or self-continuation in one way or another.
If we self-consciously value the futile pursuit of self-continuation over actual self-continuation, it seems to me that something's gone seriously wrong somewhere.
I also think that some serious inferential distance problems show up for almost everyone when they start thinking about perfect immortality, ones that tend to be wrongly generalized to various imperfect forms of life extension. But that's less immediately relevant.
↑ comment by byrnema · 2011-10-20T23:19:37.285Z · LW(p) · GW(p)
If we self-consciously value the futile pursuit of self-continuation over actual self-continuation, it seems to me that something's gone seriously wrong somewhere.
Just because that would be a ridiculous way to be, doesn't mean it isn't that way. (We weren't designed, after all.) Suppose underlying every one of our goals is the terminal goal to continue forever. (It seems reasonable that this could be the purpose of most of our goals, since that would be the goals of evolution.) Then it makes sense that we might report 'shrug' in response to the question, 'what goals would you have if you were already in a state of living forever'?
It would be necessary to just observe what the case is, regarding our goal structure. It seems that some people (Gray, and myself to some extent) anticipate that life would be meaningless. I think this might be a minority view, but still perhaps in the tens of percents?
I also allow that it is one thing to anticipate what our goals would be versus what they actually would be. I anticipated I wouldn't have any goals if I stopped believing in God, but then I still did.
↑ comment by Nornagest · 2011-10-21T00:11:36.106Z · LW(p) · GW(p)
Just because that would be a ridiculous way to be, doesn't mean it isn't that way. (We weren't designed, after all.)
That lack of design is precisely what makes me skeptical of the idea that the pursuit of self-continuation can be generalized that far. When I look at people's goals, I see a lot of second- or third-degree correlates of self-continuation, but outside the immediate threat of death they don't seem to be psychologically linked to continued life in any deep or particularly consistent way. Compared to frameworks like status signaling or reproductive fitness, in fact, self-preservation strikes me as a conspicuously weak base for generalized human goal structure.
All of which is more or less what I'd expect. We are adaptation-executors, not fitness-maximizers. Even if self-continuation was a proper terminal value, it wouldn't be very likely that whatever component of our minds handles goal structure would actually implement that in terms of expected life years (if it did, we wouldn't be having this conversation); in the face of perfect immortality we're more likely to be left with many disconnected sub-values pointing towards survival-correlates which very likely could still be pursued. Contemporary people still happily pursue goals which were rendered thoroughly suboptimal upon leaving the EEA, after all; why should this be different?
And on top of all that, I suspect the entire question's largely irrelevant; since individual self-preservation is only loosely coupled to genetic, memetic, or societal fitness, I'd be very surprised if it turned out to be a coherent top-level goal in many people. The selection pressures all point in other directions.
↑ comment by byrnema · 2011-10-21T20:42:27.053Z · LW(p) · GW(p)
Yeah, I do agree with you. I would probably continue to care about all the same things if I were to live forever, such as getting along with my coworkers and eating yummy food. I might have time to seek training in a different profession, but I would still be working, and I certainly don't eat now just to keep alive.
I'll locate this sympathetic feeling I have with Gray as just another version of the nihilistic, angsty tendencies some of us wrestle with.
I think it would be nice to hear in more detail exactly what Gray meant...
↑ comment by Desrtopa · 2011-10-22T06:27:04.371Z · LW(p) · GW(p)
It seems reasonable that this could be the purpose of most of our goals, since that would be the goals of evolution.
Insofar as evolution can be said to have goals, continuation of the individual is definitely not one of them.
↑ comment by Will_Sawin · 2011-10-23T21:21:47.204Z · LW(p) · GW(p)
However, empirically, I don't think this is the case. Since we are so messy, we probably don't have a feature to make desires for sex, power, companionship, exercise, etc. vanish when we attain immortality. We will probably just go on desiring those things, perhaps in somewhat different ways.
↑ comment by JoshuaZ · 2011-10-20T22:55:11.364Z · LW(p) · GW(p)
This seems to fit a different context than the sort of argument being made by Linster. The primary objection to Linster is not a disagreement on what is possible but a disagreement on what is good, moral or just. Your argument is an argument primarily about possibility based on the physical constraints of our universe. I don't think many people here will disagree with your assessment.
If, for example, it turned out that we could bend space and violate conservation of energy using some advanced technology, making the limitations you point out obsolete, Linster's argument seems to be that immortality would still be a very bad thing.
comment by ahaspel · 2011-10-20T21:00:41.796Z · LW(p) · GW(p)
Linster I will not trouble to defend, but you are reading Gray uncharitably. By "more deadly" he surely means "duller" ("dull" is an adjective to which "deadly" is often yoked). The claim that immortality would be boring may be false, but it is not obviously ridiculous.
↑ comment by pedanterrific · 2011-10-21T01:15:25.710Z · LW(p) · GW(p)
Even assuming your interpretation is correct, the claim is not that immortality would be boring. The claim is that nothing could be more boring than immortality. I would argue that an infinite series of observer-moments, each with a non-zero chance of being interesting, is ridiculously obviously less boring than a finite series of observer-moments concluding with ceasing to exist.
comment by lessdazed · 2011-10-23T10:15:57.514Z · LW(p) · GW(p)
Someone should write an algorithm that makes DEEEEEPLY PROFOUND statements. Have it accept or construct for itself any trivially true sentence, such as...
What could be more healthy than being unable to die?
What could be more deadly than being able to die?
What could be less deadly than being unable to die?
...and replace a word with an antonym, or add or elide a negation, and out comes something like...
What could be more healthy than being unable to live?
What could be less unhealthy than being able to die?
What could be more deadly than being able to live?
What could be more deadly than being unable to die?
Ta-da!
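(For what it's worth, the generator really is only a few lines. A toy sketch; the antonym table and example sentence are made-up illustrations:)

```python
import random

# Toy "profundity generator": take a trivially true sentence and flip one word
# to its antonym (the table below is an illustrative stub, not a real lexicon).
ANTONYMS = {
    "deadly": "healthy", "healthy": "deadly",
    "able": "unable", "unable": "able",
    "more": "less", "less": "more",
}

def profundify(trivially_true: str) -> str:
    words = trivially_true.split()
    flippable = [i for i, w in enumerate(words) if w in ANTONYMS]
    i = random.choice(flippable)   # pick one flippable word at random
    words[i] = ANTONYMS[words[i]]  # swap it for its antonym
    return " ".join(words)

print(profundify("What could be more deadly than being able to die?"))
# e.g. "What could be more deadly than being unable to die?"  Ta-da!
```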
comment by James_Miller · 2011-10-21T01:35:43.163Z · LW(p) · GW(p)
From an article about a Steve Jobs biography:
"Jobs confided in Sculley that he believed he would die young, and therefore he needed to accomplish things quickly so that he would make his mark on Silicon Valley history. 'We all have a short period of time on this earth,' he told the Sculleys. 'We probably only have the opportunity to do a few things really great and do them well. None of us has any idea how long we're gong to be here nor do I, but my feeling is I've got to accomplish a lot of these things while I'm young.'"
↑ comment by pedanterrific · 2011-10-21T01:59:37.142Z · LW(p) · GW(p)
Post: An academic said "Life without death would be boring."
Comment: Someone famous said "Death is a source of motivation."
Reply: ...what's your point?
↑ comment by James_Miller · 2011-10-21T15:35:49.649Z · LW(p) · GW(p)
For Jobs, at least, the certainty of death made him accomplish more in life, and so, if we define "deadly" as lethargic (which I suspect is what the author meant), it made his life less deadly.
comment by Rain · 2011-10-31T13:54:59.686Z · LW(p) · GW(p)
For some reason, a lot of people who think of "immortality" think of things like this short story by Ursula K. Le Guin. We need more positive, non-curse long-life stories.
comment by Morendil · 2011-10-20T13:38:08.775Z · LW(p) · GW(p)
Reassuringly, most of the commenters are pointing out the flaws in the poster's praise for the book.
In the Amazon reviews I found an interesting partial quote from the foreword to Gray's book:
...it was the rejection of rationalism that gave birth to scientific enquiry. Ancient and medieval thinkers believed the world could be understood by applying first principles. Modern science begins when observation and experiment come first, and the results are accepted even when what they show seems to be impossible. In what might seem a paradox, scientific empiricism - reliance on actual experience rather than supposedly rational principles - has very often gone with an interest in magic.
↑ comment by JoshuaZ · 2011-10-20T22:59:12.432Z · LW(p) · GW(p)
Eh, the quote isn't as interesting as it might seem. He's using "rationalism" in the quote to mean the position contrasted with empiricism. There's a debate in classical philosophy over whether one should reason from first principles or rely on empirical observation (with almost everyone largely supporting the first). Some of the people in the late Middle Ages swung too far in the other direction. The important realization is that both are important. One can't get too far without using both.
As to the comment about magic, this shouldn't be surprising. Early natural philosophers were doing the natural thing there, testing hypotheses that looked reasonable to them. They could not have been aware of how much humans naturally construct bad hypotheses involving sympathetic magic and similar issues. Nor for that matter did they know enough about the nature of the universe to appreciate that magic would actually violate basic principles and meta-principles of how the world seems to work. Given what they knew, examining magic and trying to get it to work would be the correct thing to do.
comment by Dallas · 2011-10-20T13:48:58.782Z · LW(p) · GW(p)
Why do these people go out of their way to justify other people dying? It clearly signals you as an existential threat, and indeed, one particularly vicious to intelligences on non-biological substrates. You might as well go on an armed rampage.
comment by Jonathan_Graehl · 2011-10-20T04:47:51.334Z · LW(p) · GW(p)
(speaking of (brain) death: sad hit+run + bystander effect in a poor country)
↑ comment by pedanterrific · 2011-10-21T03:29:41.224Z · LW(p) · GW(p)
Say it with me now: TRIGGER WARNING