Over-applying rationality: Indefinite lifespans
post by Bart119 · 2012-05-25T17:25:41.763Z · LW · GW · Legacy · 14 comments
UPDATE: One commenter said that arguments against the desirability of indefinite lifespans, and their rebuttals, have appeared before on LW and elsewhere. I am very interested in links to the best such discussions. If I'm going over old ground, I'd much appreciate a kind soul pointing me to the prior art.
-----------
I am very impressed with this site in its goal of outlining cognitive biases and seeing how they apply in everyday situations. When you're trying to decide how to spend money to alleviate human misery, it works. Yeah, it's better to save 50,000 people than 5,000. The two alternatives concern the same moral intuitions. When faced with a specific choice among alternatives, you may find that the tools of rationality will apply and tell you what to do, which might be contrary to what you would have done without such analysis.
But when I see people trying to use Bayesian analysis for bigger questions beyond this, I think there is a substantial danger of being led astray by the method. When you can't find a clear way to analyze the situation and you are making low-confidence probability estimates of alternative futures and their utility, you'd do better to just put your rationality toolbox back on the shelf and decide the way you've always decided: gut feelings, intuition, doing what everyone else does, etc.
Let's take as a case study the popular view on LW that living as long as possible is a good thing. First, within the range of currently common lifespans, it's a good thing to live a longer, healthier life; that is uncontroversial.
But judging from the LW posts I've read, the prospect that science could reach a point where people could live indefinitely long is hailed as a great and noble goal. I think it would be terrible.
First, let me distinguish an indefinite lifespan from true immortality. Is there anyone here who thinks true immortality is within reach? The sun will go red giant, making earth uninhabitable. If we hop from star to star, we get a little longer. But there's stuff like heat death and entropy and all. Not to mention the accumulation of small, mundane risks over a very long time. Eternity is one friggin' long time.
If you don't have true immortality, you have a longer lifespan, and then you die. You still have to face the same profoundly unsettling issue. One wry formulation might be: whenever you do die, you're saving yourself the trouble of dying later. Different lifespans all end with the same unsettling matter of personal extinction. (Another thought: mortality is the most salient and immediate roadblock to finding a more satisfying meaning in life, but it's just the first one; if it were removed, we'd find others beyond it.) If you live 500 years instead of 100, you haven't achieved anything special. You haven't cheated death. You've just got an extra 400 years of living: the mundane stuff of eating, sleeping, thinking, seeing beautiful sunsets, chatting with friends, etc., and of course the less pleasant parts too.
The ecological integrity of the world is already under severe strain. Perhaps with technological and political improvements we could increase the number of people who can live sustainably on earth by some constant factor, but that doesn't affect the current argument. Our population is limited. (You may think we're going to personally take off to colonize the stars. Let's assume for now that it can't be done.)
Given a population limit, the effect of people living 500 or 50,000 years is that the available slots will soon be filled, and reproduction would have to be seriously curtailed. Children would be very rare.
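For scale, using made-up round numbers: with a cap of 10 billion people in steady state, 500-year lifespans would allow only about 10^10 / 500 = 20 million births per year worldwide, against the roughly 130 million born today; at 50,000-year lifespans, about 200,000 births per year.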
I think that no matter how healthy they are, a world full of people who are over 100 or 10,000 years old with very few children would be a place that 'just isn't right'.
First, I estimate that the human mind isn't designed to live beyond 100 years (if that) and will tend to become unhappy. Such people think the same thoughts over and over. They get bored (a lot of people today get bored at 50). They still know they're going to die someday.
Second, they live in a world without children. (One thing I've never seen in a LessWrong survey is the proportion who are parents -- given a highly educated group predominantly in their 20s, I would estimate it is very low).
And aside from their own personal boredom and personal lack of children in their lives, they know they live in a world where everyone else is in the same position. It's an ossifying world.
Now let's put rationality into it. I imagine a Bayesian feeling comfortable and at home constructing an equation with two key quantities: the number of people and the number of happy, productive years they get to live. Multiplication is in order. I'm not sure how the argument goes after that: potential future lives that don't happen don't get to add to the utility (do they? Or at a discounted rate?). Even if they do, the utility of a new life has to be weighed against the lost utility of an existing person dying. We can see the equation coming out in favor of extending life as long as possible.
The argument on the other side can also be framed in Bayesian terms. My estimate is that the utility of a large majority of these people who are over 200 years old is going to be very small. We can multiply out their utilities and conclude that the world will be a happier place with a mix of children, young people, and people croaking after 90 years of happy, productive life.
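To make the shape of both framings concrete, here is a minimal sketch in Python. Every number in it is invented purely for illustration; the population cap, the time horizon, and the per-year utilities are assumptions of mine, not estimates anyone has defended:

```python
# Toy version of the utility comparison above. All numbers are invented
# for illustration; none of them is a real estimate.

POPULATION_CAP = 10_000_000_000  # assumed hard limit on concurrent people
HORIZON_YEARS = 100_000          # arbitrary window over which to total utility

def total_utility(avg_utility_per_year: float) -> float:
    # Under a hard population cap every slot is always occupied, so total
    # person-years over the horizon is fixed regardless of lifespan. Total
    # utility therefore reduces to person-years times the average utility
    # per year lived.
    return POPULATION_CAP * HORIZON_YEARS * avg_utility_per_year

# Pro-extension framing: 50,000-year lives stay near full utility per year.
print(f"{total_utility(0.9):.3e}")  # 9.000e+14

# My framing: a population that is mostly centuries old averages far less
# utility per year than a mix of children, young people, and 90-year-olds.
print(f"{total_utility(0.3):.3e}")  # 3.000e+14
```

Notice what the sketch shows: once the population is capped, total person-years is a constant, and the whole disagreement collapses into a single assumed parameter, the average utility per year, which is exactly the intuition being argued over in the first place.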
I imagine a Bayesian frowning at this analysis. It seems imprecise. I could, I suppose, assign some sort of utility-reduction weight to those various factors and multiply them out, but it isn't really going to make the Bayesian very happy. It's not going to make me very happy either. I would rather just consider the situation as a whole and assign a low utility to the bulk of the population that's hundreds of years old, rather than break it into parts.
At one level, my argument with the pro-indefinite-lifespan faction is just a difference in what kind of a future world would be a happier place. We've plugged in our different assumptions and reached different conclusions.
But to what extent does framing the problem as one of Bayesian analysis bias people to prefer the indefinite extension of individual lives? If your favorite tool is a hammer, things tend to look like nails. My conclusion feels more naturally framed if we ignore individual utilities and just say: a world full of people living indefinitely long would suck. Spelling it out in terms of utility just doesn't add anything.
The practical implications are a separate question. Killing people when they get to be 90 is of course highly repugnant, as is asking them to kill themselves. But it might affect what sort of scientific research we fund and what drugs we approve, for starters.
14 comments
comment by drethelin · 2012-05-25T17:50:32.368Z · LW(p) · GW(p)
Some of your problems solve each other. If people get bored and want to kill themselves at 90, then why would we have overpopulation problems?
Other things you assert based on your own intuitions, which people don't have to share. Why should I accept that a world full of people living indefinitely would suck? No one I know is dying, and I'm pretty fucking happy about that, and I don't see why I'd want my friends to start dying.
comment by Desrtopa · 2012-05-25T17:43:06.264Z · LW(p) · GW(p)
If you live 500 years instead of 100, you haven't achieved anything special. You haven't cheated death. You've just got an extra 400 years of living.
Same for living eighty years instead of twenty.
Globally, we don't have an overpopulation problem. I used to think we did, but we don't. What we have is a resource overconsumption problem. We're overloading the carrying capacity of the earth, but that carrying capacity isn't constant: we've increased it in the past, and it can be increased again.
Increasing the human lifespan many times over and not developing other technology would lead to some very bad outcomes, but not developing technology that will allow us to increase earth's human carrying capacity is almost certainly going to result in bad outcomes regardless.
In the long term, eventually we may reach a genuine overpopulation problem, where it becomes problematic to put more people on the earth even at optimal resource efficiency, but in the long term, earth is not necessarily our only habitation prospect. In the really long term we may start running up against the limitations of what our light cone can support, but that's a problem we've got a lot of time to grapple with.
comment by orthonormal · 2012-05-25T20:21:09.245Z · LW(p) · GW(p)
First, you're not disagreeing with anything Bayesian here, but with the idea of transhumanism that many Less Wrongers share. It's important to realize that the weird ideas that come up here are not all the same weird idea.
Anyway, Eliezer's discussion of this topic starts with Transhumanism as Simplified Humanism and continues with the Fun Theory Sequence, both of which you might find interesting.
comment by Shmi (shminux) · 2012-05-25T18:33:28.258Z · LW(p) · GW(p)
Most of the issues you raise have been debated here and elsewhere many times, yet you did not provide a single link. Either you did not bother familiarizing yourself with the current state of those debates, or you think that you are the first one to come up with such ideas. Or maybe this OP is just a rant. None of those is a particularly good thing, so you get my downvote.
↑ comment by Bart119 · 2012-05-25T19:30:52.955Z · LW(p) · GW(p)
OK. Forgive my modest research skills. I've certainly seen lots of posts that assume that indefinite lifespans are a good thing, but I had never seen any that made contrary claims or rebutted such claims. I would welcome pointers to the best such discussions. It was not intended as a rant.
↑ comment by VincentYu · 2012-05-25T23:15:33.234Z · LW(p) · GW(p)
On LW, you can find discussions about the ethics and desirability of life extension in posts on cryonics. But it's also a well-established academic topic. Bostrom's papers on life extension are probably interesting to the LW crowd:
- The Fable of the Dragon Tyrant (this has been posted several times on LW)
- Recent Developments in the Ethics, Science, and Politics of Life-Extension (a review of this edited volume on life extension)
(Bostrom is, of course, an advocate for vastly extended lifespans. But he does give references to academic papers and popular writings with different conclusions.)
I think a literature review, rather than the current discussion post, would be much more appropriate.
↑ comment by timtyler · 2012-05-25T20:58:15.314Z · LW(p) · GW(p)
My "On methusalahites" video attempts to explain the existence of those who prioritise living for a long time highly - which superficially appears to be a biological anomaly. Essentially, I invoke memetic hijacking.
comment by smk · 2012-05-26T07:30:30.734Z · LW(p) · GW(p)
I guess your post isn't really suited for this context because it's basically just telling us what your preferences are. Oh, well, I find it interesting to see what people's preferences are. And it gives me an excuse to tell you mine. I would prefer a world in which existing people did not die and new people were not created. If for some strange reason new people simply had to be created, they definitely would not be created as utterly dependent creatures who gradually develop personhood. Imagining a world with few children gives you a feeling of wrongness? Well, thinking about childhood gives me a feeling of wrongness. I really hope we get rid of childhood someday.
comment by Richard_Kennaway · 2012-05-26T07:47:23.108Z · LW(p) · GW(p)
I think the following is the refutation in a nutshell.
And as Harry Potter puts it somewhere in MoR, if you don't want to sicken and die right now, if you want to live another day, then by induction, you want to live forever, whatever Deep Wisdom you come up with to persuade yourself of the contrary.
I know The Dragon-Tyrant has been linked already in this thread, but I think it's worth repeating.
The point is that the evilness of death is so blatantly obvious that it is only possible to support the opposite by just making shit up. It's like defending a claim to have a dragon in one's garage. The moment you stop doing that, there's no more argument to be had about it. People shouldn't die.
↑ comment by TheOtherDave · 2012-05-26T15:41:32.801Z · LW(p) · GW(p)
That said, I think we often underestimate how many people do sort of want to die right now and are prevented from doing so by essentially deontological considerations, or the expectation of (and identification with) wanting to live in the future, or a combination of risk-aversion and fear of an afterlife (as per Shakespeare).
Such people, or people in that state, might genuinely consider the prospect of eventual death (at such time as their death is permitted) something to look forward to.
Of course, we might conclude that they have the wrong values and ought to be cured of their depression instead, but that's different from concluding that they're just making shit up.
↑ comment by Richard_Kennaway · 2012-05-26T16:56:54.080Z · LW(p) · GW(p)
Some people are in circumstances so dreadful that they quite rationally don't want to go on. (My mother is 92 and definitely does not want heroic measures to be taken.) But on a large scale, the answer to that is to not get into such a state -- to prolong not merely any sort of existence, but healthy existence. As far as I'm concerned, that's part and parcel of life extension, and Swift's Struldbrugs are just another mistaken objection to long life.
↑ comment by TheOtherDave · 2012-05-26T18:04:08.473Z · LW(p) · GW(p)
Sure, if I don't want to live because my life is insufficiently healthy, one solution is to keep making me healthier until I change my mind, then extend my life.
More generally, if I don't want to live because my life lacks some property X (of which health is one example, but not the only one), one solution is to provide me with X and then extend my life. I'm not sure I would consider the general problem of providing people with everything they lack to make life feel worth living to them to be part and parcel of life extension, but it's not clearly wrong to do so.
comment by Bart119 · 2012-05-25T18:38:17.478Z · LW(p) · GW(p)
Interesting. Downvoted into invisibility. Because of disagreement on conclusions, or form? I suppose an assertion of over-application of rationality is in a sense off-topic, but not in the most important sense. And of course no one has to accept the intuitions (which qualify as Bayesian estimates), but are they so far off they're not worth considering?