Life Extension versus Replacement

post by Julia_Galef · 2011-11-30T01:47:10.475Z · LW · GW · Legacy · 99 comments

Has anyone here ever addressed the question of why we should prefer

(1) Life Extension: Extend the life of an existing person by 100 years
to
(2) Replacement: Create a new person who will live for 100 years?


I've seen some discussion of how the utility of potential people fits into a utilitarian calculus. Eliezer has raised the Repugnant Conclusion, in which a population of 1,000,000 people who each have 1 util is preferable to one of 1,000 people who each have 100 utils. He rejected it, he said, because he's an average utilitarian.
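To make the arithmetic explicit, here is a quick sketch; the only inputs are the population sizes and per-person utils quoted above, and nothing else is assumed:

    # Totals and averages for the two populations in the Repugnant Conclusion
    # example above (the figures are the illustrative ones quoted, not real data).
    large_pop = [1] * 1_000_000   # 1,000,000 people at 1 util each
    small_pop = [100] * 1_000     # 1,000 people at 100 utils each

    for name, pop in [("large", large_pop), ("small", small_pop)]:
        print(name, "total:", sum(pop), "average:", sum(pop) / len(pop))

    # Total utility: 1,000,000 vs. 100,000 -- a total utilitarian prefers the
    # large population.  Average utility: 1 vs. 100 -- an average utilitarian
    # prefers the small one, which is why an average utilitarian can reject
    # the Repugnant Conclusion.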

Fine. But in my thought experiment, average utility remains unchanged. So an average utilitarian should be indifferent between Life Extension and Replacement, right? Or is the harm done by depriving an existing person of life greater in magnitude than the benefit of creating a new life of equivalent utility? If so, why?

Or is the transhumanist indifferent between Life Extension and Replacement, simply believing that his efforts towards radical life extension have a much greater expected value than trying to increase the birth rate?

 

(EDITED to make the thought experiment cleaner. Originally the options were: (1) Life Extension: Extend the life of an existing person for 800 years, and (2) Replacement: Create 10 new people who will each live for 80 years. But that version didn't maintain equal average utility.)


*Optional addendum: Gustaf Arrhenius is a philosopher who has written a lot about this subject; I found him via this comment by utilitymonster. Here's his 2008 paper, "Life Extension versus Replacement," which explores an amendment to utilitarianism that would allow us to prefer Life Extension. Essentially, we begin by comparing potential outcomes according to overall utility, as usual, but we then penalize outcomes if they make any existing people worse off.

So even though the overall utility of Life Extension is the same as Replacement, the latter is worse, because the existing person is worse off than he would have been in Life Extension. By contrast, the potential new person is not worse off in Life Extension, because in that scenario he doesn't exist, and non-existent people can't be harmed. Arrhenius goes through a whole list of problems with this moral theory, however, and by the end of the paper we aren't left with anything workable that would prioritize Life Extension over Replacement.

 

99 comments

Comments sorted by top scores.

comment by Manfred · 2011-11-30T03:07:17.024Z · LW(p) · GW(p)

Here's his 2008 paper, "Life Extension versus Replacement," which explores an amendment to utilitarianism that would allow us to prefer Life Extension

I feel like the thing that should allow us to prefer life extension is the thing that makes people search for amendments to utilitarianism that would allow us to prefer life extension.

Replies from: Julia_Galef
comment by Julia_Galef · 2011-11-30T17:31:59.795Z · LW(p) · GW(p)

When our intuitions in a particular case contradict the moral theory we thought we held, we need some justification for amending the moral theory other than "I want to."

Replies from: Luke_A_Somers, TheOtherDave, None
comment by Luke_A_Somers · 2011-11-30T18:24:47.705Z · LW(p) · GW(p)

I think the point is that utilitarianism is very, very flexible, and whatever it is about us that tells us to prefer life extension should already be there - the only question is, how do we formalize that?

comment by TheOtherDave · 2011-11-30T18:12:21.750Z · LW(p) · GW(p)

Presumably that depends on how we came to think we held that moral theory in the first place.

If I assert moral theory X because it does the best job of reflecting my moral intuitions, for example, then when I discover that my moral intuitions in a particular case contradict X, it makes sense to amend X to better reflect my moral intuitions.

That said, I certainly agree that if I assert X for some reason unrelated to my moral intuitions, then modifying X based on my moral intuitions is a very questionable move.

It sounds like you're presuming that the latter is generally the case when people assert utilitarianism?

Replies from: Julia_Galef
comment by Julia_Galef · 2011-11-30T19:52:30.829Z · LW(p) · GW(p)

Preferring utilitarianism is a moral intuition, just like preferring Life Extension. The former's a general intuition, the latter's an intuition about a specific case.

So it's not a priori clear which intuition to modify (general or specific) when the two conflict.

Replies from: TheOtherDave
comment by TheOtherDave · 2011-11-30T20:18:37.535Z · LW(p) · GW(p)

I don't agree that preferring utilitarianism is necessarily a moral intuition, though I agree that it can be.

Suppose I have moral intuitions about various (real and hypothetical) situations that lead me to make certain judgments about those situations. Call the ordered set of situations S and the ordered set of judgments J.

Suppose you come along and articulate a formal moral theory T which also (and independently) produces J when evaluated in the context of S.

In this case, I wouldn't call my preference for T a moral intuition at all. I'm simply choosing T over its competitors because it better predicts my observations of the world; the fact that those observations are about moral judgments is beside the point.

If I subsequently make judgment Jn about situation Sn, and then evaluate T in the context of Sn and get Jn' instead, there's no particular reason for me to change my judgment of Sn (assuming I even could). I would only do that if I had substituted T for my moral intuitions... but I haven't done that. I've merely observed that evaluating T does a good job of predicting my moral intuitions (despite failing in the case of Sn).

If you come along with an alternate theory T2 that gets the same results T did except that it predicts Jn given Sn, I might prefer T2 to T for the same reason I previously preferred T to its competitors. This, too, would not be a moral intuition.

comment by [deleted] · 2011-11-30T18:22:14.978Z · LW(p) · GW(p)

Well, if you view moral theories as if they were scientific hypotheses, you could reason in the following way: if a moral theory/hypothesis makes a counterintuitive prediction, you could 1) reject your intuition, 2) reject the hypothesis ("I want to"), or 3) revise your hypothesis.

It would be practical if one could actually try out a moral theory, but I don't see how one could go about doing that...

Replies from: Julia_Galef
comment by Julia_Galef · 2011-11-30T19:47:13.118Z · LW(p) · GW(p)

Right -- I don't claim any of my moral intuitions to be true or correct; I'm an error theorist, when it comes down to it.

But I do want my intuitions to be consistent with each other. So if I have the intuition that utility is the only thing I value for its own sake, and I have the intuition that Life Extension is better than Replacement, then something's gotta give.

comment by Jayson_Virissimo · 2011-11-30T09:12:56.003Z · LW(p) · GW(p)

I'm not comfortable spending my time and mental resources on these utilitarian puzzles until I am shown a method (or even a good reason to believe there is such a method) for interpersonal utility comparison. If such a method has already been discussed on Less Wrong, I would appreciate a link to it. Otherwise, why engage in metaphysical speculation of this kind?

Replies from: steven0461, None, endoself
comment by steven0461 · 2011-11-30T20:39:50.345Z · LW(p) · GW(p)

This is most obviously a problem for preference utilitarians. The same preference ordering can be represented by different utility functions, so it's not clear which one to pick.

But utilitarians needn't be preference utilitarians. They can instead maximize some other measure of quality of life. For example, lifetime hiccups would be easy to compare interpersonally.

And if utility can be any measure of quality of life, then interpersonal utility comparison isn't the sort of question you get to refuse to answer. Whenever you make a decision that affects multiple people, and you take their interests into account, you're implicitly doing an interpersonal utility comparison. It's not like you can tell reality it's philosophically mistaken in posing the dilemma.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2011-12-01T07:29:18.600Z · LW(p) · GW(p)

But utilitarians needn't be preference utilitarians. They can instead maximize some other measure of quality of life. For example, lifetime hiccups would be easy to compare interpersonally.

I don't think this will work; it sweeps the difficult part under the rug. When you identify utility with a particular measure of welfare (for example, lifetime hiccups), there really is no good reason to think we all get the same amount of (dis)satisfaction from a single hiccup. Some would be extremely distressed by a hiccup, some would be only slightly bothered, and others would laugh because they think hiccups are funny.

If people actually do get different amounts of (dis)satisfaction from the units of our chosen measure of welfare (which seems to me very likely), then even if we minimize (I'm assuming hiccups are supposed to be bad) the total (or average) number of lifetime hiccups between us, we still don't have very good reason to think that this state of affairs really provides "the greatest amount of good for the greatest number" that Bentham and Mill were hoping for.

Replies from: steven0461
comment by steven0461 · 2011-12-01T20:34:21.636Z · LW(p) · GW(p)

The assumption wasn't that minimizing hiccups maximizes satisfaction, but that it's hiccups rather than satisfaction that matters. Obviously we both agree this assumption is false. We seem to have some source of information telling us lifetime hiccups are the wrong utility function. Why not ask this source what is the right utility function?

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-02-01T07:07:52.275Z · LW(p) · GW(p)

We seem to have some source of information telling us lifetime hiccups are the wrong utility function. Why not ask this source what is the right utility function?

We could settle this dispute on the basis of mere intuition if our intuitions didn't conflict so often. But they do, so we can't.

comment by [deleted] · 2011-11-30T11:42:35.355Z · LW(p) · GW(p)

As a first rough approximation, one could compare fMRIs of people's pleasure or pain centers.

But no, I largely agree with you. If one chooses the numbers so that the average utility of both scenarios is the same, then I don't see any reason to prefer one to the other. If instead one is trying to make some practical claim, it seems clear that in the near future humanity overwhelmingly prefers making new life to researching life extension.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2011-11-30T12:07:37.533Z · LW(p) · GW(p)

As a first rough approximation, one could compare fMRIs of people's pleasure or pain centers.

Hedons are not utilons. If they were, wireheading (or entering the experience machine) would be utility-maximizing.

Replies from: None, None
comment by [deleted] · 2011-11-30T12:35:34.191Z · LW(p) · GW(p)

Oh. Right.

comment by [deleted] · 2012-07-09T21:44:34.722Z · LW(p) · GW(p)

In order for this to be true, it would have to be sustainable enough that the pleasure gain outweighs the potential pleasure loss from a possibly longer life without wireheading/experience machine.

For utilitarians, externalities of one person's wireheading affecting other lives would have to be considered as well.

comment by endoself · 2011-11-30T22:24:47.040Z · LW(p) · GW(p)

I'm not comfortable spending my time and mental resources on these utilitarian puzzles until I am shown a method (or even a good reason to believe there is such a method) for interpersonal utility comparison.

  1. Create an upload of Jayson Virissimo (for the purpose of getting more time to think).

  2. Explain to him, in full detail, the mental states of two people.

  3. Ask him how he would choose if he could either cause the first person to exist with probability p or the second person to exist with probability q, in terms of p and q.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2011-12-01T07:34:27.162Z · LW(p) · GW(p)
  1. Create an upload of Jayson Virissimo (for the purpose of getting more time to think).

  2. Explain to him, in full detail, the mental states of two people.

  3. Ask him how he would choose if he could either cause the first person to exist with probability p or the second person to exist with probability q, in terms of p and q.

At best, this is a meta-method, rather than a method for interpersonal utility comparisons, since I still don't know which method my uploaded-self would use when choosing between the alternatives.

At worst, this would only tell us how much utility my uploaded-self gets from (probably) causing a person to exist with a particular mental state and is not actually an interpersonal utility comparison between the two persons.

Replies from: torekp, endoself
comment by torekp · 2011-12-02T02:28:36.706Z · LW(p) · GW(p)

In some senses of "utility", your uploaded-self's utility rankings of "create person A" and "create person B" are strongly dependent on his estimates of how much A's life has utility for A, and B's has for B. At least if you have a typical level of empathy. But then, this just reinforces your meta-method point.

However ... dig deeper on empathy, and I think it will lead you to steven0461's point.

comment by endoself · 2011-12-01T18:17:52.718Z · LW(p) · GW(p)

At best, this is a meta-method, rather than a method for interpersonal utility comparisons, since I still don't know which method my uploaded-self would use when choosing between the alternatives.

This is at least useful for creating thought experiments where different ideas have different observable consequences, showing that this isn't meaningless speculation.

At worst, this would only tell us how much utility my uploaded-self gets from (probably) causing a person to exist with a particular mental state and is not actually an interpersonal utility comparison between the two persons.

We have reason to care about the definition of 'utility function' that is used to describe decisions, since those are, by definition, how we decide. Hedonic or preferential functions are only useful insofar as our decision utilities take them into account.

comment by ShardPhoenix · 2011-11-30T04:01:19.015Z · LW(p) · GW(p)

A currently living person doesn't want to die, but a potentially living person doesn't yet want to live, so there's an asymmetry between the two scenarios.

Replies from: Richard_Kennaway, Julia_Galef, endoself, Lightwave
comment by Richard_Kennaway · 2011-11-30T08:20:27.305Z · LW(p) · GW(p)

Is that still true in Timeless Decision Theory?

Replies from: tondwalkar
comment by tondwalkar · 2013-07-31T13:12:59.503Z · LW(p) · GW(p)

I'd prefer never having existed to death at the moment. This might change later if I gain meaningful accomplishments, but I'm not sure how likely that is.

comment by Julia_Galef · 2011-11-30T17:22:18.428Z · LW(p) · GW(p)

I agree, and that's why my intuition pushes me towards Life Extension. But how does that fact fit into utilitarianism? And if you're diverging from utilitarianism, what are you replacing it with?

Replies from: None
comment by [deleted] · 2011-12-03T01:49:25.868Z · LW(p) · GW(p)

But how does that fact fit into utilitarianism?

That birth doesn't create any utility for the person being born (since it can't be said to satisfy their preferences), but death creates disutility for the person who dies. Birth can still create utility for people besides the one being born, but then the same applies to death and disutility. All else being equal, this makes death outweigh birth.

comment by endoself · 2011-11-30T22:31:56.351Z · LW(p) · GW(p)

To make this more precise think about what you would do if you had to choose between Life Extension and Replacement for a group of people, none of whom yet exist. I think the intuition in favour of Life Extension is the same, but I am not sure (I also find it very likely that I am actually indifferent ceteris paribus, for some value of 'actually' and sufficiently large values of 'paribus').

comment by Lightwave · 2011-11-30T08:51:58.563Z · LW(p) · GW(p)

Current people would prefer to live for as long as possible, but should they, really? What if they prefer it in the same sense that some prefer dust specks over torture? How can you justify extension as opposed to replacement apart from current people just wanting it?

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2011-12-02T04:14:35.790Z · LW(p) · GW(p)

I thought everything in utilitarianism was justified by what people want, as in what maximizes their utility... How is the fact that people want extension as opposed to replacement not a justification?

Replies from: Lightwave
comment by Lightwave · 2011-12-02T09:08:58.813Z · LW(p) · GW(p)

What maximizes their utility might not be what they (currently) want, e.g. a drug addict might want more drugs, but you probably wouldn't argue that just giving him more drugs maximizes his utility. There's a general problem that people can change what they want as they think more about it, become less biased/irrational, etc, so you have to somehow capture that. You can't just give everyone what they, at that current instant, want.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2011-12-02T18:47:17.709Z · LW(p) · GW(p)

But wouldn't more life maximize the individual utility generally? It's not like people are mistaken about the value of living longer. I get your argument, but the fact that people want to live longer (and would still want to even after ideally rational and fully informed) means that the asymmetry is still there.

Replies from: Lightwave
comment by Lightwave · 2011-12-03T10:00:42.649Z · LW(p) · GW(p)

Let me try to explain it this way:

Let's say you create a model of (the brain of) a new person on a computer, but you don't run the brain yet. Can you say the person hasn't been "born" yet? Are we morally obliged to run his brain (so that he can live)? Compare this to a person who is in a coma. He currently has no preferences, he would've preferred to live longer, if he were awake, but the same thing applies to the brain in the computer that's not running.

Additionally, it seems life extensionists should also be committed to the resurrection of everyone who's ever lived, since they also wanted to continue living, and it could be said that being "dead" is just a temporary state.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2011-12-10T07:43:02.951Z · LW(p) · GW(p)

I'm going to get hazy here, but I think the following answers are at least consistent:

Let's say you create a model of (the brain of) a new person on a computer, but you don't run the brain yet. Can you say the person hasn't been "born" yet?

Yes.

Are we morally obliged to run his brain (so that he can live)?

No.

Compare this to a person who is in a coma. He currently has no preferences, he would've preferred to live longer, if he were awake, but the same thing applies to the brain in the computer that's not running.

They are not equivalent, because the person in the coma did live.

Additionally, it seems life extensionists should also be committed to the resurrection of everyone who's ever lived, since they also wanted to continue living, and it could be said that being "dead" is just a temporary state.

Yes, I do think life extensionists are committed to this. I think this is why they endorse Cryonics.

Replies from: Lightwave
comment by Lightwave · 2011-12-23T09:06:19.792Z · LW(p) · GW(p)

They are not equivalent, because the person in the coma did live.

Well, it seems it comes down to the above being something like a terminal value (if those even exist). I personally can't see how it's justified that a certain mind that happened (by chance) to exist at some point in time is more morally significant than other minds that would equally like to be alive but never had the chance to be created. It's just arbitrary.

Replies from: peter_hurford
comment by Peter Wildeford (peter_hurford) · 2012-01-05T06:13:07.184Z · LW(p) · GW(p)

Upon further reflection, I think I was much too hasty in my discussion here. You said, "Compare this to a person who is in a coma. He currently has no preferences". How do we know the person in the coma has no preferences?

I'm going to agree that if the person has no preferences, then there is nothing normatively significant about that person. This means we don't have to turn the robot on, we don't have to resurrect dead people, we don't have to oppose all abortion, and we don't have to have as much procreative sex as possible.

On this further reflection, I'm confused as to what your objection is or how it makes life extension and replacement even. As the original comment says, life extension satisfies existing preferences whereas replacement does not, because no such preferences exist.

comment by Normal_Anomaly · 2011-12-01T01:12:22.307Z · LW(p) · GW(p)

I am an average utilitarian with one modification: Once a person exists, they are always counted in the number of people I average over, even if they're dead. For instance, a world where 10 people are born and each gets 50 utility has 10 × 50 / 10 = 50 utility. A world where 20 people are born, then 10 of them die and the rest get 50 utility each has (10 × 50 + 10 × 0) / 20 = 25 utility. AFAICT, this method has several advantages:

  1. It avoids the repugnant conclusion.
  2. It avoids the usual argument against average utilitarianism, namely that it advocates killing off people experiencing low (positive) utility.
  3. It favors life extension over replacement, which fits both my intuitions and my interests. It also captures the badness of death in general.
  4. A society that subscribed to it would revive cryopreserved people.
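Here is a minimal sketch of the rule above applied to the post's two scenarios; the figure of 1 util per life-year and the assumption that the existing person has already lived 100 years are placeholders, not part of the rule itself:

    # "Dead people stay in the denominator" average, as described above.
    # Placeholder assumptions: 1 util per life-year, and an existing person
    # who has already lived 100 years.
    def modified_average(lifetime_utilities):
        """Average over everyone who has ever existed, dead or alive."""
        return sum(lifetime_utilities) / len(lifetime_utilities)

    # Life Extension: the existing person ends up with 200 life-years.
    life_extension = modified_average([200])      # 200.0

    # Replacement: the existing person stops at 100; a new person gets 100.
    replacement = modified_average([100, 100])    # 100.0

    assert life_extension > replacement   # the rule favors Life Extension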
Replies from: Larks, KatieHartman, None
comment by Larks · 2011-12-01T11:05:57.796Z · LW(p) · GW(p)

This doesn't seem to be monotonic in Pareto improvements.

Suppose I had the choice between someone popping into existence for 10 years on a distant planet, living a worthwhile life, and then disappearing. They would prefer this to happen, and so might everyone else in the universe; however, if others' utilities were sufficiently high, this person's existence might lower the average utility of the world.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-12-01T11:22:33.727Z · LW(p) · GW(p)

That is . . . a pretty solid criticism. Half of the reason I posted this was to have people tear holes in it.

I'm looking for some way of modeling utilitarianism that adequately expresses the badness of death and supports resurrecting the dead, but maybe this isn't it. Perhaps a big negative penalty for deaths or "time spent dead," though that seems inelegant.

EDIT: Looking at this again later, I'm not sure what counts as a Pareto improvement. Someone popping into existence, living happily for one day, and then disappearing would not be a good thing according to (my current conception of) my values. That implies there's some length of time or amount of happiness experienced necessary for a life to be worth creating.

Replies from: jhuffman
comment by jhuffman · 2011-12-01T21:28:11.673Z · LW(p) · GW(p)

Isn't there something a little bit broken about trying to find a utility system that will produce the conclusions you presently hold? How would you ever know if your intuitions were wrong?

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-12-01T22:44:51.865Z · LW(p) · GW(p)

What basis do I have for a utility system besides my moral intuitions? If my intuitions are inconsistent, I'll notice that because every system I formulate will be inconsistent. (Currently, I think that if my intuitions are inconsistent the best fix will be accepting the repugnant conclusion, which I would be relatively okay with.)

Replies from: jhuffman
comment by jhuffman · 2011-12-02T15:21:03.274Z · LW(p) · GW(p)

I understand what you are saying. But when I start with a conclusion, what I find myself doing is rationalizing. Even if my reasons are logically consistent I am suspicious of any product based on this process.

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-12-02T22:16:34.286Z · LW(p) · GW(p)

If it helps, the thought process that produced the great^4-grandparent was something like this:

"Total utilitarianism leads to the repugnant conclusion; average leads to killing unhappy people. If there was some middle ground between these two broken concepts . . . hm, what if people who were alive and are now dead count as having zero utility, versus the utility they could be experiencing? That makes sense, and it's mathematically elegant. And it weighs preserving and restoring life over creating it! This is starting to look like a good approximation of my values. Better post it on LW and see if it stands up to scrutiny."

comment by KatieHartman · 2011-12-01T05:13:53.746Z · LW(p) · GW(p)

It seems that you could use this to argue that nobody ever ought to be born unless we can ensure that they'll never die (assuming they stay dead, as people tend to do now).

Replies from: Normal_Anomaly
comment by Normal_Anomaly · 2011-12-01T11:17:40.300Z · LW(p) · GW(p)

I bite this bullet to an extent, but I don't think the argument is that strong. If someone has a better-than-average life before they die, they can still raise the average, especially if everyone else dies too. I'm not sure how to model that easily; I'm thinking of something like: the utility of a world is the integral of all the utilities of everyone in it (all the utility anyone ever experiences), divided by the number of people who ever existed. In this framework, I think it would be permissible to create a mortal person in some circumstances, but they might be too rare to be plausible.
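In symbols, a sketch of that measure (the notation is only illustrative: u_i(t) is the utility person i experiences at time t, taken to be zero while dead, and N is the number of people who ever exist):

    U_{\text{world}} \;=\; \frac{1}{N} \sum_{i=1}^{N} \int_{0}^{\infty} u_i(t)\, dt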

comment by [deleted] · 2011-12-01T05:00:11.197Z · LW(p) · GW(p)

I like this. Captures everything nicely. Em-ghettos and death both suck. It is good to have a firm basis to argue against them.

comment by daenerys · 2011-11-30T08:25:08.073Z · LW(p) · GW(p)

This actually reminds me of a movie trailer I saw the other day, for a movie called In Time. (Note: I am not at all endorsing it or saying you should see it. Apparently, it sucks! lol)

General premise of the sci-fi world- People live normally until 25. Then you stop aging and get a glowy little clock on your arm, that counts down how much time you have left to live. "Time" is pretty much their version of money. You work for time. You trade time for goods, etc. Rich people live forever; Poor people die very young. (pretty much imagine if over-drafting your bank account once means that you die)

Anyway, when I saw this preview, being the geek I am, I thought: "That doesn't make sense!"

The reason it doesn't make sense has to do with the extension v. replacement argument. Until the age of at least 16, and more generally 22-ish, people are a drain rather than a benefit to society. The economic cost of maintaining a child is not equal to the output of a child. (I'm obviously not talking about love, fulfillment of the parents, etc.)

This society's idea is that people of working age would be required to provide the economic cost for their life. However, what would actually end up happening is that the birth rates would climb sky-high (since people die young, and you need some of your children to make it so that once you can't keep working anymore they can provide hours for you). So society would be burdened with raising and educating a disproportionately large number of children, but not getting full utility out of them (aka they would start killing them off once they actually reached working/productive age).

In other words, society pays a lot to raise a kid, and then kills it after only getting a couple of productive years out of it. Does not compute.

So my thought, upon seeing this trailer, was that it would make no sense for that society to allow everyone to have children, and only rich people to live forever. It would make way more sense for that society to allow everyone to live forever, and only allow rich people (who could completely pay for their children's upbringing) to have children. (I am not saying this is at all a "good" idea, but given the premise of the film, it was the much more reasonable alternative).

In other words, you can continue getting utility out of one person if they live forever, but if you are going the "replacement" route you constantly have to be pouring money into their education and upbringing.

Note: I am not actually arguing for extension over replacement in any society other than the rather far-fetched one presented in the movie.

comment by drethelin · 2011-11-30T02:07:29.153Z · LW(p) · GW(p)

Response a) My life gets better with each year I live. I learn new things and make new friends. 2 people who live 12 years will not have the same amount of happiness as I will on my birthday, when I turn 24. I see no reason why the same should not hold for even longer lifespans.

Response b) I privilege people that already exist over people who do not exist. A person living 800 years is more valuable to me EVEN if you say the same amount of happiness happens in both cases. I care about existing people being happy, and about not creating sad people, but I don't particularly care about creating new happy entities unless it's necessary for the perpetuation of humanity, which is something I value.

Response c) The personal response: I value my own happiness significantly higher than that of other people. 1 year of my own life is worth more to me than 1 year of someone else's life. If my decision were between creating 10 people as happy as I am or making myself 10 times happier, I would make myself 10 times happier.

Finally, you don't seem to realize what is meant by caring about average utility. In your scenario, the TOTAL years lived remains the same in both cases, but the AVERAGE utility goes far down in the second case. 80 years per person is a lot less than 800 years per person.

Replies from: Logos01, Julia_Galef
comment by Logos01 · 2011-11-30T02:34:10.710Z · LW(p) · GW(p)

In your scenario, the TOTAL years lived remains the same in both cases, but the AVERAGE utility goes far down in the second case. 80 years per person is a lot less than 800 years per person.

Not only that, but there is a decent claim to be made -- within certain bounds -- that ten people who live only 100 years each are less preferable to a utilitarian than 1 person who lives 1,000 years, so long as we accept the notion that deaths cause others to experience negative utility. The same number of years are lived, but even without attempting to average utility, the 10x100 scenario has 9 additional negative-utility events that the 1x1,000 does not.

Replies from: Prismattic
comment by Prismattic · 2011-11-30T04:19:37.103Z · LW(p) · GW(p)

Implied assumption: death causes more disutility to others than birth causes utility to others. Might be true, but ought to be included explicitly in any such calculation.

Replies from: Logos01
comment by Logos01 · 2011-11-30T04:25:09.151Z · LW(p) · GW(p)

True.

comment by Julia_Galef · 2011-11-30T02:19:31.206Z · LW(p) · GW(p)

Thanks -- I fixed the setup.

Replies from: None
comment by [deleted] · 2011-11-30T02:20:18.812Z · LW(p) · GW(p)

Please don't do that. OP's comment doesn't make any sense now.

Replies from: Julia_Galef
comment by Julia_Galef · 2011-11-30T02:52:41.025Z · LW(p) · GW(p)

Ah, true! I edited it again to include the original setup, so that people will know what Logos01 and drethelin are referring to.

comment by Grognor · 2011-11-30T15:26:05.018Z · LW(p) · GW(p)

First thought: I accept the repugnant conclusion because I am a hard utilitarian. I also take the deals in the lifespan dilemma, because my intuition that the epsilon chances of survival "wouldn't be worth it" is due to scope insensitivity.

Second: I attach much more disutility to death than utility to birth for two reasons, one good and one bad. The bad reason is that I selfishly do not want to die. The good reason, which I have not seen mentioned, is that the past is not likely to repeat itself. Memories of the past have utility in themselves! History is just lines on paper, sometimes with videos, sometimes not, but it doesn't compare to actual experience! Experience and memory matter. Discounting them is an error in utilitarian reasoning.

Replies from: jhuffman
comment by jhuffman · 2011-12-01T21:34:07.035Z · LW(p) · GW(p)

The exact circumstances and memories of a person's life will not repeat but that's just as good an argument for creating new people who will also have unique memories that otherwise would not happen. While some remarkable memories from the past would be in some ways special to me if I can trace any sort of cultural lineage through them, memories from closely intertwined lives would interest me less than other memories that would be completely novel to me.

Replies from: Grognor
comment by Grognor · 2011-12-01T21:51:31.656Z · LW(p) · GW(p)

You're right. But here's the thing. I should have said it in my original comment, but the argument holds because learning from history is important, and, as we've all shown, that's REALLY HARD to do when everyone keeps dying. And I also strongly value the will to awesomeness, striving to be better and better (even before I read Tsuyoku Naritai), and I expect that people start at 0 and increase faster than linearly over time. In other words, the utility is still greater for the people who are still alive.

comment by rwallace · 2011-11-30T11:50:13.758Z · LW(p) · GW(p)

I'm perfectly prepared to bite this bullet. Extending the life of an existing person a hundred years and creating a new person who will live for a hundred years are both good deeds, they create approximately equal amounts of utility and I believe we should try to do both.

Replies from: torekp
comment by torekp · 2011-12-02T02:20:30.554Z · LW(p) · GW(p)

I agree. Note that this is independent of utilitarianism per se.

comment by FeepingCreature · 2011-11-30T15:52:04.462Z · LW(p) · GW(p)

I already exist. I prefer to adopt a ruleset that will favor me continuing to exist. Adopting a theory that does not put disutility on me being replaced with a different human would be very disingenuous of me. Advocating the creation of an authority that does not put disutility on me being replaced with a different human would also be disingenuous.

For spreading your moral theory, you need the support of people who live, not people who may live. Thus, your moral theory must favor their interests.

[edit] Is this metautilitarianism?

Replies from: jhuffman
comment by jhuffman · 2011-12-01T21:40:08.798Z · LW(p) · GW(p)

I am rich because I own many slaves. I prefer to adopt a ruleset that will favor me by continuing to provide me with slaves. ... etc.

Replies from: FeepingCreature
comment by FeepingCreature · 2011-12-02T14:10:47.483Z · LW(p) · GW(p)

Which is not necessarily a bad choice for you!

Very few people are trying to genuinely choose the most good for the most people; they're trying to improve their group status by signalling social supportiveness. There's no point to that if your group will be replaced; even suicide bombers require the promise of life after death or rewards for their family.

comment by prase · 2011-11-30T11:04:43.690Z · LW(p) · GW(p)

In the Replacement scenario, people die twice as often. Since expectations of near death are unpleasant and death itself is unpleasant for the relatives and friends, doubling the number of deaths induces additional disutility, ceteris paribus.

comment by Xachariah · 2011-12-01T04:00:08.250Z · LW(p) · GW(p)

I don't see how this is a paradox at all.

Scenario (1) creates 100 years of utility, minus the death of one person. Scenario (2) creates 100 years of utility, plus the birth of one person, minus the death of two people. We can set them equal to each other and solve for the variables: you should prefer scenario (1) to scenario (2) iff the negative utility caused by a death is greater than the utility caused by a birth. Imagine that a child was born and then immediately died ten minutes later. Is this a net positive or negative utility? I vote negative, and I think most people agree; death outweighs birth.
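A minimal sketch of that comparison, with placeholder numbers for the per-year, per-birth, and per-death (dis)utilities (none of these figures are from the comment; only the algebra matters):

    # Placeholder figures; only the algebraic comparison matters.
    YEAR_UTILITY = 1.0       # utility of one life-year, assumed equal in both cases
    BIRTH_UTILITY = 0.3      # utility created by a birth (hypothetical)
    DEATH_DISUTILITY = 0.8   # disutility created by a death (hypothetical)

    extension = 100 * YEAR_UTILITY - 1 * DEATH_DISUTILITY
    replacement = 100 * YEAR_UTILITY + BIRTH_UTILITY - 2 * DEATH_DISUTILITY

    # extension - replacement == DEATH_DISUTILITY - BIRTH_UTILITY, so
    # extension wins exactly when a death costs more than a birth gains.
    assert (extension > replacement) == (DEATH_DISUTILITY > BIRTH_UTILITY)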

(As an interesting sidenote, if we lived in a world where the value of Birth outweighed the value of Death, I think most of us would happily change our preference ordering. Eg, If we lived in the Children of Men world, we'd go with scenario (2) because a new birth is more important than a new death. Or if we lived in a universe where there really was a heaven, we'd go with scenario (2) as well because the value of death would be near zero.)

Things get less simple when we take into account the fact that all years and deaths don't generate equal (dis)utility. The disutility caused by Death(newborn) != Death(10 year old) != Death(20 year old) != Death(80 year old) != Death(200 year old). Similarly, the utility generated by a child's 3rd->4th year is nowhere near equivalent to the utility of someone's 18th->19th year. I would assume the external utility generated by someone's 101st->200th years to far, far outweigh the external utility generated by their 1st->100th years (contributing to the world by being a valuable source of wisdom). By any reasonable calculation it seems that the net utility in scenario (1) significantly outweighs the net utility of scenario (2).

Different people might have different expected values for the utility and disutility of years/deaths, and thus get differing results. But it seems if you had sufficiently accurate data with which to calculate expected utility, you could actually determine what those utilities are and they wouldn't come out equal. However, just because something is incredibly hard to calculate doesn't mean you throw your hands up and say that they must be equal. You do what you always do with insufficient information: approximate as best you can, double check your numbers, and hope you don't miss anything.

Replies from: None
comment by [deleted] · 2011-12-03T02:01:35.969Z · LW(p) · GW(p)

I'm not sure about the Children of Men example: a birth in that situation is only important in that it implies MORE possible births. If it doesn't, I still say that a death outweighs a birth.

But here's another extremely inconvenient possible world:

People aren't 'born' in the normal sense - instead they are 'fluctuated' into existence as full-grown adults. Instead of normal 'death', people simply dissolve painlessly after a given amount of time. Nobody is aware that at some point in the future they will 'die', and whenever someone does all currently existing people have their memories instantly modified to remove any trace of them.

I still prefer option (1) in this scenario, but I'm much less confident of it.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2013-04-26T16:50:21.484Z · LW(p) · GW(p)

People aren't 'born' in the normal sense - instead they are 'fluctuated' into existence as full-grown adults. Instead of normal 'death', people simply dissolve painlessly after a given amount of time. Nobody is aware that at some point in the future they will 'die', and whenever someone does all currently existing people have their memories instantly modified to remove any trace of them.

This scenario is way, way worse than the real world we live in. It's bad enough that some of my friends and loved ones are dead. I don't want to lose my memories of them too. The social connections people form with others are one of the most important aspects of their lives. If you kill someone and destroy all their connections at the same time you've harmed them far more badly than if you just killed them.

Plus, there's also the practical fact that if you are unaware of when you will "dissolve" it will be impossible for you to plan your life to properly maximize your own utility. What if you had the choice between going to a good movie today, and a great movie next week, and were going to dissolve tomorrow? If you didn't know that you were going to dissolve you'd pick the great movie next week, and would die having had less fun than you otherwise could have had.

I'd prefer option 1 in this scenario, and in any other, because the title of the OP is a misnomer: people can't be replaced. The idea that you are "replacing" someone if you create a new person after they die implies that people are not valuable, that they are merely containers for holding what is really valuable (happiness, utility, etc.), and that it does not matter if a container is destroyed as long as you can make a new one to transfer its contents into. I completely disagree with this approach. Utility is valuable because people are valuable, not the other way around. A world with lower utility where fewer people have died is better than a world of higher utility with more death.

comment by shokwave · 2011-11-30T05:53:26.957Z · LW(p) · GW(p)

I prefer 1 to 2 because I'm currently alive, and so 1 has a more direct benefit for me than 2. I don't know if I have any stronger reasons; I don't think I need any, though.

comment by steven0461 · 2011-11-30T02:11:05.302Z · LW(p) · GW(p)

I really need to fix my blog archive, but I discussed this in the post at the top of this page.

Replies from: Julia_Galef
comment by Julia_Galef · 2011-11-30T03:19:24.828Z · LW(p) · GW(p)

Thanks -- but if I'm reading your post correctly, your arguments hinge on the utility experienced in Life Extension being greater than that in Replacement. Is that right? If I stipulate that the utility is equal, would your answer change?

Replies from: steven0461
comment by steven0461 · 2011-11-30T04:01:58.305Z · LW(p) · GW(p)

If utility per life year is equal, and total life years are equal, then total utility is equal and total utilitarianism is indifferent. But for the question to be relevant for decision-making purposes, you have to keep constant not utility itself, but various inputs to utility, such as wealth. Nobody is facing the problem of how to distribute a fixed utility budget. (And then after that, of course, you can analyze how those inputs themselves would vary as a result of life extension.)

I object to the phrasing "utility experienced". Utility isn't something you experience, it's a statement about a regularity in someone's preference ordering -- in this case, mine.

comment by amcknight · 2011-12-01T01:21:22.995Z · LW(p) · GW(p)

I think it comes down to how you value relationships. I don't want my family replaced, so replacing one of them with someone in a similarly valuable mental state might be equal in terms of their mental state, but because you've broken a relationship I value, the total utility has dropped. Other than this, I'm not sure I can see a relevant difference between extension and replacement.

comment by Larks · 2011-11-30T22:51:54.899Z · LW(p) · GW(p)

I assume everyone is familiar with the following argument:

Premise: You are not indifferent about the utility of people who will come to exist, if they definitely will exist.

Conclusion: You can't be in general indifferent between people existing and not existing.

World A: Person has 10 utility.
World B: Person does not exist.
World C: Person has 20 utility.

By hypothesis, you're not indifferent between A and C. Hence, by transitivity, you can't be indifferent both between A and B and between B and C.

comment by DanielLC · 2011-11-30T06:25:09.973Z · LW(p) · GW(p)

Ignoring the fact that replacements tend to be expensive, I'd consider them equal in utility if I believed in personal identity. I don't, so not only are they equally good, they are, for all intents and purposes, the same choice.

Replies from: None
comment by [deleted] · 2011-12-01T16:33:56.388Z · LW(p) · GW(p)

Downvoted for using such an ill-defined term as "personal identity" without additional specification.

Replies from: DanielLC
comment by DanielLC · 2011-12-01T20:21:50.075Z · LW(p) · GW(p)

I don't think there's any fundamental connection between past and future iterations of the same person. You die and are replaced by someone else every moment. Extending your life and replacing you are the same thing.

Replies from: orthonormal, Curiouskid, None
comment by orthonormal · 2011-12-02T05:14:41.745Z · LW(p) · GW(p)

I don't need to posit any metaphysical principle; my best model of the universe (at a certain granularity) includes "agents" composed of different mind-states across different times, with very similar architecture and goals, connected by memory to one another and coordinating their actions.

Replies from: DanielLC
comment by DanielLC · 2011-12-02T05:16:35.945Z · LW(p) · GW(p)

Exactly what changes if you remove the "agents", and just have mind-states that happen to have similar architecture and goals?

Replies from: orthonormal
comment by orthonormal · 2011-12-02T05:48:56.087Z · LW(p) · GW(p)

At present, when mind-copying technology doesn't exist, there's an extremely strong connection exhibited by the mind-states that occupy a given cranium at different times, much stronger than that exhibited by any two mind-states that occupy different crania. (This shouldn't be taken naively -- I and my past self might disagree on many propositions that my current self and you would agree on -- but there's still an architectural commonality between my present and past mind-states, that's unmistakably stronger than that between mine and yours.)

Essentially, grouping together mind-states into agents in this way carves reality at its proper joints, especially for purposes of deciding on actions now that will satisfy my current goals for future world-states.

Replies from: DanielLC
comment by DanielLC · 2011-12-02T06:06:40.009Z · LW(p) · GW(p)

Essentially, grouping together mind-states into agents in this way carves reality at its proper joints

So does specifying rubes and bleggs. This is what I mean by there being nothing fundamentally separating them. It might matter whether it's red or blue, or whether it's a cube or an egg, but it can't possibly matter whether it's a rube or a blegg, because it isn't a rube or a blegg.

Replies from: orthonormal
comment by orthonormal · 2011-12-03T00:07:11.039Z · LW(p) · GW(p)

At present, there aren't any truly intermediate cases, so "agents with an identity over time" are useful concepts to include in our models; if all red objects in a domain are cubic and contain vanadium, "rube" becomes a useful concept.

In futures where mind-copying and mind-engineering become plentiful, this regularity will no longer be the case, and our decision theories will need to incorporate more exotic kinds of "agents" in order to be successful. I'm not talking about agents being fundamental- they aren't- just that they're tremendously useful components of certain approximations, like the wings of the airplane in a simulator.

Even if a concept isn't fundamental, that doesn't mean you should exclude it from every model. Check instead to see whether it pays rent.

Replies from: DanielLC
comment by DanielLC · 2011-12-03T01:10:31.990Z · LW(p) · GW(p)

My point isn't that it's a useless concept. It's that it would be silly to consider it morally important.

Replies from: Vladimir_Nesov, orthonormal
comment by Vladimir_Nesov · 2011-12-03T13:15:34.431Z · LW(p) · GW(p)

You argued that a concept "isn't fundamental", because in principle it's possible to construct things gradually escaping the current natural category, and therefore it's morally unimportant. Can you give an example of a morally important category?

comment by orthonormal · 2011-12-03T01:47:06.960Z · LW(p) · GW(p)

Sorry, but my moral valuations aren't up for grabs. I'm not perfectly selfish, but neither am I perfectly altruistic; I care more about the welfare of agents more like me, and particularly about the welfare of agents who happen to remember having been me. That valuation has been drummed into my brain pretty thoroughly by evolution, and it may well survive in any extrapolation.

But at this point, I think we've passed the productive stage of this particular discussion.

comment by Curiouskid · 2011-12-01T22:26:48.180Z · LW(p) · GW(p)

fundamental connection between past and future iterations of the same person

like memory?

Replies from: DanielLC
comment by DanielLC · 2011-12-01T23:59:21.744Z · LW(p) · GW(p)

There is nothing morally important about remembering being someone. There's no reason there has to be the same probability of being you and being one of the people you remember being. Memory exists, but it's not relevant.

Read The Anthropic Trilemma. I agree with the third horn.

Replies from: jacob_cannell
comment by jacob_cannell · 2011-12-12T21:15:58.745Z · LW(p) · GW(p)

Memory exists, but it's not relevant.

I find this odd because it sounds like the exact opposite of the patternist view of identity, where memory is all that is relevant.

Would you not mind then if some process erased all of your memories? Or replaced them completely with the memories of someone else?

Replies from: DanielLC
comment by DanielLC · 2011-12-13T01:42:50.182Z · LW(p) · GW(p)

I find this odd because it sounds like the exact opposite of the patternist view of identity

It's the lack of the patternist view of identity. I have no view of identity, so I disagree.

Would you not mind then if some process erased all of your memories?

It would be likely to cause problems, but beyond that, no. I don't see why losing your memory would be intrinsically bad.

I think the main thing I'm against is that any of this is fundamental enough to have any effect on anthropics. Erasing your memory and replacing it with someone else's who's still alive won't make it half as likely to be you, just because there's only a 50% chance of going from past him to you. Erasing your memory every day won't make it tens of thousands of times as likely to be one of them, on the basis that now you're tens of thousands of people.

You could, in principle, have memory mentioned in your utility function, but it's not like it's the end of the world if someone dies. I mean that in the sense that existence ceases for them or something like that. You could still consider it bad enough to warrant the phrase "it's like the end of the world".

comment by [deleted] · 2011-12-01T21:43:12.974Z · LW(p) · GW(p)

I don't know if I would call a mind-state a person; persons usually respond to things, think, and so on, and a mind-state can't do any of that. It's somewhat like saying "a movie is made of little separate movies" when it's actually separate frames. And death implies the end of a person, not a mind-state. It might be a bit silly of me to make all this fuss about definitions, but it's already a quite messy subject; let's not make it any messier.

Replies from: DanielLC
comment by DanielLC · 2011-12-02T00:07:42.646Z · LW(p) · GW(p)

Fine, there's no fundamental connection between separate mind-states. Personhood can be defined (mostly), but it's not fundamentally important whether or not two given mind-states are connected by a person. All that matters is the mind-states, whether you're talking about morality or anthropics.

Replies from: None
comment by [deleted] · 2011-12-02T00:32:30.842Z · LW(p) · GW(p)

All this is of course very speculative, but couldn't you just reduce mind-states into sub-mind-states? If you look at split-brain patients, whose corpus callosum has been cut, the two hemispheres behave/report in some situations as if they were two different people; it seems (at least to me) that there are no irreducible quanta such as "brain-states" either. My point is that you could make the same argument:

It's not fundamentally important whether or not two given sub-mind-states are connected by a mind state. All that matters is the sub-mind-states.

Replies from: DanielLC
comment by DanielLC · 2011-12-02T01:10:20.610Z · LW(p) · GW(p)

It seems to me that my qualia are all experienced together, or at least the ones that I'm aware of. As such, there is more than just sub-mind-states. There is a fundamental difference. For what it's worth, I don't consider this difference morally relevant, but it's there.

comment by Protagoras · 2011-11-30T03:46:45.533Z · LW(p) · GW(p)

I guess that my own response to the repugnant conclusion tends to be along the lines that mere duplication does not add value, and the more people there are, the closer the inevitable redundancy will bring you to essentially adding duplicates of people you already have. At least as things are at present, giving an existing person an extra hundred years seems like it will involve less redundancy than adding yet another person with a hundred year lifespan to the many we already have and are constantly adding.

comment by Ghatanathoah · 2013-03-20T05:50:09.889Z · LW(p) · GW(p)

We can deal with this with a thought experiment that engages our intuitions more clearly, since it doesn't involve futuristic technology: Is it okay to kill a fifteen-year-old who is destined to live a good life if doing so will allow you to replace them with someone whose life will be as good as, or better than, the fifteen-year-old's remaining years would have been? What if the fifteen-year-old in question is disabled, so their life is a little more difficult, but still worth living, while their replacement would be an able person? Would it be okay then?

The answers are obvious. No and no. Once someone exists, keeping them alive and happy is much more important than creating new people. It isn't infinitely more important; it would be wrong to sterilize the entire human race to prevent one existing person from getting a dust speck in their eye. But it is much, much, much more important.

Life extension isn't just better than replacement, it is better by far, even if the utility of the person with an extended life is much lower than the utility their replacement would have.

I suspect that the reason for this is that population ethics isn't about maximizing utility. If it were, we wouldn't be trying to create more people; we'd be trying to figure out how to kill the human race and replace it with another species whose preferences are easier to satisfy.* I believe that the main reason to create new people is that having the human race continue to exist helps fulfill certain ideals, such as Fun Theory and the various complex human values. If you try to do population ethics just by adding up the utility of the creatures being created, you're doing it wrong.

Now, once we've created someone we do have a responsibility to make sure they have high utility (you can't unbirth a child). If we know they are going to exist we should definitely take steps to improve their utility even before they come into existence. And if you're trying to decide between creating two people who fulfill our ideals equally well, which one has a higher level of utility is definitely a good tiebreaker. But a person's utility isn't the main reason we create them. If I had a choice between making a human with positive utility and making a kiloton of orgasmium, all other things being equal I'd pick the human, because the complex values of a human being further my moral ideals far better than orgasmium does.

*Anyone who disagrees and believes that creating human beings (or nonhuman creatures with human-like values) is the most efficient way to maximize utility should consider the Friendly AI problem. Imagine that someone has just created an AI programmed to "maximize preference satisfaction." The AI is extremely intelligent and has access to immense resources. All that needs to be done is switch it on. What is your honest, Bayesian probability that, if you switch on the AI, it will not eventually try to exterminate the human race and replace it with creatures who have cheaper, easier to satisfy preferences?

comment by jacob_cannell · 2011-12-12T22:11:32.134Z · LW(p) · GW(p)

This is a really interesting issue which I suspect will only get more important over time. I largely agree with Xachariah, but I see a greater dependency on personal preference.

Another way of looking at the problem is to consider individual preferences. Imagine a radical, sustainable future where everyone gets to choose between an extended life with no children and a normal life with 1 child (or 2 per couple). I'd be really interested in polls on that choice. Personally I'd choose extension over children. I also suspect that polls may reveal a significant gender gap on this issue, but that's just speculation.

I have a suspicion that linear aggregation of individual utilities over all possible people is perhaps not the best global decision process. There should probably be a discount factor by which we discount other potential beings in proportion to their dissimilarity to existing people. We should weight existing people's preferences somewhat higher than those of future descendants, whom in turn we weight higher than distant descendants and other more remote possible people.

comment by jefftk (jkaufman) · 2011-12-02T14:13:30.029Z · LW(p) · GW(p)

As you phrased it, life extension and replacement seem roughly similar to me. I don't feel the need to modify my utilitarianism to strongly prefer life extension. There are some differences, though:

  • Perhaps the later life years are less pleasant than the earlier ones? You're less physically able, more cynical, less open to new ideas? Or perhaps the later life years are more pleasant than the earlier ones? You've had the time to get deeply into subjects and achieve mastery, you could have some very strong old friendships, you have a better model of the world.
  • People who live longer might be better at solving the world's problems. Or having more different people might be better.
  • I think of the harm of death as being the removal of the potential for future joy combined with the suffering of those who remain, and replacement deals with the former. While death seems to cause pain to those who continue living in all societies, death of "old age" doesn't seem to cause too much suffering to other people.

I would definitely (selfishly) prefer life extension.

(I'm a total utilitarian)

comment by Hyena · 2011-11-30T22:41:07.824Z · LW(p) · GW(p)

This presumes that extending the life of an existing person by 100 years precludes the creation of a new person with a lifespan of 100 years. We will be motivated to prefer the former scenario because it is difficult for us to feel its relevance to the latter.

comment by Nick_Roy · 2011-11-30T08:45:02.678Z · LW(p) · GW(p)

I currently route around this by being an ethical egoist, though I admit that I still have a lot to learn when it comes to metaethics. (And I'm not just leaving it at "I still have a lot to learn", either -- I'm taking active steps to learn more, and I'm not just signalling that, and I'm not just signalling that I'm not signalling that, etc.!)

comment by [deleted] · 2011-11-30T02:09:08.943Z · LW(p) · GW(p)

Why does one have to be better than the other?

Replies from: Julia_Galef
comment by Julia_Galef · 2011-11-30T03:23:51.301Z · LW(p) · GW(p)

One doesn't have to be better than the other. That's what's in dispute.

I think making this comparison is important philosophically, because of the implications our answer has for other utilitarian dilemmas, but it's also important practically, in shaping our decisions about how to allocate our efforts to better the world.

comment by [deleted] · 2011-12-01T15:05:08.835Z · LW(p) · GW(p)

But in my thought experiment, average utility remains unchanged.

The average utility, counting only those two people, is unchanged (as long as we assume that life from 0-100 is as pleasurable as life from 100-200). But firstly, the utility of other humans should be taken into account: the loved ones of the person already living, the likely pleasure given to others by younger people in comparison to older people, the expected resources consumed, etc.

But perhaps your thought experiment supposes that these expected utility calculations all happen to be equal in either case, too. In that case, surely there is still your own utility to be taken into consideration!

I don't know much about utilitarianism (generally I regard all attempts at inventing moral formulae to describe human values as hopeless, because we aren't, as a matter of fact, reflectively consistent), but presumably in utilitarianism you are allowed to have aesthetic values apart from the value attached to pleasure and suffering in others (otherwise I conclude that utilitarianism is one of the less sensible such formulae). Therefore, if the utility of others is the same either way, your personal non-altruistic aesthetic values are the deciding factor (and the question of how these are to be balanced against the value attached to the utility of other people is irrelevant in this case). Clearly your non-altruistic values do prefer the idea of life extension, so I don't see any problem here.