To capture anti-death intuitions, include memory in utilitarianism
post by Kaj_Sotala · 2014-01-15T06:27:28.901Z · LW · GW · Legacy · 34 comments
EDIT: Mestroyer was the first one to find a bug that breaks this idea. Only took a couple of hours, that's ethics for you. :)
In the last Stupid Questions Thread, solipsist asked
Making a person and unmaking a person seem like utilitarian inverses, yet I don't think contraception is tantamount to murder. Why isn't making a person as good as killing a person is bad?
People raised valid points, such as ones about murder having generally bad effects on society, but most people probably have the intuition that murdering someone is bad even if the victim was a hermit whose death was never discovered by anyone. It just occurred to me that the way to formalize this intuition would also solve more general problems with how the utility functions in utilitarianism (which I'll shorten to UFU from now on) behave.
Consider these commonly held intuitions:
- If a person is painlessly murdered and a new (equally happy) person is instantly created in their place, this is worse than if there was a single person who lived for the whole time.
- If a living person X is painlessly murdered at time T, then this is worse than if X's parents had simply chosen not to have a child at time T-20, even though both acts would have resulted in X not existing at time T+1.
- If someone is physically dead, but not information-theoretically dead, and a close enough replica of them can be constructed, then bringing them back is better than creating an entirely new person.
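For concreteness, here is a minimal sketch of the kind of memory-carrying utility function I have in mind (the mechanism itself is quoted in the comments below; the class and names here are made up for illustration, and the one-point-per-state penalty follows that quoted wording):

```python
from dataclasses import dataclass

@dataclass
class State:
    alive: set        # ids of the people alive in this time-slice
    wellbeing: dict   # id -> well-being in this time-slice

class MemoryUFU:
    """A utility function that remembers everyone who was ever born."""
    def __init__(self, death_penalty=1.0):
        self.death_penalty = death_penalty
        self.ever_born = set()

    def observe_birth(self, person_id):
        self.ever_born.add(person_id)

    def utility(self, state: State) -> float:
        base = sum(state.wellbeing[p] for p in state.alive)
        dead = self.ever_born - state.alive   # remembered, but no longer alive
        return base - self.death_penalty * len(dead)
```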
34 comments
Comments sorted by top scores.
comment by Mestroyer · 2014-01-15T10:16:00.032Z · LW(p) · GW(p)
If we pick an appropriate value for the "not alive anymore" penalty, then it won't be so large as to outweigh all other considerations, but enough that situations with unnecessary death will be evaluated as clearly worse than ones where that death could have been prevented.
Under your solution, every life created implies infinite negative utility. Due to thermodynamics or whatever (big rip? other cosmological disaster that happens before heat death?) we can't keep anyone alive forever. No matter how slow the rate of disutility accumulation, the infinite time after the end of all sentience makes it dominate everything else.
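To make the divergence concrete (toy numbers of my own, not anything from the post): with any fixed per-timestep penalty per past death, the penalty term grows without bound as the horizon grows, so it eventually swamps any finite utility the life itself produced.

```python
penalty_rate = 1e-9   # disutility per timestep, per person who has died
life_utility = 1e6    # finite utility accrued during the life itself

def total(horizon_steps):
    return life_utility - penalty_rate * horizon_steps

print(total(1e15))   # 0.0 -- the penalty has already caught up
print(total(1e18))   # -999000000.0, and it keeps falling without bound
```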
If I understand you correctly, then your solution is that the utility function actually changes every time someone is created, so before that person is created, you don't care about their death. One weird result of this is that if there will soon be a factory that rapidly creates and then painlessly destroys people, we don't object (and while the factory is running, we feel terrible about everything that has happened in it so far, but we still don't care to stop it). Or to put it in less weird terms, we won't object to spreading some kind of poison which affects newly developing zygotes, reducing their future lifespans painlessly.
There's also the incentive for an agent with this system to self-modify to stop changing their utility function over time.
Replies from: Kaj_Sotala, philh, polarix
↑ comment by Kaj_Sotala · 2014-01-15T10:39:11.533Z · LW(p) · GW(p)
No matter how slow the rate of disutility accumulation, the infinite time after the end of all sentience makes it dominate everything else.
That's true, but note that if e.g. 20 billion people have died up to this point, then that penalty of -20 billion gets applied equally to every possible future state, so it won't alter the relative ordering of those states. So the fact that we're getting an infinite amount of disutility from people who are already dead isn't a problem.
Though now that you point it out, it is a problem that, under this model, creating a person who you don't expect to live forever has a very high (potentially infinite) disutility. Yeah, that breaks this suggestion. Only took a couple of hours, that's ethics for you. :)
If I understand you correctly, then your solution is that the utility function actually changes every time someone is created, so before that person is created, you don't care about their death.
That's an interesting idea, but it wasn't what I had in mind. As you point out, there are some pretty bad problems with that model.
Replies from: Gunnar_Zarncke, Ghatanathoah
↑ comment by Gunnar_Zarncke · 2014-01-15T12:22:15.856Z · LW(p) · GW(p)
Yeah, that breaks this suggestion.
It only breaks that specific choice of memory UFU. The general approach admits lots of consistent functions.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2014-01-15T13:08:04.082Z · LW(p) · GW(p)
That's true.
I wonder whether professional philosophers have made any progress with this kind of approach? At least in retrospect it feels rather obvious, but I don't recall hearing anyone mention something like this before.
Replies from: RichardChappell, Gunnar_Zarncke
↑ comment by RichardChappell · 2014-01-15T15:04:36.683Z · LW(p) · GW(p)
It's not unusual to count "thwarted aims" as a positive bad of death (as I've argued for myself in my paper Value Receptacles), which at least counts against replacing people with only slightly happier people (though still leaves open that it may be worthwhile to replace people with much happier people, if the extra happiness is sufficient to outweigh the harm of the first person's thwarted ends).
↑ comment by Gunnar_Zarncke · 2014-01-15T14:11:24.200Z · LW(p) · GW(p)
Philosophers are screwed nowadays. If they apply the scientific method and reductionism to social-science topics, they cut away too much. If they stay with vague notions which do not cut away the details, they are accused of being vague. The vagueness is there for a reason: it is a kind of abstraction of the essential complexity of the domain being abstracted.
↑ comment by Ghatanathoah · 2014-01-23T20:00:40.024Z · LW(p) · GW(p)
Though now that you point it out, it is a problem that, under this model, creating a person who you don't expect to live forever has a very high (potentially infinite) disutility. Yeah, that breaks this suggestion. Only took a couple of hours, that's ethics for you. :)
Oddly enough, right before I noticed this thread I posted a question about this on the Stupid Questions Thread.
My question, however, was whether this problem applies to all forms of negative preference utilitarianism. I don't know what the answer is. I wonder if SisterY or one of the other antinatalists who frequent LW does.
↑ comment by philh · 2014-01-15T14:03:09.184Z · LW(p) · GW(p)
I think we can add a positive term as well: we gain some utility for happiness that once existed but doesn't any more. E.g. we assign more utility to the state "there used to be a happy hermit, then she died" than the state "there used to be a sad hermit, then she died". For certain values, this would be enough to be better than the state "there has never been a hermit", which doesn't get the "dead hermit" loss, but also doesn't get the "past happiness" bonus.
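One way to see that such values exist (a toy scoring with made-up numbers):

```python
PAST_HAPPINESS_BONUS = 0.5   # per unit of happiness that once existed
DEATH_PENALTY = 10.0         # one-off loss for a death having occurred

def history_score(hermit_existed, total_happiness):
    if not hermit_existed:
        return 0.0
    return PAST_HAPPINESS_BONUS * total_happiness - DEATH_PENALTY

print(history_score(True, 100))   # happy hermit, then death:  40.0
print(history_score(True, 5))     # sad hermit, then death:    -7.5
print(history_score(False, 0))    # there was never a hermit:   0.0
```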
Replies from: Mestroyer
↑ comment by Mestroyer · 2014-01-15T21:27:08.330Z · LW(p) · GW(p)
Hmm. You need to avoid the problem where you might want to exploit the past happiness bonus infinitely. The past happiness bonus needs to scale at least linearly with the duration of the life lived, or else we will want to create as many short happy lives as we can, so that we can collect as many infinite durations of past-happiness bonus as possible.
Say our original plan was that for every person who has died, we continue accruing utility forever at a rate equal to the average rate they caused us to accrue it over their life. Then making this adjustment means multiplying that average by their lifespan. Which is equivalent to everything that happens causing utility also starting a continuous stream of utility lasting forever, irrespective of the person who experienced it. But that is equivalent to taking a utilitarianism that doesn't care about death, scaling everything in it by a factor of t, and taking the limit as t goes to infinity. Which is equivalent to ordinary utilitarianism, since no big scaling factor applied to everything at once will change anything.
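Roughly, with T standing for the far-future horizon and u(t) for the utility rate during a life of length ℓ (my own sketch of the argument, not a rigorous proof):

```latex
% Each bit of utility u(t)dt earned at time t effectively starts a stream of
% the same rate lasting until the horizon T, contributing u(t)(T - t)dt, so
\int_0^{\ell} u(t)\,(T - t)\,dt \;\approx\; T \int_0^{\ell} u(t)\,dt
\qquad \text{for } T \gg \ell .
% Every life's total gets multiplied by the same factor T, so the ranking of
% outcomes is the same as under ordinary utilitarianism.
```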
By the way, if one of these ideas works, we should call it WWDILEIEU (What We Do In Life Echoes In Eternity Utilitarianism) . Or if that's too long, then Gladiator Utilitarianism.
Let me make an attempt of my own...
What if, after a person's death, we accumulate utility at a rate equal to the average rate at which they accumulated it over their lifetime, multiplied by the square of the duration of their lifetime?
Then we want happy lifetimes to be as long as possible, and we aren't afraid to create new people if their lives will be good. Although...
Perhaps if someone has already suffered enough, but their life is now going to turn positive, and living extremely long is not a possibility for them, we'll want to kill them to keep their past suffering from accumulating any more scaling factor.
If there are people whose lifespans we can't change, all of them mortal, and some of their lifespans are longer than others, and we have limited resources to distribute one-hour periods of increased happiness (much shorter than any of the lifespans), we will drastically favor those whose lifespans are longer.
If you have a limited supply of "lifespan juice", which when applied to someone increases their lifespan by a fixed time per liter, and a certain population already alive, each of whom has a fixed and equal quality of life, you want to give all the juice to one person. Dividing it up is as bad as "dividing a single person up" by killing them partway through their otherwise full lifespan and replacing them with a new person.
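A quick numeric check of that last point (toy numbers of my own, using the rate-times-lifespan-squared rule from above):

```python
def life_credit(avg_rate, lifespan):
    # Post-death accrual scales with avg_rate * lifespan**2, so the total
    # credit for a life is convex in its lifespan.
    return avg_rate * lifespan ** 2

# Two people already alive with 10 years each, plus 100 years of "lifespan juice":
split        = life_credit(1.0, 60) + life_credit(1.0, 60)    # 7200.0
one_gets_all = life_credit(1.0, 110) + life_credit(1.0, 10)   # 12200.0
print(split, one_gets_all)   # concentrating the juice scores higher
```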
comment by Viliam_Bur · 2014-01-15T13:20:57.223Z · LW(p) · GW(p)
If a living person X is painlessly murdered at time T, then this is worse than if X's parents had simply chosen not to have a child at time T-20, even though both acts would have resulted in X not existing at time T+1.
Example 1:
On a planet Utopia, people live 1000 years in perfect health and happiness. There is no war, starvation, pain, or other things typical on the planet Earth. Just a life full of pleasures, and then a quick and painless death. Mr. and Ms. A decided to have children.
Example 2:
Mr. and Ms. B think that babies are cute, but teenagers are super annoying. They don't care about being alone when they are old; they just want to maximize their pleasures of parenthood. They decided to have a lot of babies, give them a perfect life while they are small, and kill them painlessly when they start being annoying.
Both examples could be seen as instances of the same problem, namely -- whether it is morally good to create a happy life that must end at some time -- and yet, my feelings about them are very different.
comment by James_Miller · 2014-01-15T07:05:35.054Z · LW(p) · GW(p)
What your article basically shows is that to keep utilitarianism consistent with our moral intuition we have to introduce a fudge factor that favors people (such as us) who are or were alive. Having made this explicit next we should ask if this preference is morally justified. For me, however, it doesn't seem all that far from someone saying "I'm a utilitarian but my intuition strongly tells me that people with characteristic X are more important than everyone else so I'm going to amend utilitarianism by giving greater weight to the welfare of X-men." Although since the "Repugnant Conclusion" has never seemed repugnant to me, I'm probably an atypical utilitarian.
Replies from: David_Gerard, Ghatanathoah, Kaj_Sotala
↑ comment by David_Gerard · 2014-01-15T09:00:29.497Z · LW(p) · GW(p)
In the limit case, "I like being alive and regard it as a good thing, so I assume others do about themselves too."
↑ comment by Ghatanathoah · 2014-01-23T20:18:46.715Z · LW(p) · GW(p)
For me, however, it doesn't seem all that far from someone saying "I'm a utilitarian but my intuition strongly tells me that people with characteristic X are more important than everyone else so I'm going to amend utilitarianism by giving greater weight to the welfare of X-men."
There is a huge difference between discriminatory favoritism and valuing continued life over adding new people.
In discriminatory favoritism, people have a property that makes them morally valuable (e.g. the ability to have preferences, or to feel pleasure and pain). They also have an additional property that does not affect their morally valuable property in any significant way (e.g. skin color, family relations). Discriminatory favoritism argues that this additional property means that the welfare of these people is less important, even though it does not affect the morally valuable property in any way.
By contrast, in the case of valuing continuing life over creating new people, the additional property (nonexistence) that the new people have does have a significant effect on their morally significant property. Last I checked, never having existed had a large effect on your ability to have preferences and your ability to feel pleasure and pain. If the person did exist in the past, or will exist in the future, that will change; but if they never existed, don't exist, and never will exist, then I think that is significant. Arguing that it shouldn't be is like arguing you shouldn't break a rock because "if the rock could think, it wouldn't want you to."
We can illustrate it further by thinking about individual preferences instead of people. If I become addicted to heroin I will have a huge desire to take heroin, far stronger than all the desires I have now. This does not make me want to be addicted to heroin. At all. I do not care in the slightest that the heroin-addicted me would have a strong desire for heroin. Because that desire does not exist and I intend to keep it that way. And I see nothing immoral about that.
Replies from: James_Miller
↑ comment by James_Miller · 2014-01-23T22:22:30.447Z · LW(p) · GW(p)
does have a significant effect on their morally significant property.
But not in any absolute sense, just because this is consistent with your moral intuition.
Last I checked, never having existed had a large effect on your ability to have preferences and your ability to feel pleasure and pain
Not relevant because we are considering bringing these people into existence at which point they will be able to experience pain and pleasure.
I do not care in the slightest that the heroin addicted me would have a strong desire for heroin.
Imagine you know that one week from now someone will force you to take heroin and you will become addicted. At this point you will be able to have an OK life if given a regular amount of the drug but will live in permanent torture if you never get any more of the substance. Would you pay $1 today for the ability to consume heroin in the future?
Replies from: Ghatanathoah
↑ comment by Ghatanathoah · 2014-01-24T04:22:36.728Z · LW(p) · GW(p)
Not relevant because we are considering bringing these people into existence at which point they will be able to experience pain and pleasure.
Yes, but I would argue that the fact that they can't actually do that yet makes a difference.
Imagine you know that one week from now someone will force you to take heroin and you will become addicted. At this point you will be able to have an OK life if given a regular amount of the drug but will live in permanent torture if you never get any more of the substance. Would you pay $1 today for the ability to consume heroin in the future?
Yes, if I was actually going to be addicted. But it was a bad thing that I was addicted in the first place, not a good thing. What I meant when I said I "do not care in the slightest" was that the strength of that desire was not a good reason to get addicted to heroin. I didn't mean that I wouldn't try to satisfy that desire if I had no choice but to create it.
Similarly, in the case of adding lots of people with short lives, the fact that they would have desires and experience pain and pleasure if they existed is not a good reason to create them. But it is a good reason to try to help them extend their lives, and lead better ones, if you have no choice but to create them.
Thinking about it further, I realized that you were wrong in your initial assertion that "we have to introduce a fudge factor that favors people (such as us) who are or were alive." The types of "fudge factors" that are being discussed here do not, in fact do that.
To illustrate this, imagine Omega presents you with the following two choices:
1. Everyone who currently exists receives a small amount of additional utility. Also, in the future the number of births in the world will vastly increase, and the lifespan and level of utility per person will vastly decrease. The end result will be the Repugnant Conclusion for all future people, but existing people will not be harmed; in fact, they will benefit from it.
2. Everyone who currently exists loses a small amount of their utility. In the future far fewer people will be born than in Option 1, but they will live immensely long lifespans full of happiness. Total utility is somewhat smaller than in Option 1, but concentrated in a smaller number of people.
Someone using the fudge factor Kaj proposes in the OP would choose 2, even though it harms every single existing person in order to benefit people who don't exist yet. It is not biased towards existing persons.
I basically view adding people to the world in the same light as I view adding desires to my brain. If a desire is ego-syntonic (e.g. a desire to read a particularly good book) then I want it to be added and will pay to make sure it is. If a desire is ego-dystonic (like using heroin) I want it not to be added and will pay to make sure it isn't. Similarly, if adding a person makes the world more like my ideal world (e.g. a world full of people with long eudaemonic lives) then I want that person to be added. If it makes it less like my ideal world (e.g. the Repugnant Conclusion) I don't want that person to be added and will make sacrifices to stop it (for instance, I will spend money on contraceptives instead of candy).
As long as the people we are considering adding are prevented from ever having existed, I don't think they have been harmed in the same way that discriminating against an existing person for some reason like skin color or gender harms someone, and I see nothing wrong with stopping people from being created if it makes the world more ideal.
Of course, needless to say, if we fail and these people are created anyway, we have just as much moral obligation towards them as we would towards any preexisting person.
Replies from: James_Miller
↑ comment by James_Miller · 2014-01-24T13:47:11.975Z · LW(p) · GW(p)
I basically view adding people to the world in the same light as I view adding desires to my brain.
Interesting way to view it. I guess I see a set of all possible types of sentient minds with my goal being to make the universe as nice as possible for some weighted average of the set.
Replies from: Ghatanathoah
↑ comment by Ghatanathoah · 2014-01-24T20:30:22.282Z · LW(p) · GW(p)
I guess I see a set of all possible types of sentient minds with my goal being to make the universe as nice as possible for some weighted average of the set.
I used to think that way, but it resulted in what I considered to be too many counterintuitive conclusions. The biggest one, which I absolutely refuse to accept, is that we ought to kill the entire human race and use the resources doing so would free up to replace them with creatures whose desires are easier to satisfy. Paperclip maximizers or wireheads, for instance. Humans have such picky, complicated goals, after all... I consider this conclusion roughly a trillion times more repugnant than the original Repugnant Conclusion.
Naturally, I also reject the individual form of this conclusion, which is that we should kill people who want to read great books, climb mountains, run marathons, etc. and replace them with people who just want to laze around. If I was given a choice between having an ambitious child with a good life, or an unambitious child with a great life, I would pick the ambitious one, even though the total amount of welfare in the world would be smaller for it. And as long as the unambitious child doesn't exist, never existed, and never will exist I see nothing wrong with this type of favoritism.
↑ comment by Kaj_Sotala · 2014-01-15T07:24:21.401Z · LW(p) · GW(p)
Indeed, I believe that I mostly disagree with the anti-death intuition as well. (But disagreeing with it doesn't mean that formalizing it wouldn't have been an interesting exercise. :))
comment by private_messaging · 2014-01-15T16:22:11.507Z · LW(p) · GW(p)
I've written earlier that utilitarianism* completely breaks down if you try to go specific enough.
Just consider brain simulators, where a new state is computed from the current state, and then the current state is overwritten. So the utility of the new state has to be greater than the utility of the current state. At which point you'd want to find a way to compute the maximum-utility state without computing the intermediate steps. The trade-offs between the needs of different simulated minds of different ages also end up working wrongly.
* I assume that utilitarianism has some actual content, beyond trivialities such as assigning utility 1 to actions prescribed by some kind of virtue ethics and 0 to all other actions, and then claiming that it is utilitarian.
comment by Luke_A_Somers · 2014-01-15T13:15:08.388Z · LW(p) · GW(p)
Seems to me like having a continuance bonus rather than a death penalty would make more sense. And of course you'd need to encode a mutual information penalty so you don't overwrite everyone with the same oldest person.
At each time, score 1 for each distinct memory in a living person? (Not a complete UFU, just one term.)
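A minimal sketch of that one term (my own hypothetical encoding, with each living person represented as a set of memory identifiers; taking the union across people is what provides the mutual-information-style discount, so a copy of an existing person adds nothing):

```python
def distinct_memory_term(living_people):
    """living_people: iterable of sets, one set of memory ids per living person."""
    all_memories = set()
    for memories in living_people:
        all_memories |= memories
    return len(all_memories)   # score 1 per distinct memory held by someone alive

# Two copies of the same person add nothing over one copy:
print(distinct_memory_term([{1, 2, 3}, {1, 2, 3}]))   # 3
print(distinct_memory_term([{1, 2, 3}, {4, 5}]))      # 5
```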
Replies from: Baughn
↑ comment by Baughn · 2014-01-15T17:57:08.130Z · LW(p) · GW(p)
You can keep patching the function, but someone will likely find a way around it... or, if not, it'll be some time before we feel safe that no one will.
It's not the same function we're actually implementing, though.
Replies from: Luke_A_Somers
↑ comment by Luke_A_Somers · 2014-01-15T19:30:58.343Z · LW(p) · GW(p)
Successive approximation? At least this keeps what we're looking for without making having children as bad as murder.
Replies from: Baughn
↑ comment by Baughn · 2014-01-15T21:16:03.235Z · LW(p) · GW(p)
Well...
Honestly, I'm not quite sure about that one. Making a child, knowing ve'll eventually die? When there are probably other universes in which that is not the case? I don't feel very safe in judging that question, one way or the other.
Replies from: Luke_A_Somers
↑ comment by Luke_A_Somers · 2014-01-15T21:28:34.285Z · LW(p) · GW(p)
1) I'm a lot less confident in the existence of true immortality. The second law of thermodynamics is highly generalizable, and to get around it you need infinite enthalpy sources.
2) I like living enough to prefer it and then dying to never living. I think I can give my kids enough of a head start that they'll be able to reach the same choice.
comment by AlexMennen · 2014-01-15T22:13:33.409Z · LW(p) · GW(p)
incorporating a history into the utility function.
That should definitely be part of the solution. In fact, I would say that utility functions defined over individual world-states, rather than entire future-histories, should not have ever been considered in the first place. The effects of your actions are not restricted to a single time-slice of the universe, so you cannot maximize expected utility if your utility function takes only a single time-slice as input. (Also because special relativity.)
Suppose that a person X is born at time T: we enter the fact of "X was born" into the utility function's memory. From now on, for every future state, the UF checks whether or not X is still alive. If yes, good; if not, that state loses one point of utility.
maintaining a memory of the peak well-being that anyone has ever had, and if they fall below their past peak well-being, applying the difference as a penalty. So if X used to have 50 points of well-being but now only has 25, then we apply an extra -25 to the utility of that scenario.
These are kludge-y answers to special cases of a more general issue: we care about the preferences existing people have for the future. Presumably X himself would prefer a future in which he keeps his 50 points of well-being over a future where he has 25 and Y pops into existence with 25 as well, whereas Y is not yet around to have a preference. I don't see what the peak well-being that X has ever experienced has to do with it. If we were considering whether to give X an additional 50 units of well-being (for a total of 100), or bring into existence Y with 50 units of well-being, it seems to me that exactly the same considerations would come into play.
comment by Gunnar_Zarncke · 2014-01-15T07:08:51.604Z · LW(p) · GW(p)
We can fix this by incorporating a history into the utility function.
I think this is a sensible model, as we value life exactly because of its continuity over time.
This does complicate matters a lot, though, because it is not clear how the history should be taken into account. At least no obvious model suggests itself, as it does for the UFUs (except for the trivial one that ignores the history).
Your examples sound plausible, but I guess that trying to model human intuitions about this leads to very complex functions.
Replies from: Calvin
↑ comment by Calvin · 2014-01-15T07:27:46.796Z · LW(p) · GW(p)
Is it just me, or is this somewhat contrary to the normal approach taken by some utilitarians? I mean, here we are tweaking the models, while elsewhere some apparent utilitarians seem to approach it from the other direction:
My intuition does not match the current model, so I am making an incorrect choice and need to change my intuition, become more moral, and act according to the preferred values.
Tweaking the model seems several orders of magnitude harder but, I guess, also several orders of magnitude more rewarding. I mean, I would love to see a self-consistent moral framework that maps to my personal values, but I assume it is not a goal that is easy to achieve, unless we include egoism, I guess.
comment by [deleted] · 2014-01-15T13:13:09.954Z · LW(p) · GW(p)
If a living person X is painlessly murdered at time T, then this is worse than if X's parents had simply chosen not to have a child at time T-20, even though both acts would have resulted in X not existing at time T+1.
I don't share this intuition.
comment by Richard_Kennaway · 2014-01-15T09:59:36.241Z · LW(p) · GW(p)
We can fix this by incorporating a history into the utility function. ...
Or tl;dr: you are changing the utility function to fit it more closely to your experienced desires. Well, your utility function can be anything at all, it's "not up for grabs", as we say, which means that it is totally up for grabs by the person whose function it is.
But there's a meta-issue here. If you have a moral theory, whether utilitarianism of any sort or something else, and you find it yields conclusions that you find morally repugnant, from what standpoint can you resolve the conflict?
You can't just say "the theory says this, therefore it's right," because where did the theory come from? Neither can you say "my inner moral sense trumps the theory," or what was the theory for? Modus ponens versus modus tollens. When a theory derived from your intuitions has implications that contradict your intuitions, how do you resolve the conflict? Where can you stand, to do so?
As Socrates might ask a modern Euthyphro, "are the injunctions of your theory good because they follow from the theory, or do they follow from the theory because they are good?" In the first case, whence the theory? In the second, whence the good?
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2014-01-15T14:47:07.432Z · LW(p) · GW(p)
Personally I just take this as a bit of intellectual entertainment. It's fun to try to formalize moral intuitions and then look for the part where the formalization breaks, but that's it - I don't expect it to actually change anything about my behavior or anything like that.
comment by IainM · 2014-01-15T19:48:41.572Z · LW(p) · GW(p)
Why the time factor? I don't find it particularly matches my intuitions directly, and as pointed out it makes having children arbitrarily bad (which also doesn't match my intuitions). Say we give each person's death a particular negative utility - histories in which they die get that single penalty regardless of time (though other independent time factors might apply, such as the sadness of their loved ones). Does that fit any better or worse with your conception of death morality?
(Incidentally, I was thinking about this just a few hours ago. Interesting how reading the same comment can trigger similar lines of thought.)
comment by Calvin · 2014-01-15T06:53:05.223Z · LW(p) · GW(p)
The devil, as always, seems to lie in the details, but as I see it, some people may see it as a feature:
Assume I am a forward-looking agent who aims to maximize long-term, not short-term, utility.
What is the utility of a person who is currently being preserved in suspended animation with hope of future revival? Am I penalized as much as for a person who was, say, cremated?
Are we justified in making all current humans unhappy (without sacrificing their lives, of course), so that means of reviving dead people are created faster and we can stop being penalized for their ended lifespans?
Wouldn't it be only prudent to stop the creation of new humans until we can ensure their lifespans will reach the end of the universe, to avoid taking negative points?
comment by [deleted] · 2014-01-15T07:08:11.478Z · LW(p) · GW(p)
Antinatalism is the claim that because life will always include suffering and then death, conception is as bad as murder.