Is Equality Really about Diminishing Marginal Utility?

post by Ghatanathoah · 2012-12-04T23:03:31.297Z · LW · GW · Legacy · 45 comments

In Robert Nozick's famous "Utility Monster" thought experiment, he proposes a creature that does not receive diminishing marginal utility from resource consumption, and argues that this poses a problem for utilitarian ethics.  Why?  Utilitarian ethics, while highly egalitarian in real life situations, does not place any intrinsic value on equality.  The reason utilitarian ethics tend to favor equality is that human beings seem to experience diminishing returns when converting resources into utility.  Egalitarianism, according to this framework, is good because sharing resources between people reduces the losses from diminishing returns and so maximizes the total amount of utility people generate, not because it's actually good for people to have equal levels of utility.

The problem the Utility Monster poses is that, since it does not receive diminishing marginal utility, there is no reason, under a traditional utilitarian framework, to share resources between it and the other inhabitants of the world it lives in.  It would be completely justified in killing other people and taking their things for itself, or enslaving them for its own benefit.  This seems counter-intuitive to Nozick and many other people.

There seem to be two possible reasons for this.  One, of course, is that most people's intuitions are wrong in this particular case.  The reason I am interested in exploring, however, is the other one: namely, that equality is valuable for its own sake, not just as a side effect of diminishing marginal utility.

Now, before I go any further I should clarify what I mean by "equality."  There are many different types of equality, not all of which are compatible with each other.  What I mean is equality of utility: everyone has the same level of satisfied preferences, happiness, and whatever else "utility" consists of.  This is not the same thing as fiscal equality, as some people may differ in their ability to convert money and resources into utility (people with horrible illnesses, for instance, are worse at doing so than the general population).  It is also important to stress that "lifespan" should be factored in as part of the utility that is to be equalized (i.e. killing someone increases inequality).  Otherwise one could achieve equality of utility by killing all the poor people.

So if equality is valuable for its own sake, how does one factor it into utilitarian calculations?  It seems wrong to replace utility maximization with equality maximization.  That would imply that a world where everyone had 10 utilons and a world where everyone had 100 utilons are morally identical, which seems wrong, to say the least.

What about making equality lexically prior to utility maximization?  That seems just as bad.  It would imply, among other things, that in a stratified world where some people have far greater levels of utility than others, it would be morally right to take an action that harmed every single person in the world, as long as it hurt the best off slightly more than the worst off.  That seems insanely wrong.  The Utility Monster thought experiment already argues against making utility maximization lexically prior to equality.

So it seems like the best option would be to treat maximizing utility and increasing equality as two separate values.  How, then, to trade one off against the other?  If the trade-off is some sort of straight, one-to-one exchange, then this does nothing to dismiss the problem of the Utility Monster.  A monster good enough at utility generation could simply produce so much utility that no amount of equality could equal its output.

The best solution I can see would be to have utility maximization and equality exhibit diminishing returns relative to each other.  This would mean that in a world with high equality but low utility, raising utility would be more important, while in a world of low equality and high utility, establishing equality would be more important.
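
To make this concrete, here is a minimal sketch of what such a two-component objective could look like.  The functional form, the square-root transforms, the equality weight, and the specific numbers are all my own illustrative assumptions, not anything argued for above; the only point is the structural one, that whichever component is currently lower dominates the marginal value of improving it.

```python
import math

# Toy social objective: concave (sqrt) in both total utility and an equality
# score in [0, 1], so the scarcer component has the higher marginal value.
# The weight of 30 on equality is arbitrary.
def welfare(total_utility, equality, equality_weight=30.0):
    return math.sqrt(total_utility) + equality_weight * math.sqrt(equality)

# Compare two interventions: one adds 50 utilons of total utility, the other
# adds 0.05 to the equality score.
def gains(total_utility, equality, d_utility=50.0, d_equality=0.05):
    base = welfare(total_utility, equality)
    return (welfare(total_utility + d_utility, equality) - base,
            welfare(total_utility, min(equality + d_equality, 1.0)) - base)

# High equality, low utility: raising utility wins (+2.25 vs +0.78).
print(gains(100.0, 0.9))
# Low equality, high utility: raising equality wins (+0.25 vs +1.58).
print(gains(10_000.0, 0.2))
```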

This solution deals with the utility monster fairly effectively.  No matter how much utility the monster can generate, it is always better to share some of its resources with other people.

Now, you might notice that this doesn't eliminate every aspect of the utility monster problem.  As long as the returns generated by utility maximization do not diminish to zero, you can always posit an even more talented monster.  And you can then argue that the society created by having that monster enslave the rest of the populace is better than one where a less talented monster shares with the rest of the populace.  However, this new society would instantly become better if the new Utility Monster were forced to share its resources with the rest of the population.

This is a huge improvement over the old framework.  Ordinary utility-maximizing ethics would not merely say that a world where a Utility Monster enslaved everyone else might be a better world.  It would say that such a world was the optimal world, the best possible world given the constraints the inhabitants face.  Under this new ethical framework, however, that is never the case.  The optimal world, under any given set of constraints, is one where a utility monster shares with the rest of the population.

In other words, under this framework, if you were to ask, "Is it good for a utility monster to enslave the rest of the population?" the answer would always be "No."

Obviously the value of equality has many other aspects to be considered.  For instance, is it better described by traditional egalitarianism, or by prioritarianism?  Values are often more complex than they first appear.

It also seems quite possible that there are other facets of value besides maximizing utility and equality of utility.  For instance, total and average utilitarianism might be reconciled by making them two separate values that are both important.  Other potential candidates include prioritarian concerns (if they are not included already), number of worthwhile lives (most people would consider a world full of people with excellent lives better than one inhabited solely by one ecstatic utility monster), consideration of prior-existing people, and perhaps many, many more. As with utility and equality, these values would have diminishing returns relative to each other, and an optimum society would be one where all receive some measure of consideration.

An aside.  This next section is not directly related to the rest of the essay, but develops the idea in a direction I thought was interesting:

It seems to me that the value of equality could be the source of a local disagreement in population ethics. Several people (Robin Hanson, most notably) have argued that it would be highly desirable to create huge numbers of poor people with lives barely worth living, and that this may well be better than having a smaller, wealthier population. Many other people consider this to be a bad idea.

The unspoken assumption in this argument is that multiple lives barely worth living generate more utility than a single very excellent life. At first this seems like an obvious truth, based on the following chain of logic:

1. It is obviously wrong for Person A, who has a life barely worth living, to kill Person B, who also has a life barely worth living, and use B's property to improve their own life.

2. The only reason something is wrong is that it decreases the level of utility.

3. Therefore, killing Person B must decrease the level of utility.

4. Therefore, two lives barely worth living must generate more utility than a single excellent life.

However, if equality is valued for its own sake, then the reason it is wrong to kill Person B might be the vast inequality in various aspects of utility (lifespan, for instance) that B's death would create between A and B.

This means that a society that has a smaller population living great lives might very well be generating a much larger amount of utility than a larger society whose inhabitants live lives barely worth living.

45 comments

Comments sorted by top scores.

comment by CarlShulman · 2012-12-04T23:46:46.644Z · LW(p) · GW(p)

What if the utility monster had a highly parallel brain, comprising 10^40 separate processes, each of which was of humanlike intelligence, that collectively operated the monster body? As it consumes more people it is able to generate/sustain/speed up more internal threads (somewhat similar to what might happen if whole brain emulations or AIs are much less energy-intensive than humans).

Then equality considerations would also favor feeding humanity to the monster. Would you want to feed it in that case?

If not, the objection may simply not be about some impersonal feature like aggregate happiness or equality. An alternative place to look would be the idea that morality partly reflects efficient rules for social cooperation. Ex ante, a large majority of people can agree to social insurance because they or their loved ones might need it, or to reduce the threat of the disenfranchised poor. But unless one takes a strongly Rawlsian position ("I might have been a utility monster mental process"), endorsing the pro-utility-monster stance is predictably worse for everyone except the utility monster, so they're reluctant.

Replies from: Viliam_Bur, Ghatanathoah, Multiheaded
comment by Viliam_Bur · 2012-12-05T11:37:39.606Z · LW(p) · GW(p)

Upvoted. Before talking about a "utility monster" we should try to imagine a realistic one. One where we could believe that the utility is really increasing without diminishing returns.

Because if I can't imagine a real utility monster, then my intuitions about it are really intuitions about something that is not a utility monster, but claims to be one to get more resources. And obviously the correct solution in that case would be to oppose the claims of such a fake utility monster. Now our brains just need to invent some rationalization that does not include saying that the utility monster is fake.

comment by Ghatanathoah · 2012-12-05T00:16:02.548Z · LW(p) · GW(p)

What if the utility monster had a highly parallel brain, comprising 10^40 separate processes, each of which was of humanlike intelligence, that collectively operated the monster body?

If that was the case it would not be a utility monster. It would be a bunch of people piloting a giant robot that is capable of birthing more people. A utility monster is supposed to be one distinct individual.

Then equality considerations would also favor feeding humanity to the monster. Would you want to feed it in that case?

This is equivalent to asking me if I would kill someone in order to cause a new person to be born. No, I would not do that. This is probably partly due to one of those other possible values I discussed, such as consideration for prior existing people, or valuing a high average utility that counts dead people towards the average.

It may also be due to the large inequality in lifespan between the person who was devoured and the new people their death created.

An alternative place to look would be the idea that morality partly reflects efficient rules for social cooperation.

I don't know. If I were given the choice by Omega to create one of two worlds, one with a cannibalistic utility monster, and one with a utility monster that shared with others, and were assured that neither I, nor anyone I knew, nor anyone else on Earth, would ever interact with these worlds again, I still think that I would be motivated by my moral convictions to choose the second world.

You could chalk that up to signalling or habit forming, but I don't think that's it either. If Omega told me that it would erase the proceedings from my brain, so it would not contribute to forming good habits or allow me to signal, I'd still believe I had a moral duty to choose the second world.

Replies from: DanielVarga
comment by DanielVarga · 2012-12-05T20:26:53.471Z · LW(p) · GW(p)

If that was the case it would not be a utility monster. It would be a bunch of people piloting a giant robot that is capable of birthing more people. A utility monster is supposed to be one distinct individual.

Your ethical theory is in deep trouble if it depends on a notion of 'distinct individual' in any crucial way. It is easy to imagine scenarios where there is a continuous path from robot-piloting people to one giant hive mind. (Kaj wrote a whole paper about such stuff: Coalescing minds: Brain uploading-related group mind scenarios) Or we can split brain hemispheres and give both of them their own robotic bodies.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-12-05T21:15:01.981Z · LW(p) · GW(p)

I imagine it is possible to develop some ethical theory that could handle creatures capable of merging and splitting. One possibility might be to count "utility functions" instead of individuals. This would, of course, result in weird questions, like whether two people's preferences stop counting when they merge and then count again when they split. But at least it would stop someone from giving themselves a moral right to everything by making enough ems of themself.

Again, this idea probably has problems that need to be worked out. I very much doubt that I could figure out all the ethical implications in one response when Kaj wasn't able to in a huge paper. But I don't think it's an insurmountable problem.

comment by Multiheaded · 2012-12-05T19:44:56.989Z · LW(p) · GW(p)

"I might have been a utility monster's mental process!!!"

Maybe the utilitarians around here could cash up for a kickstarter of an indie B-movie with a title like this? It would make for good propaganda.

comment by Oligopsony · 2012-12-05T05:22:36.276Z · LW(p) · GW(p)

If you don't believe in qualia, what does "the Utility Monster's positive utility outweighs everyone else's misery" mean?

If one is a preference utilitarian, it means something like: "any given (or at least the modal) person would be willing to accept near-certain misery for a 1/7 billion shot at being the Utility Monster - that's how preferable being the Utility Monster is." In this case, the solution is simple: yes, we should feed the Utility Monster.

You may not be able to imagine any sort of experience for which humans would have that preference. If so, Utility Monsters are impossible and irrelevant.

As I said in another thread, I do think political concerns for equality are basically not concerned with hedonics. Whether this matters for your point or not depends on whether your concept of utility is preferential or hedonic.

Replies from: roystgnr, Ghatanathoah, Eugine_Nier
comment by roystgnr · 2012-12-06T19:29:08.364Z · LW(p) · GW(p)

This is probably due to my own ignorance, but I've only seen "preference utilitarianism" used to denote the idea that an individual's utility should be calculated from their own preferences (as opposed to some external measure of their happiness, virtue, or whatever). Is it standard terminology to use the term to refer to this way of making interpersonal calculations of utility as well?

In any case, isn't there a problem with making this technique well-defined? If I would prefer to be me instead of my neighbor, then we'd conclude that I have higher utility, but if he would also prefer to be himself instead of me, then we'd reach the contradictory conclusion that he has higher utility - and yet such pairs of preferences may hold simultaneously more often than not!

comment by Ghatanathoah · 2012-12-05T19:19:52.285Z · LW(p) · GW(p)

If you don't believe in qualia, what does "the Utility Monster's positive utility outweighs everyone else's misery" mean?

The traditional depiction of a utility monster is a creature with emotions so intense that they overshadow everyone else's. Obviously this depiction doesn't work under a preference utilitarian framework.

But it might also be possible to conceive of an entity so good at converting resources into satisfied preferences that it would count as a Utility Monster under that framework. Amartya Sen called such an entity a "pleasure wizard." The Monster might be able to do this because it is a superintelligence, or it might have a very, very long life span.

It's probably easier to imagine if you consider a "disutility monster," an entity that is worse at converting resources into satisfied preferences than a normal person. For instance, people with severe illnesses need thousands of dollars to satisfy basic preferences, such as not dying, not pooping blood, walking, and so on, which other people can satisfy nearly for free.

If one is a preference utilitarian, it means something like: "any given (or at least the modal) person would be willing to accept near-certain misery for a 1/7 billion shot at being the Utility Monster

That is a good point. I wonder if positing a sufficiently talented utility monster would count as Pascal's Mugging.

I'm also wondering how reliable something like "being willing to pay for a chance of being a utility monster" is as a measure of utility. If probability is in the mind, then I know ahead of time that I already have a 100% chance of not being the utility monster, owing to the rather obvious fact that I am not the utility monster. But it's quite possible that I don't understand probability correctly; I've always had trouble with math.

comment by Eugine_Nier · 2012-12-05T06:41:23.643Z · LW(p) · GW(p)

If one is a preference utilitarian, it means something like: "any given (or at least the modal) person would be willing to accept near-certain misery for a 1/7 billion shot at being the Utility Monster - that's how preferable being the Utility Monster is." In this case, the solution is simple: yes, we should feed the Utility Monster.

What does this mean if you don't believe in qualia?

comment by handoflixue · 2012-12-05T01:21:46.632Z · LW(p) · GW(p)

Heh, relativistic effects on morality.

To elaborate: Newtonian physics works within our "default" range of experience. If you're going at 99.99% of c, or are dealing with electrons, or a Dyson Sphere, then you'll need new models. For the most part, our models of reality have certain "thresholds", and you have to use different models on different sides of that threshold.

You see this in simple transitions like liquid <-> solid, and you see this pretty much any time you feed in incredibly small or large numbers. XKCD captures this nicely :)

So... the point? We shouldn't expect our morality to scale past a certain situation, and in fact it is completely reasonable to assume that there is NO model that covers both normal human utilities AND utility monsters.

Replies from: Ghatanathoah, None
comment by Ghatanathoah · 2012-12-05T01:28:56.427Z · LW(p) · GW(p)

That's a really great point. Do you think that attempts to create some sort of pluralistic consequentialism that tries to cover these huge situations more effectively, like I am doing, are a worthwhile effort, or do you think the odds of there being no model are high enough that the effort is probably wasted?

Replies from: Pentashagon
comment by Pentashagon · 2012-12-05T09:51:19.491Z · LW(p) · GW(p)

It's worth pointing out that relativity gives the right answers at 0.01% of light speed too; it just takes more computation to get them. A more complex model of morality that gives the same answers to our simple questions as our currently held system of morals seems quite desirable.

comment by [deleted] · 2012-12-07T07:17:40.245Z · LW(p) · GW(p)

We shouldn't expect our morality to scale past a certain situation

Indeed, it would be a little weird if it did, though I suppose that depends on what specific set of behaviors and values one chooses to draw the morality box around, too -- I'm kind of wondering if "morality" is a red herring, although it's hard to find the words here. In local lingo, I'm sort of thinking "pebblesorters", as contrasted to moral agents, might be about as misleading as "p-zombies vs conscious humans."

comment by kilobug · 2012-12-05T09:00:43.928Z · LW(p) · GW(p)

Utilitarian ethics, while highly egalitarian in real life situations, does not place any intrinsic value on equality.

I don't agree with that. Utilitarian ethics don't specify how the utility function is calculated, especially how you make the aggregate function from all the individual utility. You can very well decide to use "average * gini" or any other compound formula that factors in equality, and you'll still have a utilitarian ethics.

The "how to compute the aggregate" to me is one of the toughest problems left in utilitarian ethics, I don't see any aggregate (average, sum, median, average * gini, ...) which doesn't lead to absurd results in some cases. I fear that, like the human utility function is complicated, the aggregate function we should use is complicated and will contain sum, average, median and gini in a form or in another.

Replies from: Viliam_Bur, Ghatanathoah
comment by Viliam_Bur · 2012-12-05T11:46:35.260Z · LW(p) · GW(p)

In my opinion the toughest problem is to compare one person's utility with another person's utility. Doubly so if the "person" does not have to be homo sapiens (so we can't use neurons or hormones).

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-12-05T18:51:27.701Z · LW(p) · GW(p)

I don't deny that it's hard. But I think we do pretty well in our day-to-day lives by using our minds' capacity for sympathy. I think I can safely assume that if I kicked a guy in the nuts and stole his money, his suffering from the assault and theft would outweigh the utility I got from the money (assuming I spent the money on frivolous things). I can tell this by simulating how I would react if such an event happened to me, and assuming the other guy's mind is fairly similar to mine.

Now, I could be wrong. Maybe the guy is a masochist with a fetish for being kicked in the nuts, and he was planning on spending the money he was carrying paying someone to do it for him. But perfect knowledge is impossible, so that's a problem with basically any endeavor. We don't give up on science because of all the problems obtaining knowledge, we shouldn't give up on morality either. You just do the best you can.

Obviously scaling sympathy to large populations is really hard. And attempting to project it onto alien minds is even harder. But I don't think it's impossible. The first idea that comes to mind would be to ask the alien mind what it wants in life, ranked in order of how much it wants them, and then map those onto a similar list of what I want.

Replies from: prase
comment by prase · 2012-12-06T22:35:20.946Z · LW(p) · GW(p)

I find it difficult to sympathise with people who exhibit traits characteristic of utility monsters, and those people are usually still quite far away from the thought-experiment ideal of a utility monster. I am sure that if the monster told me what it wants, I'd do my best to prevent it from happening.

comment by Ghatanathoah · 2012-12-05T18:53:27.903Z · LW(p) · GW(p)

I don't agree with that. Utilitarian ethics don't specify how the utility function is calculated, especially how you make the aggregate function from all the individual utility.

I was referring to total and average utilitarianism, the two most common kinds.

I fear that, just as the human utility function is complicated, the aggregate function we should use is complicated and will contain sum, average, median and gini in one form or another.

I agree completely. I think we'll probably have to use multiple methods of aggregation and then combine the score in some way.

comment by daenerys · 2012-12-05T03:22:43.874Z · LW(p) · GW(p)

I really enjoyed this post and hope to see more like it! Especially as it touches on a subject I've been thinking about recently:

Epistemic status: Just introspections that I'm thinking about. Not necessarily accurate.

It seems to me like some type of "equality" is something of a terminal value for me. I tend to have very strong negative emotional reactions to inequality. Intuitively, I prefer a world where everyone lives at a rather low level to one where some people live at a low level and some live at a high level. Intuitively, I prefer dust specks to torture.

But I'm not sure how much of this is that I actually value equality itself, versus whether it's that I think inequality causes disutility and unhappiness (in that people's happiness and contentment are very dependent on their place on the totem pole). Perhaps I feel like a big spread on the totem pole causes unhappiness, and that my actual terminal value is happiness.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-12-05T19:32:36.283Z · LW(p) · GW(p)

It seems to me like some type of "equality" is something of a terminal value for me. I tend to have very strong negative emotional reactions to inequality. Intuitively, I prefer a world where everyone lives at a rather low level to one where some people live at a low level and some live at a high level. Intuitively, I prefer dust specks to torture.

I think that the type of equality I value is closer to prioritarianism (caring most about the least fortunate) than literal equality. That is, I'd prefer a world where the least well off have 100 utility and the better off have 1000 utility to a world where everyone has 10 utility.

comment by handoflixue · 2012-12-05T01:25:13.414Z · LW(p) · GW(p)

Or one could be snarky and succinct and just point out that morality doesn't need to handle utility monsters any more than biology needs to handle unicorns...

(Yes, this is a serious objection, and I'm fine with Crocker's Rules since I already admitted I'm being snarky :))

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-12-05T01:33:20.829Z · LW(p) · GW(p)

Or one could be snarky and succinct and just point out that morality doesn't need to handle utility monsters any more than biology needs to handle unicorns...

It's possible we might find some way to create utility monsters in the future though. For instance, a superorganism of brain emulations might count as a utility monster (it depends on whether you think two identical ems are one person or two) because it is able to experience so much more at a time than a normal human, and therefore can use resources much more efficiently.

Replies from: handoflixue
comment by handoflixue · 2012-12-07T03:15:17.245Z · LW(p) · GW(p)

It's possible that one day we'll decide to genetically engineer unicorns. I'd suggest that the challenges of doing so are something we're simply not prepared to handle yet, because we don't have enough of a foundation to actually do it.

First, Possibility Space, in pretty much any domain, is by default mind-bogglingly huge. There are a lot of possible "utility monsters", and you probably don't want to over-generalize from one example. Humans are notoriously horrible at handling huge possibility spaces, so it's probably a good idea to focus on areas where we have a narrow search space.

Second, since we can't currently build or observe a utility monster, a lot of our assumptions about them are liable to be wrong - just like 10th century assumptions about space travel. Humans are notoriously horrible at "armchair philosophy", so it seems wise not to engage in it as a general rule.

Third, you seem to be looking at a problem that requires a revolutionary rather than evolutionary insight - it's not liable to be something you can easily brute force. Look at the path that led us from Newton to Einstein to Heisenberg. Unless you already have a revolutionary insight, it seems best to focus on more evolutionary, fundamental advances.

Fourth, solving this problem, currently, gets you a very pretty mathematical equation that won't be useful until we meet or make an actual utility monster. Solving more "real world" approaches seems more likely to yield actual, usable insights.

(Note all these objections can be generalized in to a useful heuristic. But, also note that quite a lot of science wouldn't have occurred if EVERYONE had followed these rules. There's a time and a place for an exception, but when you have this much against you, it's worth considering whether you really think it's worth your time)

comment by Alejandro1 · 2012-12-05T00:27:38.754Z · LW(p) · GW(p)

Interesting post, I don't have much to contribute except proposing that any discussion of Utility Monsters should use for them the name "Felix".

comment by Jayson_Virissimo · 2012-12-10T10:54:28.791Z · LW(p) · GW(p)

Declaring [equality] to be a final value makes it invulnerable to arguments except those appealing to other, rival, “noncompossible” final values. Thus, it becomes a matter of (if we may put it so) “moral tastes.” It ceases to be a matter of agreement, unless it be the agreement to differ, to non est disputandum. This is a feasible stratagem, and a very safe one. But it fails in underpinning principles of distributive justice; for stating that equality is an ultimate value is one thing, to establish that it is just is another. The two are neither coextensive nor even commensurate.

-Anthony de Jasay, Justice and Its Surroundings

comment by [deleted] · 2012-12-16T10:46:20.562Z · LW(p) · GW(p)

There was some related discussion of equality and the desirability of Hansonian Malthusian Emulated Minds scenarios in the Utopia in Manna thread.

comment by fubarobfusco · 2012-12-06T00:06:46.455Z · LW(p) · GW(p)

It seems to me that the basis for equality is not first-order utility, but rather symmetry amongst utility changes.

If the lives of Person A and Person B are equal in value, then the world in which A kills B and loots the corpse is no better or worse than the world in which B kills A and loots the corpse.

A timeless argument: "If I decide to kill the other guy, then — since our situations are symmetric — he will decide to kill me, too; and neither of us will get to loot the corpse."

A social argument: "Some third party C doesn't care if I kill him or he kills me; but certainly prefers that neither of us kill the other and then turn on him with the combined resources of both. So C is motivated to deter either of us from killing and looting the other."

comment by AlexMennen · 2012-12-05T05:14:36.014Z · LW(p) · GW(p)

The only reason I see not to give a utility monster all the resources is that, if you are not the utility monster, you are unlikely to be moved by ethical arguments for doing so, given your incentive not to. When only one person wants everyone to follow an ethical system, it won't work.

Of course, this implies that if I have complete control over how everyone else's resources are distributed (and no one else has retaliatory control over my resources), that I should give them all to the utility monster. Many people find this counterintuitive, but then again, human intuition is not capable of grokking the concept of a utility monster.

Therefore, two lives barely worth living must generate more utility than a single excellent life.

No. There exists some N such that N lives barely worth living must generate more utility than a single excellent life (for some particular values of "barely worth living" and "excellent"). N need not be 2. (And this assumes that utility is real-valued, as opposed to being valued in some non-Archimedean ordered field, although that does seem like a reasonable assumption.)
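
As a toy numerical reading of this point (the specific utilon values below are my own illustrative assumptions, not anything in the comment):

```python
# If an "excellent" life is worth 100 utilons and a life "barely worth living"
# is worth 1 utilon, total utilitarianism needs N >= 101 such lives to beat
# the single excellent one; nothing in the argument forces N to be 2.
excellent_life = 100.0
barely_worth_living = 1.0
N = int(excellent_life // barely_worth_living) + 1
print(N)  # 101
```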

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-12-05T23:03:33.768Z · LW(p) · GW(p)

No. There exists some N such that N lives barely worth living must generate more utility than a single excellent life (for some particular values of "barely worth living" and "excellent"). N need not be 2.

I agree that N need not equal 2. But I think a lot of people seem to think so. Why? Because whenever the Mere Addition Paradox is brought up no one suggests that the inhabitants of A ought to kill the extra people added in A+, take their stuff, and use it to enrich their own lives. If the only reason something is wrong is that it decreases utility, then the amount of utility the extra people are generating with their share of the resources must be larger than the amount the people in A would be capable of generating if they took them, or else killing them wouldn't be wrong.

Of course, I would argue that it is possible the real reason killing the extra people seems counterintuitive is because of the inequality it would create, not the disutility it would create. Therefore reasoning from the counterintuitiveness of killing the extra people that they must generate a certain amount of utility may be fallacious.

Of course, it may just be that I am misunderstanding the MAP. If there is some other reason why everyone agrees it's wrong to kill the people in A+ please let me know. I am getting worried that I'm missing something and would really like to be set straight.

Replies from: AlexMennen
comment by AlexMennen · 2012-12-06T01:40:15.018Z · LW(p) · GW(p)

I agree that N need not equal 2. But I think a lot of people seem to think so. Why? Because whenever the Mere Addition Paradox is brought up no one suggests that the inhabitants of A ought to kill the extra people added in A+, take their stuff, and use it to enrich their own lives.

That does not follow. For a total utilitarian, there should exist values of "barely worth living" and "excellent" such that N>2, but it is not true that the people with excellent lives (henceforth: "rich people") killing the people with lives barely worth living (henceforth: "poor people") and taking their resources would increase utility (the poor people's resources might provide negligible marginal utility for the rich people). Thus, the proposition that the rich people killing the poor people and taking their resources would decrease utility does not prove that the lives of 2 poor people together have higher utility than the lives of 1 rich person.

If there is some other reason why everyone agrees it's wrong to kill the people in A+ please let me know.

I'm having trouble following your intuition that the rich people killing the poor people to take their stuff would be likely to increase utility in naive utilitarian reasoning. This is the best argument that I could think of: "It presumably takes some positive amount of resources even to get someone's life to the point where it is just as good as if the person didn't exist, and that positive amount of resources would do a certain positive amount of good in the hands of the rich people. Thus, there should exist some quality of life that is worth living if the resources would otherwise go to waste, but not if the resources would otherwise go to someone else." Is that roughly what you were thinking? If so, Eliezer explains why that does not imply you should kill them and take their resources in the fourth paragraph of The Lifespan Dilemma.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-12-07T00:38:45.966Z · LW(p) · GW(p)

the poor people's resources might provide negligible marginal utility for the rich people

That would likely be true in the first step with the rich people of "A." But after the first step is complete, the idea is to repeat it over and over, with the inhabitants getting progressively more impoverished, until one gets to population "Z" where everyone's life is barely worth living. Is there no step where the newly added poor people's resources might provide greater utility for the previous inhabitants? For instance, could the slightly less poor people in existing population X gain utility by killing the newly added poor people in "X+"?

Is that roughly what you were thinking?

Yes.

If so, Eliezer explains why that does not imply you should kill them and take their resources in the fourth paragraph of The Lifespan Dilemma.

That is true and I agree with his reasoning. However, Eliezer is not a naive utilitarian; he seems to believe in complex, multifaceted values of the type that I am advocating. My claim is that a naive utilitarian might hold such a belief.

Replies from: AlexMennen
comment by AlexMennen · 2012-12-07T01:54:32.565Z · LW(p) · GW(p)

That would likely be true in the first step with the rich people of "A." But after the first step is complete, the idea is to repeat it over and over, with the inhabitants getting progressively more impoverished, until one gets to population "Z" where everyone's life is barely worth living. Is there no step where the newly added poor people's resources might provide greater utility for the previous inhabitants? For instance, could the slightly less poor people in existing population X gain utility by killing the newly added poor people in "X+"?

I said that there are likely to exist some values for "barely worth living" and "excellent" such that N>2 but it decreases utility for the rich people to kill the poor and take their resources. Pointing out that this is likely not to be true for all values for "barely worth living" and "excellent" such that N>2 does not refute my proof. I don't get where this N=2 thing came from. (lol, if this thread continues too much longer, we'll have to explain to the FBI why our statements that appeared to be calling for the murder of poor people were taken completely out of context.)

My claim is that a naive utilitarian might hold such a belief.

Okay, a naive utilitarian who doesn't see a difference between a person worth creating and a person worth not destroying would probably think doing that would have higher utility than doing nothing, and might think that it is better than distributing the resources evenly in certain situations. Where were we going with this?

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-12-07T20:25:13.110Z · LW(p) · GW(p)

I don't get where this N=2 thing came from.

It occurred to me when I was going through the MAP and had the thought "wait, why are we assuming that adding the new people and sharing with them always generates more utility, why are we assuming the amount of utility the people in A lose by sharing with A+ is always exceeded by the amount the people in A+ gain?" Then I realized that it was because if we ever assume otherwise, then killing the new people would become acceptable, which is obviously wrong. Since then I've considered it an implicit assumption of the MAP.

Where were we going with this?

I was trying to say that a more complex, multifaceted theory of ethics, such as the one I propose, is necessary to avoid various frightful implications of more simplified ethics.

Replies from: AlexMennen
comment by AlexMennen · 2012-12-07T21:17:30.000Z · LW(p) · GW(p)

wait, why are we assuming that adding the new people and sharing with them always generates more utility, why are we assuming the amount of utility the people in A lose by sharing with A+ is always exceeded by the amount the people in A+ gain?

Right, going from A+ to B might require increasing the amount of resources available if it has to avoid decreasing total utility, and if it does, then you can't derive the repugnant conclusion as an actual policy recommendation. Although diminishing marginal returns suggests that going from A+ to B usually will not require adding resources, but going from A to A+ will. [Edit: I was about to add a link to a post explaining this in more detail, but then I realized that you wrote it, so I guess you understand that]

Edit2: And you still haven't answered my question. Why N=2?

I was trying to say that a more complex, multifaceted theory of ethics, such as the one I propose, is necessary to avoid various frightful implications of more simplified ethics.

Forget "a naive utilitarian who doesn't ... might ...". If there are a bunch of people whose lives are so terrible that it would almost be better for them to kill them out of mercy, but not quite, and keeping them alive takes a lot of resources that could be very useful to others, I would endorse killing them, and I find that fairly intuitive. Do you disagree?

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-12-08T04:46:07.121Z · LW(p) · GW(p)

Edit2: And you still haven't answered my question. Why N=2?

I thought that N would have to equal 2 in order for the math to work out when claiming that going from A+ to B would always increase utility. It seems like otherwise you'd reach a point where it would lower utility to take wealth from A and give it to A+. But you've convinced me that my math might be off.

I think that I might have made the N=2 conclusion before I reached the "adding resources is necessary" conclusion you alluded to earlier, and that it persisted as a cached thought even though my newer ideas made it obsolete.

If there are a bunch of people whose lives are so terrible that it would almost be better to kill them out of mercy, but not quite, and keeping them alive takes a lot of resources that could be very useful to others, I would endorse killing them, and I find that fairly intuitive.

I suppose if you put it that way. I think for me it would depend a lot on how wealthy the rest of society is, perhaps because I have prioritarian sympathies. But I can't say in principle that there aren't instances where it would be acceptable.

comment by shminux · 2012-12-04T23:17:12.866Z · LW(p) · GW(p)

It would be completely justified in killing other people and taking their things for itself, or enslaving them for its own benefit.

I thought the standard solution is to disregard the components of the Utility Monster's utility that are harmful to others, directly or indirectly. Even unbounded joy you may gain from torturing me does not require me to submit and suffer. See also the comments to my old post on Jews and Nazis.

Replies from: jimrandomh, Ghatanathoah, CarlShulman
comment by jimrandomh · 2012-12-05T00:06:50.419Z · LW(p) · GW(p)

That solution doesn't work. If consuming resources is counted as "harmful to others", then you end up saying that it should starve (with no good distinction with which to say that others shouldn't). If consuming resources doesn't count as harmful to others, then you end up giving it the whole universe. You want to give it some, but not all, of the resources. If you try to use references to property distinctions inside the utility function to do that, you've disqualified your utility function from the role of distinguishing good and bad legal and economic systems, and epistemology explodes.

Replies from: shminux, JoshuaZ
comment by shminux · 2012-12-05T01:05:48.324Z · LW(p) · GW(p)

You want to give it some, but not all, of the resources.

Right, one has to arbitrate somehow between the harmful components of various utility monsters (UM) (which most people are, under the approximation of limited resources). But you should not need to kill or torture people just because the UM enjoys it a lot.

Now, how to optimize harmful preferences? If there are enough resources to saturate every non-UM utility, then there is no problem. If there aren't enough, the linear programming approach would reduce every non-UM to "life barely worth celebrating" and give the rest to the hungriest UM. Whether this is a good solution, I do not know.
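
A toy illustration of that last point (the allocation routine, the agents' "rates", and the numbers below are my own constructions, not shminux's): with linear utilities and only a subsistence floor as a constraint, the optimum of the linear program sits at a corner, so everything above the floor goes to the most efficient agent.

```python
# Maximize sum(rates[i] * alloc[i]) subject to sum(alloc) == total_resources
# and alloc[i] >= floor: with a linear objective, the solution gives everyone
# the floor and hands the remainder to the agent with the highest rate.
def allocate(total_resources, rates, floor):
    n = len(rates)
    alloc = [floor] * n
    leftover = total_resources - floor * n  # assumes enough for the floor
    alloc[max(range(n), key=lambda i: rates[i])] += leftover
    return alloc

# Three ordinary people (rate 1) and one monster (rate 1000), 100 resource
# units, subsistence floor of 1 unit each:
print(allocate(100, [1, 1, 1, 1000], floor=1))  # [1, 1, 1, 97]
```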

If you try to use references to property distinctions inside the utility function to do that, you've disqualified your utility function from the role of distinguishing good and bad legal and economic systems, and epistemology explodes.

I did not follow that, feel free to give an example.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-12-05T22:11:58.891Z · LW(p) · GW(p)

But you should not need to kill or torture people just because the UM enjoys it a lot.

Again, I agree, but I did not mention that in the OP because many people would not have read our previous discussion, and might have been confused if I had suddenly gone off on a tangent about how "malicious preferences shouldn't count" in an essay on a totally different subject.

The relevant question, then, is how we should split resources between the monster and other people when attempting to satisfy preferences that do not involve harming others as an end in itself.

If there are enough resources to saturate every non-UM utility, then there is no problem.

I know I was the one who started using the word "saturate" in the first place, but after some thought "satisfice" is a much better approximation of what I meant.

I did not follow that, feel free to give an example.

I think he is arguing that someone might try to get out of giving the monster resources by claiming that the other people in the world own their share of resources, and that it is bad to take private property. The problem with this is that since property is a legal construct, one can simply argue that property rights should be abolished for the Monster's sake. If one tries to claim that property rights somehow transcend other utility concerns, that means one's utility function does not make any distinction between what kinds of property rights are good and which are bad.

I don't know why this makes epistemology explode either.

Also, I don't think you ever made such an argument in the first place, he was probably just mentioning it for completeness' sake.

comment by JoshuaZ · 2012-12-05T00:29:38.651Z · LW(p) · GW(p)

epistemology explodes.

How does this have anything to do with epistemology?

comment by Ghatanathoah · 2012-12-04T23:48:20.439Z · LW(p) · GW(p)

That would definitely work if the utility monster wanted to kill or enslave people just for kicks. I still stand by the idea we hammered out in the discussion there: malicious preferences shouldn't count. The real problem arises when considering how to split the various resources in the society between the monster and the other inhabitants so they can fulfill their "neutral" preferences (I am using the terminology from that discussion).

The Utility Monster problem suggests the idea that one individual might be so good at using resources to satisfy its "neutral" preferences that it would be better to give all the resources to it instead of sharing them among everyone.

Let's suppose the monster enjoys various activities that are "neutral": they are not directly harmful to other people, but they do use up resources. How many resources should be given to the monster and how many should be given to other people? Should we give the monster everything and let everyone else starve to death, because it will get so much more enjoyment out of them? Should everyone else be given just enough resources to live a life barely worth living, and then give everything else to the monster? Should everyone else be given resources until they reach their satiation point and then the monster gets the rest?

It seems wrong to give everything to the monster, even though that would result in the most satisfied neutral preferences. It also seems wrong to give people just enough for lives barely worth living. It seems best to me to share so that everyone can live great lives, even if giving everything to the monster would generate the most utility.

Replies from: shminux
comment by shminux · 2012-12-04T23:58:05.644Z · LW(p) · GW(p)

Let's suppose the monster enjoys various activities that are "neutral": they are not directly harmful to other people, but they do use up resources.

What I said is

disregard the components of the Utility Monster's utility that are harmful to others, directly or indirectly.

Limited resource use should be counted as indirect harm, surely. Now, the problem is how to arbitrate between multiple Resource Monsters.

Should everyone else be given resources until they reach their satiation point and then the monster gets the rest?

I do not see any immediate problem with this approach.

It also seems wrong to give people just enough for lives barely worth living.

You mean "barely worth celebrating", surely?

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-12-05T00:31:01.011Z · LW(p) · GW(p)

Limited resource use should be counted as indirect harm, surely.

If there is a finite amount of resources then you harm other people just by existing, because by using resources to live you are reducing the amount available for other people to use. By "limited" do you mean "resource use above a certain threshold?" What would that threshold be? Would it change depending on how many resources a given society has?

Are you suggesting that everyone is entitled to a certain level of life quality, and that any desires that would reduce that level of life quality if fulfilled should count as "malicious?" That is a line of thought that hadn't fully occurred to me. It seems to have some similarities with prioritarianism.

You mean "barely worth celebrating", surely?

Yes. I used the other term in the OP because I thought not everyone who read it would have read Eliezer's essay, and then I got stuck in the habit.

EDIT: When I said "you harm other people just by existing" that technically isn't true in the present because we live in a non-Malthusian world with a growing economy. Adding more people actually increases the amount of resources available to everyone because there are more people to do work. Assume, for the sake of the argument, that in this thought experiment the amount of resources available to a society is fixed.

comment by CarlShulman · 2012-12-04T23:47:53.855Z · LW(p) · GW(p)

I don't think so, for any strong version of 'standard.' There's a simple modification where instead of the monster eating everyone else, the monster eats all the food that would have sustained everyone else, on triage grounds.