Skirting the mere addition paradox

post by Stuart_Armstrong · 2013-11-18T17:50:48.269Z · LW · GW · Legacy · 25 comments


Consider the following facts:

  1. For any population of people of happiness h, you can add more people of happiness less than h, and still improve things.
  2. For any population of people, you can spread people's happiness in a more egalitarian way, while keeping the same average happiness, and this makes things no worse.

This sounds a lot like the mere addition paradox.

This seems to lead directly to the repugnant conclusion - that there is a huge population of people whose lives are barely worth living, but that this outcome is better because of the large number of them (in practice this conclusion may have a little less bite than feared, at least for non-total utilitarians).

But that conclusion doesn't follow at all! Consider the following aggregation formula, where au is the average utility of the population and n is the total number of people in the population:

au · (1 - (1/2)^n)

This obeys the two properties above, and yet does not lead to a repugnant conclusion. How so? Well, property 2 is immediate - since only the average utility appears, reallocating utility in a more egalitarian way does not decrease the aggregation. For property 1, define f(n) = 1 - (1/2)^n. This function f is strictly increasing, so adding more members to the population increases f(n), and hence (at a fixed average utility) the product au · f(n) - this leaves room to diminish the average utility slightly (by giving the people we've added slightly lower utility, say) and still end up with a higher aggregation.
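As a sanity check, here is a minimal numerical sketch of both properties (the population is represented simply as a list of utilities; the function name and numbers are illustrative, not from the post):

```python
def aggregate(utilities):
    """The post's aggregation: average utility times (1 - (1/2)^n)."""
    n = len(utilities)
    au = sum(utilities) / n
    return au * (1 - 0.5 ** n)

# Property 1: adding someone slightly below the current happiness level
# can still raise the aggregation, because f(n) strictly increases.
base = [10.0, 10.0, 10.0]
extended = base + [9.9]            # new person is below h = 10
assert aggregate(extended) > aggregate(base)

# Property 2: equalising happiness at the same average makes things no worse,
# because only the average appears in the formula.
unequal = [2.0, 18.0, 10.0]
equal = [10.0, 10.0, 10.0]         # same average, spread evenly
assert aggregate(equal) >= aggregate(unequal)
```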

How do we know that there is no repugnant conclusion? Well, f(n) is bounded above by 1. So let au and n be the average utility and size of a given population, and au' and n' those of a population better than this one. Then au · f(n) < au' · f(n') < au' (the second inequality holds since f(n') < 1 and au' is positive). So the average utility of any better population can never sink below au · f(n): the average utility is bounded below.
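A rough illustration of the bound (the starting numbers are arbitrary): however large a "better" population becomes, its average utility must stay above the au · f(n) of the population we started from.

```python
def f(n):
    return 1 - 0.5 ** n

au, n = 10.0, 3          # original population: average utility 10, size 3
floor = au * f(n)        # = 8.75; no better population's average can fall below this

for n_better in (5, 10, 20, 50):
    # smallest average utility a population of size n_better can have
    # while still beating the original aggregation
    min_average = floor / f(n_better)
    print(n_better, min_average)   # stays above 8.75, approaching it as the population grows
```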

So some weaker versions of the mere addition argument do not imply the repugnant conclusion.

25 comments


comment by ThrustVectoring · 2013-11-18T18:54:51.714Z · LW(p) · GW(p)

I believe that the proposed function does not follow the rule that adding positive-value members is positive value. You can double the population and get any average utility that is greater than half the original average utility, while barely increasing the other part of the equation by doubling n (it starts at, say, 0.98 and can be at most 1).
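For concreteness, a made-up example of this failure (the numbers are purely illustrative):

```python
def aggregate(utilities):
    n = len(utilities)
    return (sum(utilities) / n) * (1 - 0.5 ** n)

happy = [100.0] * 6            # aggregation is about 98.4
mixed = happy + [1.0] * 6      # six more lives, each clearly of positive value
print(aggregate(happy), aggregate(mixed))   # ~98.44 vs ~50.49: the aggregation drops
```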

The correct answer is to factor out "more good can be done with more resources" from "more good can be done by using resources better". With this factorization, arguments for the repugnant conclusion only show that you want more resources, not that you're better off using resources by spamming minimally valuable lives.

Replies from: Vaniver
comment by Vaniver · 2013-11-18T23:16:04.015Z · LW(p) · GW(p)

I believe that the proposed function does not follow the rule that adding positive value members is positive value.

Right - the point is that the original repugnant conclusion is avoided if you replace "adding any number of people with positive happiness leads to a superior aggregation" with "there is some number of people with below-average utility who can be added which leads to a superior aggregation."

Replies from: ThrustVectoring
comment by ThrustVectoring · 2013-11-19T00:57:38.833Z · LW(p) · GW(p)

I don't think it's necessary to butcher your utility function calculations that way. Adding someone with a positive-value life is a good thing (else it would not be positive value).

comment by lmm · 2013-11-19T09:11:27.431Z · LW(p) · GW(p)

You're making a lot of posts about these wacky utility functions that avoid the repugnant conclusion. What's it all in aid of? The repugnant conclusion is not meant to be a general counterargument to utilitarianism, it's an argument against total utilitarianism specifically. There are many, many utility functions that avoid the repugnant conclusion. Please explain why this particular one is interesting.

Replies from: Stuart_Armstrong, V_V
comment by Stuart_Armstrong · 2013-11-20T10:37:30.588Z · LW(p) · GW(p)

What's it all in aid of?

It's just an investigation around these issues - no real conclusion yet.

comment by V_V · 2013-11-19T20:24:46.950Z · LW(p) · GW(p)

Indeed.

It is also worth noting that average utilitarianism also has its share of problems: killing off anyone with below-maximum utility is an improvement.
Stuart Armstrong's proposed aggregation function has essentially the same problem: while it disincentivizes reducing the number of people, it doesn't disincentivize it much at any significant population size.

BTW: all flavors of utilitarianism suffer from the fact that there is no known satisfactory way of comparing the utility of different people. Without interpersonal utility comparison, the point is moot.

Replies from: TheOtherDave, Ghatanathoah, ArisKatsaris
comment by TheOtherDave · 2013-11-19T22:10:55.537Z · LW(p) · GW(p)

average utilitarianism [...] killing off anyone with below-maximum utility is an improvement.

This is true insofar as it can be performed without creating significant disutility for their above-average-utility neighbors, and not otherwise. A community that would suffer greatly due to the deaths of half the people around them would not necessarily have its average utility increased by that operation.

Without interpersonal utility comparison, the point is moot.

True. Actually, it's worse than that... we don't even have a way to compare an individual's utility over time with any level of precision, though we talk casually as though we did. (If we had such an intertemporal utility comparison, then we could for example declare that the highest-utility state each individual has achieved in their lifetime is 1 unit of utility, and the lowest-utility state is 0 units, and interpolate linearly, recalibrating whenever an individual exceeds their previous maximum, and that would be one way of comparing interpersonal utilities at any given time. Of course, that might not be acceptable because it fails to account for the possibility that some individuals are just worth more than others, or for various other reasons, but it would at least be a place to start.)

Without that, utilitarianism is at best no more than a qualitative way to address ethical questions.

Replies from: V_V
comment by V_V · 2013-11-20T16:36:43.794Z · LW(p) · GW(p)

A community that would suffer greatly due to the deaths of half the people around them would not necessarily have its average utility increased by that operation.

You don't have to kill off half of the population to increase average utility.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-11-20T16:56:42.404Z · LW(p) · GW(p)

Yup, that's true.

And I initially misunderstood your "killing off anyone with below-maximum utility" to mean "for all X with below-maximum utility, kill X" rather than "select some X with below-maximum utility and kill X," sorry.

That said, if we're talking about individual or small-group cases, the argument against average-utilitarianism no longer feels quite so intuitively cut-and-dried.

That is: if selecting one X experiencing low utility and killing X does not cause significant utility-decrease (e.g. suffering, grief, anxiety, having-one's-memory-edited, etc.) among the survivors, I suspect quite a few people would more-or-less endorse X's death if it leaves a larger share of available resources for them to enjoy. So arguing "this can't possibly be a correct description of human morality because humans in fact reject it" is not quite so easy as in the kill-off-everyone-but-the-being-experiencing-highest-utility scenario (which humans reliably reject).

That being said, even in the one-corpse case, we can certainly counter that the people who endorse that are simply endorsing unethical/immoral behavior.

Replies from: V_V
comment by V_V · 2013-11-20T21:14:04.785Z · LW(p) · GW(p)

Well, consider some unlucky fellow without any significant family ties, without friends and without a job, living off government welfare. His death wouldn't generate many negative externalities; in fact, the externalities would be mostly positive, since he would stop receiving welfare.
Assume that you can compare personal utilities and it turns out that this guy has below-average utility. Would it be moral to kill him? Average utilitarianism says yes.

I suppose that the moral intuitions of most, though not all, people would be against killing him, at least not in an obvious way (some might be in favour of taking his welfare away and letting him starve to death, though, but I doubt that those kinds of people use a utilitarian type of moral reasoning).

Replies from: Stuart_Armstrong, TheOtherDave
comment by Stuart_Armstrong · 2013-11-21T11:18:03.469Z · LW(p) · GW(p)

Would it be moral to kill him? Average utilitarianism says yes.

So would total utilitarianism, if his resources were reallocated to other people of more efficient happiness levels (or to new individuals brought into the world).

Replies from: V_V
comment by V_V · 2013-11-21T19:27:01.234Z · LW(p) · GW(p)

That's why I'm not a fan of utilitarianism in its various forms.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-11-22T10:11:19.712Z · LW(p) · GW(p)

You'll get no argument from me there :-)

comment by TheOtherDave · 2013-11-20T21:46:47.293Z · LW(p) · GW(p)

Assume that you can compare personal utilities and it turns out that this guy has below-average utility. [..] I suppose that the moral intuitions of most, though not all, people would be against killing him,

I understand why you say this, but I'm not quite sure I agree.

I mean, I certainly agree that most people, if asked that question in those terms, would say "of course not! killing this poor lonely friendless unemployed wretch would be wrong."

But I'm less sure that most people, if placed in a situation where they express their revealed preferences without framing them explicitly, would make decisions that were consistent with that answer.

And if I actually worked out what "below-average utility" means in terms that make intuitive sense to people... e.g., how much is this fellow actually suffering on a daily basis?... I'm genuinely unsure what most people would say, even if asked explicitly. Especially if our mechanism for comparing personal utilities, unlike the one I proposed above, does not arbitrarily conclude that each individual's lifetime maximum is equivalent for purposes of comparison, as I expect most people's intuitions in fact don't.

That said, I certainly agree with you that most of the people who are in favor of letting the hungry starve, etc., are not using any sort of aggregated utilitarian moral reasoning.

comment by Ghatanathoah · 2014-01-20T08:25:40.059Z · LW(p) · GW(p)

It is also worth noting that average utilitarianism has also its share of problems: killing off anyone with below-maximum utility is an improvement.

No it isn't. This can be demonstrated fairly simply. Imagine a population consisting of 100 people. 99 of those people have great lives, 1 of those people has a mediocre one.

At the time you are considering killing the person with the mediocre life, he has accumulated 25 utility. If you let him live he will accumulate 5 more utility. The 99 people with great lives will each accumulate 100 utility over the course of their lifetimes.

If you kill the guy now, average utility will be 99.25. If you let him live and accumulate 5 more utility, average utility will be 99.3. A small, but definite, improvement.

I think the mistake you're making is that after you kill the person you divide by 99 instead of 100. But that's absurd, why would someone stop counting as part of the average just because they're dead? Once someone is added to the population they count as part of it forever.

BTW: all flavors of utilitarianism suffer from the fact that there is no known satisfactory way of comparing the utility of different people. Without interpersonal utility comparison, the point is moot.

It's true that some sort of normalization assumption is needed to compare VNM utility between agents. But that doesn't defeat utilitarianism, it just shows that you need to include a meta-moral obligation to make such an assumption (and to make sure that assumption is consistent with common human moral intuitions about how such assumptions should be made).

As it happens, I do interpersonal utility comparisons all the time in my day-to-day life using the mental capacity commonly referred to as "empathy." The normalizing assumption I seem to be making is that other people's minds are similar to mine; I match their utility to mine on a one-to-one basis, tweaking as necessary if I observe that they value different things than I do.

comment by ArisKatsaris · 2013-11-19T23:08:17.585Z · LW(p) · GW(p)

It is also worth noting that average utilitarianism has also its share of problems: killing off anyone with below-maximum utility is an improvement.

If one averages across all time, the strong preference of people to not be killed would suffice to more than cancel the benefits of their non-participation in the future averages.

Replies from: V_V, Stuart_Armstrong
comment by V_V · 2013-11-20T16:36:07.053Z · LW(p) · GW(p)

I don't know what you mean by "average across time". You typically discount across time.
Anyway, utilitarianism is a form of consequentialism in that it assigns moral preferences to world states rather than transitions. Being killed is a transition in any description at any meaningful level of abstraction, hence you can't assign a utility to it. If you do, then you have an essentially deontological ethics, not utilitarianism.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2013-11-20T19:34:25.173Z · LW(p) · GW(p)

I don't know what you mean by "average across time"

I mean calculating the average utility of the whole timeline, not of particular discrete moments in time.

An example. Let's say we're in the year 2020 and considering whether it's cool to murder 7 billion people in order to let a person-of-maximum-utility lead an optimal life from 2021 onwards. By utility in this case I mean "satisfaction of preferences" (preference utilitarianism) rather than "happiness".

If we do so, a calculation that treats 2020 and 2021 as separate "worlds" might say: "If 7 billion people are killed, 2021 will have a much higher average utility than 2020, so we should do it in order to transition to the world of 2021."

But I'd calculate it differently: If 7 billion people are killed between 2020 and 2021, the people of 2020 have far less utility because they very strongly prefer to not be killed, and their killings would therefore grossly reduce the satisfaction of their preferences. Therefore the average utility in the timeline as a whole would be vastly reduced by their murders.

Anyway, utilitarianism is a form of consequentialism in that it assigns moral preferences to world states rather than transitions

One just needs to treat 'world-states' 4-dimensionally, as 'timeline-states'...

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-11-21T11:27:04.295Z · LW(p) · GW(p)

If you could genetically modify future humans to make them indifferent to being killed, would you do that, since it would facilitate the mass murder?

comment by Stuart_Armstrong · 2013-11-20T10:46:57.395Z · LW(p) · GW(p)

Rather than counting on other factors (people's preferences) to avoid outcomes we feel are bad, I think it would be better to encode the badness of these outcomes directly.

comment by JGWeissman · 2013-11-18T18:18:37.995Z · LW(p) · GW(p)

For any population of people of happiness h, you can add more people of happiness less than h, and still improve things.

I think that this property, at least the way you are interpreting it, does not fully represent the intuition that leads to the repugnant conclusion. A stronger version would be: for any population of people, you can add more people with positive happiness (keeping the happiness of the already existing people constant), and still improve things.

I don't think your unintuitive aggregation formula would be compatible with that.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-11-20T10:34:00.863Z · LW(p) · GW(p)

I think that this property, at least the way you are interpreting it, does not fully represent the intuition that leads to the repugnant conclusion.

I agree. That's why I didn't present my aggregation formula as a counterexample to the mere addition paradox, but merely being connected to it.

comment by Vaniver · 2013-11-18T23:17:56.086Z · LW(p) · GW(p)

So the average utility can never sink below au(f(n)): the average utility is bounded.

So some weaker versions of the mere addition argument do not imply the repugnant conclusion.

I'm not seeing how this is meaningfully different - it still argues that the average utility should be the lower bound. You've created a lower bound that's perhaps more acceptable than "lives barely worth living," but you could have done that just as easily by saying "it's good to add lives with utility scores more than 9" instead of "it's good to add lives with utility scores more than 0." It's not obvious to me that this lower bound is the one you would want to use - should the initial population size really determine the minimal future happiness?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2013-11-20T10:36:16.459Z · LW(p) · GW(p)

I'm not advocating this aggregation formula - for one, it still allows people to be killed and replaced with happier versions, and counts this as a net good. It's more an investigation of various aggregation properties.

comment by buybuydandavis · 2013-11-19T07:59:21.214Z · LW(p) · GW(p)

For any population of people, you can spread people's happiness in a more egalitarian way, while keeping the same average happiness, and this makes things no worse.

It is worse for the people whose happiness you've spread to others.