The Mere Cable Channel Addition Paradox

post by Ghatanathoah · 2012-07-26T07:20:05.081Z · LW · GW · Legacy · 147 comments

The following is a dialogue intended to illustrate what I think may be a serious logical flaw in some of the conclusions drawn from the famous Mere Addition Paradox.

EDIT:  To make this clearer, the interpretation of the Mere Addition Paradox this post is intended to criticize is the belief that a world consisting of a large population full of lives barely worth living is the optimal world. That is, I am disagreeing with the idea that the best way for a society to use the resources available to it is to create as many lives barely worth living as possible.  Several commenters have argued that another interpretation of the Mere Addition Paradox is that a sufficiently large population with a lower quality of life will always be better than a smaller population with a higher quality of life, even if such a society is far from optimal.  I agree that my argument does not necessarily refute this interpretation, but think the other interpretation is common enough that it is worth arguing against.

EDIT: On the advice of some of the commenters I have added a shorter summary of my argument in non-dialogue form at the end.  Since it is shorter I do not think it summarizes my argument as completely as the dialogue, but feel free to read it instead if pressed for time.

Bob:  Hi, I'm with R&P cable.  We're selling premium cable packages to interested customers.  We have two packages to start out with that we're sure you'll love.  Package A+ offers a larger selection of basic cable channels and costs $50.  Package B offers a larger variety of exotic channels for connoisseurs; it costs $100.  If you buy package A+, however, you'll get a 50% discount on B. 

Alice:  That's very nice, but looking at the channel selection, I just don't think that it will provide me with enough utilons.

Bob: Utilons?  What are those?

Alice: They're the unit I use to measure the utility I get from something.  I'm really good at shopping, so when I spend my money on the things I normally buy, I get about 1.5 utilons for every dollar I spend.  Now, looking at your cable channels, I've calculated that I will get 10 utilons from buying Package A+ and 100 utilons from buying Package B.  The total is 110 utilons for the $100 the two packages would cost with the discount, significantly less than the 150 utilons I'd get from spending that $100 on other things.  It's just not a good deal for me.
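
(A minimal sketch of Alice's arithmetic in Python, using only the numbers she quotes; the $100 bundle price assumes the 50% discount on B.)

RATE = 1.5                             # utilons per dollar of Alice's ordinary spending
bundle_cost = 50 + 50                  # Package A+ ($50) plus Package B at the 50% discount
cable_utilons = 10 + 100               # Alice's estimates for A+ and B
normal_utilons = bundle_cost * RATE    # 150.0 utilons from spending the same $100 normally
print(cable_utilons < normal_utilons)  # True: the cable bundle loses by 40 utilons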

Bob:  You think so?  Well, it so happens that I've met people like you in the past and have managed to convince them.  Let me tell you about something called the "Mere Cable Channel Addition Paradox."

Alice:  Alright, I've got time, make your case.

Bob:  Imagine that the government is going to give you $50.  Sounds like a good thing, right?

Alice:  It depends on where it gets the $50 from.  What if it defunds a program I think is important?

Bob:  Let's say that it would defund a program that you believe is entirely neutral.  The harms the program causes are exactly balanced by the benefits it brings, leaving a net utility of zero.

Alice:  I can't think of any program like that, but I'll pretend one exists for the sake of the argument.  Yes, defunding it and giving me $50 would be a good thing.

Bob:  Okay, now imagine the program's beneficiaries put up a stink, and demand the program be re-instituted.  That would be bad for you, right?

Alice:  Sure.  I'd be out $50 that I could convert into 75 utilons.

Bob:  Now imagine that the CEO of R&P Cable Company sleeps with an important senator and arranges a deal.  You get the $50, but you have to spend it on Package A+.  That would be better than not getting the money at all, right?

Alice: Sure.  10 utilons is better than zero.  But getting to spend the $50 however I wanted would be best of all.

Bob:  That's not an option in this thought experiment.  Now, imagine that after you use the money you received to buy Package A+, you find out that the 50% discount for Package B still applies.  You can get it for $50.  Good deal, right?

Alice:  Again, sure.  I'd get 100 utilons for $50. Normally I'd only get 75 utilons.

Bob:  Well, there you have it.  By a mere addition I have demonstrated that a world where you have bought both Package A+ and Package B is better than one where you have neither.  The only difference between the hypothetical world I imagined and the world we live in is that in one you are spending money on cable channels.  A mere addition.  Yet you have admitted that that world is better than this one.  So what are you waiting for?  Sign up for Package A+ and Package B!

And that's not all.  I can keep adding cable packages to get the same result.  The end result of my logic, which I think you'll agree is impeccable, is that you purchase Package Z, a package where you spend all of your money, apart from what you need for bare subsistence, on cable television packages.
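
(Bob's ratchet, sketched in Python with the dialogue's numbers: each step beats the one before it, yet the endpoint still loses to the option Bob never puts on the menu.)

RATE = 1.5                     # utilons per dollar of ordinary spending
no_gift = 0                    # the government keeps the $50
gift_locked_to_a_plus = 10     # the $50 gift, spendable only on Package A+
plus_discounted_b = 10 + 100   # then $50 of Alice's own money on discounted Package B
assert no_gift < gift_locked_to_a_plus < plus_discounted_b   # each step beats the last

unrestricted = (50 + 50) * RATE         # $50 gift plus $50 of her own, spent normally
print(plus_discounted_b, unrestricted)  # 110 150.0: the "improved" endpoint still loses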

Alice:  That seems like a pretty repugnant conclusion. 

Bob:  It still follows from the logic.  For every world where you are spending your money on whatever you have calculated generates the most utilons, there exists another, better world where you are spending all your money on premium cable channels.

Alice:  I think I found a flaw in your logic.  You didn't perform a "mere addition."  The hypothetical world differs from ours in two ways, not one.  Namely, in this world the government isn't giving me $50.  So your world doesn't just differ from this one in terms of how many cable packages I've bought, it also differs in how much money I have to buy them.

Bob: So can I interest you in a special version of the package?  It takes the form of a legally binding pledge: you pledge that if you ever make an extra $50 in the future, you will use it to buy Package A+.

Alice:  No.  In the scenario you describe the only reason buying Package A+ has any value is that it is impossible to get utility out of that money any other way.  If I just get $50 for some reason it's more efficient for me to spend it normally.

Bob:  Are you sure?  I've convinced a lot of people with my logic.

Alice:  Like who?

Bob:  Well, there were these two customers named Michael Huemer and Robin Hanson who both accepted my conclusion.  They've both mortgaged their homes and started sending as much money to R&P cable as they can.

Alice:  There must be some others who haven't.

Bob:  Well, there was this guy named Derek Parfit who seemed disturbed by my conclusion, but couldn't refute it.  The best he could do is mutter something about how the best things in his life would gradually be lost if he spent all his money on premium cable.  I'm working on him though, I think I'll be able to bring him around eventually.

Alice:  Funny you should mention Derek Parfit.  It so happens that the flaw in your "Mere Cable Channel Addition Paradox" is exactly the same as the flaw in a famous philosophical argument he made, which he called the "Mere Addition Paradox."

Bob:  Really? Do tell?

Alice:  Parfit posited a population he called "A": moderately large, with large amounts of resources, giving them a very high level of utility per person.  Then he added a second population, which was totally isolated from the first.  How they were isolated wasn't important, although Parfit suggested maybe they were on separate continents and couldn't sail across the ocean, or something like that.  These people don't have nearly as many resources per person as the original population, so each person's level of utility is lower (their lack of resources is the only reason they have lower utility).  However, their lives are still just barely worth living.  He called the two populations "A+."

Parfit asked if "A+" was a better world than "A."  He thought it was: since the extra people were totally isolated from the original population, they weren't hurting anyone by existing.  And their lives were worth living.  Follow me so far?

Bob: I guess I can see the point.

Alice: Next Parfit posited a population called "B," which was the same as A+, except that the two populations had merged together.  Maybe they got better at sailing across the ocean; it doesn't really matter how.  The people share their resources.  The result is that everyone in the original population had their utility lowered, while everyone in the second had it raised. 

Parfit asked if population "B" was better than "A+" and argued that it was because it had a greater level of equality and total utility.

Bob: I think I see where this is going.  He's going to keep adding more people, isn't he?

Alice:  Yep.  He kept adding more and more people until he reached population "Z," a vast population where everyone had so few resources that their lives were barely worth living.  This, he claimed, was a paradox: most people would believe that Z is far worse than A, yet he had made a convincing argument that it was better.

Bob:  Are you sure that sharing their resources like that would lower the standard of living for the original population?  Wouldn't there be economies of scale and such that would allow them to provide more utility even with fewer resources per person?

Alice: Please don't fight the hypothetical.  We're assuming that it would for the sake of the argument.

Now, Parfit argued that this argument led to the "Repugnant Conclusion," the idea that the best sort of world is one with a large population with lives barely worth living.  That confers on people a duty to reproduce as often as possible, even if doing so would lower the quality of their and everyone else's lives.

He claimed that the reason his argument showed this was that he had conducted "mere addition."  The populations in his paradox differed in no way other than their size.  By merely adding more people he had made the world "better," even if the level of utility per person plummeted.  He claimed that "For every population, A, with a high average level of utility there exists another, better population, B, with more people and a lower average level of utility."

Do you see the flaw in Parfit's argument? 
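
(Parfit argues qualitatively, but the progression is easy to make concrete.  A Python sketch with illustrative numbers, since Parfit himself gives none:)

A      = [(1_000, 10.0)]                 # each group is (people, utility per person)
A_plus = [(1_000, 10.0), (1_000, 1.1)]   # isolated second continent: lives barely worth living
B      = [(2_000, 5.6)]                  # merged and sharing: equal, slightly higher total

def total(world):   return sum(n * u for n, u in world)
def average(world): return total(world) / sum(n for n, _ in world)

# totals: A = 10,000 < A+ = 11,100 < B = 11,200; averages fall from 10.0 toward 5.6.
# Iterating the add-then-merge step pushes average utility toward "barely worth living" (Z).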

Bob:  No, and that kind of disturbs me.  I have kids, and I agree that creating new people can add utility to the world.  But it seems to me that it's also important to enhance the utility of the people who already exist. 

Alice: That's right.  Normal morality tells us that creating new people with lives worth living and enhancing the utility of people who already exist are both good things to use resources on.  Our common sense tells us that we should spend resources on both those things.  The disturbing thing about the Mere Addition Paradox is that it seems at first glance to indicate that that's not true, that we should only devote resources to creating more people with barely worthwhile lives.  I don't agree with that, of course.

Bob:  Neither do I. It seems to me that having a large number of worthwhile lives and a high average utility are both good things and that we should try to increase them both, not just maximize one.

Alice:  You're right, of course.  But don't say "having a high average utility."  Say "use resources to increase the utility of people who already exist."

Bob:  What's the difference? They're the same thing, aren't they?

Alice:  Not quite.  There are other ways to increase average utility than enhancing the utility of existing people.  You could kill all the depressed people, for instance.  Plus, if there was a world where everyone was tortured 24 hours a day, you could increase average utility by creating some new people who are only tortured 23 hours a day.

Bob:  That's insane!  Who could possibly be that literal-minded?

Alice:  You'd be surprised.  The point is, a better way to phrase it is "use resources to increase the utility of people who already exist," not "increase average utility."  Of course, that still leaves some stuff out, like the fact that it's probably better to increase everyone's utility equally, rather than focus on just one person.  But it doesn't lead to killing depressed people, or creating slightly less tortured people in a Hellworld.
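
(Alice's torture example is just arithmetic about averages.  A quick Python sketch with made-up utility numbers:)

world = [-10.0, -10.0, -10.0]       # everyone tortured 24 hours a day
print(sum(world) / len(world))      # -10.0
world.append(-9.0)                  # create someone tortured "only" 23 hours a day
print(sum(world) / len(world))      # -9.75: the average rose, yet no one is better off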

Bob:  Okay, so what I'm trying to say is that resources should be used to create people, and to improve people's lives.  Also, equality is good.  And none of these things should completely eclipse the others; each is too valuable to be sacrificed to maximizing just one.  So a society that increases all of those values should be considered more efficient at generating value than a society that just maximizes one value.  Now that we're done getting our terminology straight, will you tell me what Parfit's mistake was?

Alice:  Population "A" and population "A+" differ in two ways, not one. Think about it.  Parfit is clear that the extra people in "A+" do not harm the existing people when they are added.  That means they do not use any of the original population's resources.  So how do they manage to live lives worth living?  How are they sustaining themselves?

Bob:  They must have their own resources.  To use Parfit's example of continents separated by an ocean: each continent must have its own set of resources.

Alice:  Exactly.  So "A+" differs from "A" both in the size of its population, and the amount of resources it has access to.  Parfit was not "merely adding" people to the population.  He was also adding resources.

Bob: Aren't you the one who is fighting the hypothetical now?

Alice:  I'm not fighting the hypothetical.  Fighting the hypothetical consists of challenging the likelihood of the thought experiment happening, or trying to take another option than the ones presented.  What I'm doing is challenging the logical coherence of the hypothetical.  One of Parfit's unspoken premises is that you need some resources to live a life worth living, so by adding more worthwhile lives he's also implicitly adding resources.  If he had just added some extra people to population A without giving them their own continent full of extra resources to live on then "A+" would be worse than "A."

Bob:  So the Mere Addition Paradox doesn't confer on us a positive obligation to have as many children as possible, because the amount of resources we have access to doesn't automatically grow with them.  I get that.  But doesn't it imply that as soon as we get some more resources we have a duty to add some more people whose lives are barely worth living?

Alice: No.  Adding lives barely worth living uses the extra resources more efficiently than leaving Parfit's second continent empty for all eternity.  But it's not the most efficient way.  Not if you believe that creating new people and enhancing the utility of existing people are both important values. 

Let's take population "A+" again.  Now imagine that instead of having a population of people with lives barely worth living, the second continent is inhabited by a smaller population with the same very high level of resources and utility per person as the population of the first continent.  Call it "A++."  Would you say "A++" was better than "A+"?

Bob:  Sure, definitely. 

Alice:  How about a world where the two continents exist, but the second one was never inhabited?  The people of the first continent then discover the second one and use its resources to improve their level of utility.

Bob:  I'm less sure about that one, but I think it might be better than "A+."

Alice:  So what Parfit actually proved was: "For every population, A, with a high average level of utility there exists another, better population, B, with more people, access to more resources and a lower average level of utility."

And I can add my own corollary to that:  "For every population, B, there exists another, better population, C, that has the same access to resources as B, but a smaller population and higher average utility."
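
(A toy bookkeeping of these two statements in Python; the numbers are invented, chosen only so that the relationships Alice describes hold:)

# each world is (people, total resources); average utility = resources / people
worlds = {
    "A": (1_000, 10_000),    # fewer people, fewer resources, average utility 10.0
    "B": (20_000, 22_000),   # more people AND more resources, average utility 1.1
    "C": (2_200, 22_000),    # B's resources, far fewer people, average utility 10.0
}
for name, (people, resources) in worlds.items():
    print(name, people, resources, resources / people)
# Parfit's step reaches B only by adding resources along with people; Alice's
# corollary says C, which converts B's resources more efficiently, beats B.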

Bob: Okay, I get it.  But how does this relate to my cable TV sales pitch?

Alice:  Well, my current situation, where I'm spending my money on normal things, is analogous to Parfit's population "A."  High utility, and very efficient conversion of resources into utility, but not as many resources.  We're assuming, of course, that using resources to both create new people and improve the utility of existing people is more morally efficient than doing just one or the other.

The situation where the government gives me $50 to spend on Package A+ is analogous to Parfit's population A+.  I have more resources and more utility.  But the resources aren't being converted as efficiently as they could be. 

The situation where I take the 50% discount and buy Package B is equivalent to Parfit's population B.  It's a better situation than A+, but not the most efficient way to use the money.

The situation where I get the $50 from the government to spend on whatever I want is equivalent to my population C.  A world with more access to resources than A, but more efficient conversion of resources to utility than A+ or B.

Bob: So what would a world where the government kept the money be analogous to?

Alice: A world where Parfit's second continent was never settled and remained uninhabited for all eternity, its resources never used by anyone.

Bob: I get it.  So the Mere Addition Paradox doesn't prove what Parfit thought it did?  We don't have any moral obligation to tile the universe with people whose lives are barely worth living?

Alice:  Nope, we don't.  It's more morally efficient to use a large percentage of our resources to enhance the lives of those who already exist.

Bob:  This sure has been a fun conversation.  Would you like to buy a cable package from me?  We have some great deals.

Alice: NO! 

SUMMARY:

My argument is that Parfit’s Mere Addition Paradox doesn’t prove what it seems to.  The argument behind the Mere Addition Paradox is that you can make the world a better place by the “mere addition” of extra people, even if their lives are barely worth living.  In other words: "For every population, A, with a high average level of utility there exists another, better population, B, with more people and a lower average level of utility."  This supposedly leads to the Repugnant Conclusion, the belief that a world full of people whose lives are barely worth living is better than a world with a smaller population where the people lead extremely fulfilled and happy lives. 

Parfit demonstrates this by moving from world A, consisting of a population full of people with lots of resources and high average utility, to world A+.  World A+ has an additional population of people who are isolated from the original population and not even aware of the other's existence.  The extra people live lives just barely worth living.  Parfit argues that A+ is a better world than A because everyone in it has lives worth living, and the additional people aren't hurting anyone by existing because they are isolated from the original population.

Parfit then moves from World A+ to World B, where the populations are merged and share resources.  This lowers the standard of living for the original people and raises it for the newer people.  Parfit argues that B must be better than A+, because it has higher total utility and equality.  He then keeps adding people until he reaches Z, a world where everyone's lives are barely worth living and the population is vast.  He argues that this is a paradox because most people would agree that Z is not a desirable world compared to A.

I argue that the Mere Addition Paradox is a flawed argument because it does not just add people; it also adds resources.  The fact that the extra people in A+ do not harm the original people of A by existing indicates that their population must have a decent amount of resources to live on, even if the amount per person is lower than in population A.  For this reason, what the Mere Addition Paradox proves is not that you can make the world better by adding extra people, but rather that you can make it better by adding extra people and resources to support them.  I use a series of choices about purchasing cable television packages to illustrate this in concrete terms.

I further argue for a theory of population ethics that values both using resources to create lives worth living, and using resources to enhance the utility of already existing people, and considers the best sort of world to be one where neither of these two values totally dominates the other.  By this ethical standard A+ might be better than A because it has more people and resources, even if the average level of utility is lower.  However, a world with the same amount of resources as A+, but a lower population and the same or higher average utility as A, is better than A+.

The main unsatisfying thing about my argument is that while it avoids the Repugnant Conclusion in most cases, it might still lead to it, or something close to it, in situations where creating new people and getting new resources are, as one commenter noted, a “package deal.”  In other words, a situation where it is impossible to obtain new resources without creating some new people whose utility levels are below average.  However, even in this case, my argument holds that the best world of all is one where it would be possible to obtain the resources without creating new people, or creating a smaller number of people with higher utility.

In other words, the Mere Addition Paradox does not prove that: "For every population, A, with a high average level of utility there exists another, better population, B, with more people and a lower average level of utility." Instead what the Mere Addition Paradox seems to demonstrate is that: "For every population, A, with a high average level of utility there exists another, better population, B, with more people, access to more resources and a lower average level of utility."  Furthermore, my own argument demonstrates that: "For every population, B, there exists another, better population, C, which has the same access to resources as B, but a smaller population and higher average utility."

147 comments

Comments sorted by top scores.

comment by cousin_it · 2012-07-26T09:22:26.617Z · LW(p) · GW(p)

Okay, so Parfit's paradox doesn't prove that we should make more people if our resources are constant. And it doesn't prove that we should make more people when we get more resources. But it might still prove that we should agree to make more people and more resources if it's a package deal.

More concretely, if you had a button that created (or made accessible) one additional unit of resource and a million people using that resource to live lives barely worth living, would you press that button? Grabbing only the resources and skipping the people isn't on the menu of the thought experiment. It seems to me that if you would press that button, and also press the next button that redistributes all existing resources equally among existing people, then the repugnant conclusion isn't completely dead...

Replies from: shokwave, steven0461, Ghatanathoah, shminux, army1987, GLaDOS, Kaj_Sotala
comment by shokwave · 2012-07-26T19:36:11.366Z · LW(p) · GW(p)

But it might still prove that we should agree to make more people and more resources if it's a package deal.

It does, but by definition.

Let X and Y be populations. Each population has a number of people and an amount of resources. Resources are distributed evenly, so the average utility of a population, and each individual's utility, is given by resources divided by people. We will say the "standard of living", the level at which a life is 'barely worth living', is a utility of 1. And we will say that Z is reached when utility falls below the standard of living. These are our definitions.

For numbers, let's say X and Y start out with 100 people and 500 resources, giving each a utility of 5. This is good!
In X, we will perform the false method: simply adding people. In one step, we go to 105 people (utility 4.8, still good), then 110 (utility 4.5), and in 81 steps we will have reached our repugnant Z, with 505 people and 500 resources giving us a utility of 0.99.
Now in Y, we will perform the strengthened method: absorb a small population with bare-minimum living standards, thus bringing everyone down slightly. In one step, we go to 105 people and 505 resources (4.8 utility, still good), then 110 and 510 (4.6, still good), and then Z arrives ....

No, it doesn't. Utility in Y will asymptotically approach 1 from above and we will never reach Z. Thus, the repugnant conclusion is dead.

You may argue that "just barely above the absolute bare minimum" is not worth living, but you won't get very far: previously, we defined any life above the minimum standard as worth living. So if you say that, instead, 2 utility is the minimum worth living for, Y will asymptotically approach 2. And you can hardly argue that "just above 2" isn't worth living for, because you just said before that 2 is the minimum! So yes, the repugnant conclusion is truly dead.

(An analogy for this population Y is colonising new planets: the older planets will be affluent, but the frontier new colonies will be hardscrabble and just barely worth it. But this is not a repugnant conclusion! This is like Firefly, and that would be badass!)

Or you may argue that comparing our original Y to a Y++ after many steps, it's obvious that Y is better. But this won't get you far either, because in what way is Y better than Y++? If you tell me this comparison beforehand, I will no longer desire to add people when it would reverse that comparison, and if you don't tell me, well, that's unfair - it's no surprise that optimising for one criterion might abandon other criteria, especially ones it didn't know about.

Footnote:
I tried this:

b = 500.0  # resources; floats, so the division below never truncates
a = 100.0  # people
while (b / a) > 1  # average utility still above "barely worth living"
  b += 5
  a += 5
end

and it didn't terminate, thus the student became enlightened.
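
(A fuller sketch of the same experiment in Python, contrasting the false method and the strengthened method side by side:)

def average_after(people, resources, steps, add_resources):
    # add 5 people per step, optionally with 5 resources per step as well
    for _ in range(steps):
        people += 5
        if add_resources:
            resources += 5
    return resources / people

print(average_after(100, 500, 81, add_resources=False))  # X: 500/505 = 0.99, below 1: world Z
print(average_after(100, 500, 81, add_resources=True))   # Y: 905/505 = 1.79, still above 1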

Replies from: magfrump, tgb
comment by magfrump · 2012-07-29T13:44:05.777Z · LW(p) · GW(p)

But this is not a repugnant conclusion! This is like Firefly, and that would be badass!

I used almost this exact line in a discussion with my girlfriend about a week ago (talking about Everything Matters!).

comment by tgb · 2012-07-27T02:06:02.616Z · LW(p) · GW(p)

I dislike this post. I don't mean this to be a personal attack and I don't want to come off as hostile, but I do want to make my objections known. I am choosing to state my reasons in lieu of downvoting.

First, "It does, but by definition." is clearly false, otherwise you wouldn't spend 6 paragraphs explaining it. This is something of a pet peeve of mine from grading homework, but whatever, it's not important.

More importantly, it's not really addressing the problems being discussed here. The discussion is whether 100 people at 500 resources is better than your asymptotically-worthless massive population, which is something that you don't mention at all. Instead, you argue that if we have N+400 resources and N people and each person needs 1 resource to barely survive, then everyone survives when resources are evenly distributed, no matter what N you pick. Okay, but the conclusion is somehow "the repugnant conclusion is dead"? To be honest, I thought you were trying to argue in favor of the repugnant conclusion, at least in the specialized case of a universe that offers you N resources for every additional N people. But the only conclusion I see you really reaching is that a lot of people at a better-than-dead state is better than a world where there aren't people - this doesn't strike me as very exciting.

It seems fairly clear to me that one way in which Y is better than Y+ is that Y has greater average utility.

That said, I think most of my dislike for this post is caused by the tone and manner of expression. It was fairly disorganized and overly long. The tone was demeaning and combative: assuming the reader will disagree with basic premises and the use of phrases like "thus the student became enlightened". Note how the top-level post gives the opposing voice to a fictional character rather than forcing it upon the reader - this is a much friendlier approach.

Lastly, can you tell me where you bought your Halting Machine? I wouldn't mind one for myself... ;)

Replies from: shokwave
comment by shokwave · 2012-07-27T04:59:23.252Z · LW(p) · GW(p)

It seems fairly clear to me that one way in which Y is better than Y+ is that Y has greater average utility.

Yeah, on reflection the post is very unclear. I agree with the quoted sentiment, but the point I should have made was that we get to Y+ by a process that reduces average utility (redistributing resources evenly), so it doesn't seem surprising or confusing that Y has greater average utility.

comment by steven0461 · 2012-07-26T20:20:34.669Z · LW(p) · GW(p)

But it might still prove that we should agree to make more people and more resources if it's a package deal.

As in, "human resources".

Replies from: asparisi
comment by asparisi · 2012-07-27T00:35:39.960Z · LW(p) · GW(p)

Or any scenario where adding more people increases our capacity to take advantage of available resources. (such as most agricultural communities throughout history)

comment by Ghatanathoah · 2012-07-26T10:04:09.130Z · LW(p) · GW(p)

But it might still prove that we should agree to make more people and more resources if it's a package deal.

You're right, my argument does not prohibit the particular hypothetical you offered up. The one quibble I have is that I'm not sure how many resources "one unit" is, but it would have to be a sizable amount for a million people to live lives barely worth living on it.

In fact, your hypothetical is pretty much structurally identical to the cable bill hypothetical that Bob offers up. And, if you recall, Alice does not disagree that buying Package A+ would be irrational if the government really was going to give her $50 if she did it.

So I might have only killed the repugnant conclusion 99.9% dead. For now I'm content with that, I've eliminated it as a possibility from any situation that is remotely likely to happen in real life, and that's good enough for now.

As for whether I'd push the button? I probably wouldn't, even though my argument doesn't exclude it. However, I don't know if that's because there is some other moral objection to the repugnant conclusion that I haven't articulated yet, or if it's just because I can be kind of selfish sometimes.

Replies from: cousin_it
comment by cousin_it · 2012-07-26T10:34:06.706Z · LW(p) · GW(p)

I've eliminated it as a possibility from any situation that is remotely likely to happen in real life

Hmm, I can imagine situations where you can't extract the resources without adding people. For example, should humans settle a place if it can support life, but only at a low level of comfort, and exporting resources from there isn't economically viable?

Replies from: Ghatanathoah, Richard_Kennaway, None
comment by Ghatanathoah · 2012-07-26T23:39:13.183Z · LW(p) · GW(p)

It seems to me that if the settlement is done voluntarily that it must fulfill some preference that the settlers value more than comfort. Freedom, adventure, or the feeling that you're part of something bigger, to name three possibilities. For that reason their lives couldn't really be said to have lowered in quality. If it's done involuntarily my first instinct is to say that no, we shouldn't do it, although you could probably get me to say yes by introducing some extenuating circumstance, like it being the only way to prevent extinction.

Of course, this then brings up the issue of whether or not the settlers should have children who might not feel the same way they do. I'm much less sure about the morality of doing that.

Replies from: cousin_it
comment by cousin_it · 2012-07-27T10:39:28.397Z · LW(p) · GW(p)

Yes, the scenario involves adding people, not just moving them around. That's what makes population ethics tricky.

comment by Richard_Kennaway · 2012-07-26T11:18:50.744Z · LW(p) · GW(p)

Such as, for example, the Moon or Mars?

comment by [deleted] · 2012-07-26T14:03:56.849Z · LW(p) · GW(p)

I would say yes, to the extent that it reduces species ex-risk to have those extra people. (For instance, having a Mars colony as per Richard_Kennaway's example would reduce ex-risk.) However, it is possible that adding extra people in some cases might instead increase ex-risk (say, a slum outside of a city which might breed disease that spreads to the city) and in that case I might say no.

That's a separate problem with the repugnant conclusion that bothers me sometimes. It appears to be the case that at some point the average function starts greatly increasing ex-risk at a later point even though it doesn't do that at the beginning. If you are down to Muzak and Potatoes, a potato famine wipes you out.

So if you have "Potatoes, Carrots and Muzak" in Pop Y, and "Potatoes" in Pop Z, averaging it out to "Potatoes and Muzak" for everyone might increase average happiness, and Pop Z wouldn't mind, but it wouldn't be safe for Pop Y and Z together as a species, because they lose the safety of being able to come back from a potato famine.

That also seems to come with a built-in idea of what kind of averaging is acceptable and where there are limits on averaging. Taking from a richer population's status to improve a poorer population's health would be fine. Taking from a richer population's health or safety to improve a poorer population's safety would be unreasonable.

And a life where your health and safety are well guaranteed certainly sounds a hell of a lot better than "barely worth living," so it doesn't descend into repugnance.

Basically, if instead of just looking at Population and Utility, you look at Population, Utility, and Ex-risk, the problem seems to vanish. It seems to say "Yes, add" and "Yes, average" when I want it to add and average, and say "No, don't add" and "No, don't average" when I want it to not add and not average.

You could also just say "Well, Ex-Risk is part of my utility function" but that seems to lead to tricky calculation questions such as:

Approximately what is the Ex-Risk at a life barely worth living, utility wise? Presumably, less Ex-Risk would make the life more worth living, and More Ex-Risk would make the life less worth living? Is that still the case here? Can it flip the sign? Can an increase to Ex-Risk and nothing else make a life which is currently worth living not worth living?

Maybe I need to answer those questions, although I'm not sure where to start. Or maybe I just need to separate out multiplicative utility and additive utility?

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-07-26T21:51:19.465Z · LW(p) · GW(p)

That's a separate problem with the repugnant conclusion that bothers me sometimes. It appears to be the case that at some point the average function starts greatly increasing ex-risk at a later point even though it doesn't do that at the beginning. If you are down to Muzak and Potatoes, a potato famine wipes you out.

This criticism has been made before. I think the standard reply was that it may indeed be the case that we would need to have a life somewhat above the level of "barely worth living" in order to guard against the possibility that some sort of disaster would lower the quality of the people's lives to such an extent that they were no longer worth living. However, such a standard of living would likely still be low enough for the Repugnant Conclusion to remain repugnant.

comment by shminux · 2012-07-26T18:53:08.103Z · LW(p) · GW(p)

I find it repugnant to even consider creating people with lives worse than the current average. So some resources will just have to remain unused, if that's the condition.

Replies from: jkaufman, Ghatanathoah
comment by jefftk (jkaufman) · 2012-07-27T11:53:53.688Z · LW(p) · GW(p)

What do you find repugnant about it?

Replies from: shminux
comment by shminux · 2012-07-27T17:07:27.365Z · LW(p) · GW(p)

Intentionally creating people less happy than I am. Think about it from the parenting perspective. Would you want to bring unhappy children into the world (your personal happiness level being the baseline), if you could predict their happiness level with certainty?

Replies from: Vaniver, TheOtherDave, jkaufman
comment by Vaniver · 2012-07-28T05:00:21.216Z · LW(p) · GW(p)

Intentionally creating people less happy than I am.

That is, your life is the least happy life worth living? If you reflectively endorse that, we ought to have a talk on how we can make your life better.

Replies from: Benquo, shminux
comment by Benquo · 2018-08-13T18:54:15.103Z · LW(p) · GW(p)

This, in conjunction with some other stuff I've been working on, prompted me to rethink some things about my priorities in life. Thanks!

comment by shminux · 2012-07-28T18:40:38.560Z · LW(p) · GW(p)

Again, a misunderstanding. See my other reply.

Replies from: Vaniver
comment by Vaniver · 2012-07-28T23:06:37.933Z · LW(p) · GW(p)

It's not clear to me that this is a misunderstanding. I think that my life is pretty dang awesome, and I would be willing to have children that are significantly less happy than I am (though, ceteris paribus, more happiness is better). If you aren't, reaching out with friendly concern seems appropriate.

Replies from: shminux
comment by shminux · 2012-07-29T01:38:25.633Z · LW(p) · GW(p)

I would be willing to have children that are significantly less happy than I am

Remember, not "provided I already have children, I'm OK with them being significantly less happy than I am", but "Knowing for sure that my children will be significantly less happy than I am, I will still have children". May not give you pause, but probably will to most (first-world) people.

Replies from: magfrump
comment by magfrump · 2012-07-29T13:41:48.440Z · LW(p) · GW(p)

I suspect that most first-world people are significantly less happy than many happy people on LW, and that those people on LW would still be very happy to have children who were as happy as average first-worlders, though reasonably hoping to do better.

comment by TheOtherDave · 2012-07-27T17:50:10.957Z · LW(p) · GW(p)

Well... hrm.

I have evidence that if my current happiness level is the baseline, I prefer the continued existence of at least one sub-baseline-happy person (myself) to their nonexistence. That is, when I go through depressive episodes in which I am significantly less happy than I am right now, I still want to keep existing.

I suspect that generalizes, though it's really hard to have data about other people's happiness.

It seems to me that if I endorse that choice (which I think I do), I ought not reject creating a new person whom I would otherwise create, simply because their existence is sub-baseline-happy.

That said, it also seems to me that there's a level of unhappiness below which I would prefer to end my existence rather than continue my existence at that level. (I go through periods of those as well, which I get through by remembering that they are transient.) I'm much more inclined to treat that level as the baseline.

Replies from: shminux
comment by shminux · 2012-07-28T18:38:59.571Z · LW(p) · GW(p)

I prefer the continued existence of at least one sub-baseline-happy person (myself) to their nonexistence.

This does not contradict what I said. Creation != continued existence, as emphasized in the OP. There is a significant hysteresis between the two. You don't want to have children less happy than you are, but you won't kill your own unhappy children.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-07-28T19:54:56.905Z · LW(p) · GW(p)

Agreed that creation != continued existence.

There are situations under which I would kill my own unhappy children. Indeed, there are even such situations where, were they happier, I would not kill them. However, "less happy than I am" does not describe those situations.

Replies from: shminux
comment by shminux · 2012-07-28T22:05:14.301Z · LW(p) · GW(p)

Looks like we agree, then.

comment by jefftk (jkaufman) · 2012-07-27T19:58:06.386Z · LW(p) · GW(p)

Intentionally creating people less happy than I am

This probably isn't the same as "creating people with lives worse than the current average".

your personal happiness level being the baseline

Why would that be the baseline? I'm lucky enough to have a high happiness set point, but that doesn't mean I think everyone else has lives that are not worth living.

Would you want to bring unhappy children into the world?

Unhappy as in net negative for their life? No. Unhappy as in "less happy than average"? Depends what the average is, but quite possibly.

comment by Ghatanathoah · 2012-07-26T23:29:57.052Z · LW(p) · GW(p)

I've considered this possibility as well.

One argument that's occurred to me is that adding more people in A+ might actually be harming the people in population A because the people in population A would presumably prefer that there not be a bunch of desperately poor people who need their help kept forever out of reach, and adding the people in A+ violates that preference. Of course, the populations are not aware of each others' existence, but it's possible to harm someone without their knowledge, if I spread dirty rumors about someone I'd say that I harmed them even if they never find out about it.

However, I am not satisfied with this argument; it feels a little too much like a rationalization to me. It might also suggest that we ought to be careful about how we reproduce, in case it turns out that there are aliens out there somewhere living lives far more fantastic than ours.

Replies from: shminux
comment by shminux · 2012-07-27T03:19:40.850Z · LW(p) · GW(p)

Of course, the populations are not aware of each others' existence, but it's possible to harm someone without their knowledge

Instrumentally, if absolutely no interaction, not even an indirect one, is possible between the two groups, there is no way one group can harm the other.

it's possible to harm someone without their knowledge, if I spread dirty rumors about someone I'd say that I harmed them even if they never find out about it.

True, but only because rumors can harm people, so the "no interaction" rule is broken.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-07-27T07:10:32.361Z · LW(p) · GW(p)

True, but only because rumors can harm people, so the "no interaction" rule is broken.

I'm not sure about that. I don't think most people would want rumors spread about them, even if the rumors did nothing other than make some people think worse about them (but they never acted on those thoughts).

Similarly, it seems to me that someone who cheats on their spouse and is never caught has wronged their spouse, even if their spouse is never aware of the affair's existence, and the cheater doesn't spend less money or time on the spouse because of it.

Now, suppose I have a strong preference to live in a universe where innocent people are never tortured for no good reason. Now, suppose someone in some far-off place that I can never interact with tortures an innocent person for no good reason. Haven't my preferences been thwarted in some sense?

Replies from: shminux
comment by shminux · 2012-07-27T08:06:41.979Z · LW(p) · GW(p)

Now, suppose I have a strong preference to live in a universe where innocent people are never tortured for no good reason. Now, suppose someone in some far-off place that I can never interact with tortures an innocent person for no good reason. Haven't my preferences been thwarted in some sense?

How do you know it is not happening right now? Since there is no way to tell, by your assumption, you might as well assume the worst and be perpetually unhappy. I warmly recommend instrumentalism as a workable alternative.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-07-27T08:42:14.006Z · LW(p) · GW(p)

There is no need to be unhappy over situations I can't control. I know that awful things are happening in other countries that I have no control over, but I don't let that make me unhappy, even though my preferences are being perpetually thwarted by those things happening. But the fact that it doesn't make me unhappy doesn't change the fact that it's not what I'd prefer.

comment by A1987dM (army1987) · 2012-07-26T16:28:24.500Z · LW(p) · GW(p)

Indeed, I immediately thought “what's the difference between the government giving you $50 that you can only spend on cables, and it just giving you cables?”.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-07-26T21:36:26.731Z · LW(p) · GW(p)

Indeed, I immediately thought “what's the difference between the government giving you $50 that you can only spend on cables, and it just giving you cables?”.

There isn't one. The reason I phrased it that way was to help keep the link between the various steps in the thought experiment as clear as possible.

comment by GLaDOS · 2012-08-03T08:54:32.748Z · LW(p) · GW(p)

also press the next button that redistributes all existing resources equally among existing people, then the repugnant conclusion isn't completely dead...

I think a button redistributing all existing resources equally among existing people is one I'd almost certainly not press.

comment by Kaj_Sotala · 2012-07-30T10:36:15.822Z · LW(p) · GW(p)

This might be getting into semantics, but I don't think your proposed dilemma really qualifies as the RC anymore. The RC was interesting because it seemed to derive an obviously unacceptable conclusion (a world full of people whose lives are barely worth living) from premises / steps that were all individually obviously acceptable. Yours employs a step (create people whose lives are barely worth living, without getting enough extra resources to make up for it) that's already ethically ambiguous, due to clearly leading to a world with a population dominated by people whose lives are barely worth living.

Replies from: cousin_it
comment by cousin_it · 2012-07-30T11:15:47.578Z · LW(p) · GW(p)

In my argument the button could create people and resources leading to a standard of living just below the current average, like in the original RC.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-07-30T11:40:41.871Z · LW(p) · GW(p)

Point taken, though that's still a more morally ambiguous step than the equivalent in the original RC. There are already plenty of people today who think that people shouldn't have more children due to the Earth's resources being limited. That's not an exact mapping to "creating new people that gave us some small amount of extra resources", but it's close and brings to mind the same arguments.

comment by Unnamed · 2012-07-26T19:26:46.607Z · LW(p) · GW(p)

Parfit argued that this argument led to the "Repugnant Conclusion," the idea that the best sort of world is one with a large population with lives barely worth living.

So the Mere Addition Paradox doesn't prove what Parfit thought it did? We don't have any moral obligation to tile the universe with people whose lives are barely worth living?

I'm pretty sure that this is not what Parfit was arguing.

As I understand it, Parfit's Repugnant Conclusion was that, given any possible world (even one with billions of people who each have an extremely high quality of life), there is a better possible world in which everyone has a life that is barely worth living (better because the population is much larger, and "barely worth living" is better than nothing). The argument he made was that the Repugnant Conclusion followed from most theories of population ethics (that is, most attempts to define "better" in this context), but most people refused to accept it.

That does not mean that a high-population low-quality-of-life world is the best possible world; a possible world with the same high population and higher quality of life would be even better. And it does not necessarily mean that we should strive for a world with high population and low quality of life; which possible world we should strive for depends on which possible worlds are reachable from here. But it does mean accepting that a hypothetical possible World 1, where there are lots of people and everyone has a life that is barely worth living, is better than a hypothetical possible World 2, where there are fewer people (though perhaps still billions or more) and everyone has a high quality of life. Many people refuse to accept this conclusion and find it repugnant, even if it is implied by the moral theory that they endorse.

Replies from: steven0461, Ghatanathoah
comment by steven0461 · 2012-07-26T20:19:08.776Z · LW(p) · GW(p)

Exactly. The original post is straightforwardly wrong, and doesn't even do its readers the courtesy of including a one-line summary that lets them avoid having to read the whole thing. The fact that it's at +40 is a damning indictment of LessWrong's ability to tell good arguments from bad.

Replies from: torekp
comment by torekp · 2012-07-26T23:49:47.064Z · LW(p) · GW(p)

The only serious mistake I see in the original post is that it misinterprets Parfit. I agree with Unnamed that it does. But LessWrongers haven't necessarily read Parfit, and they may have seen his ideas misused to argue in the way the post criticizes, so they can't really be expected to detect the misinterpretation.

comment by Ghatanathoah · 2012-07-26T22:24:33.740Z · LW(p) · GW(p)

As I understand it, Parfit's Repugnant Conclusion was that, given any possible world (even one with billions of people who each have an extremely high quality of life), there is a better possible world in which everyone has a life that is barely worth living (better because the population is much larger, and "barely worth living" is better than nothing).

The Mere Addition Paradox was the main argument Parfit used to argue that a possible world with a larger population and a lower quality of life was necessarily better. My argument is that the MAP doesn't show this at all. I am aware that it was not the only argument Parfit used, but it was the most effective, in my opinion, so I wanted to take it on.

The argument he made was that the Repugnant Conclusion followed from most theories of population ethics (that is, most attempts to define "better" in this context), but most people refused to accept it.

It helps that I am already using a somewhat abnormal theory of population ethics. Alice and Bob elucidate it to a limited extent, but it's somewhat similar to the "variable value principle" described in Stanford's page on the subject. Basically I argue that having high total and high average utility are both valuable and that it's morally good to increase both. I use the somewhat clunkier phrases "use resources to create lives worth living" and "use resources to enhance the utility of existing people" to avoid things like Ng's Sadistic Conclusion and Parfit's Absurd Conclusion.

According to the theory I am using, possible World 1 is worse than hypothetical World 2, providing both worlds have access to the same level of resources. My solution to the Mere Addition Paradox seems to indicate that World 1 might be better than World 2 if it has access to many more resources to convert into utility. However, a world with a smaller population, higher average utility, and the same level of resources as World 1 would always be better (providing the higher average utility was obtained by spending resources enhancing existing people's utility, not by killing depressed people or something like that).

Replies from: Unnamed
comment by Unnamed · 2012-07-27T01:50:18.544Z · LW(p) · GW(p)

The Mere Addition Paradox was the main argument Parfit used to argue that a possible world with a larger population and a lower quality of life was necessarily better.

What Parfit argued is that, given any possible world, there is a better world with a larger population and a lower quality of life (according to most people's definitions of "better"). There is even a better world with a much larger population and a quality of life that is barely above zero. It sounds like you agree, but you're just noting that the higher-population, lower-quality-of-life, better world also differs in other ways; in particular, it has more resources.

At least that's how I read it when you say: "For every population, A, with a high average level of utility there exists another, better population, B, with more people, access to more resources and a lower average level of utility." To me, that sounds like you are biting the bullet and accepting the Repugnant Conclusion. You just think that the conclusion isn't so repugnant, because those worlds also differ in amount of resources.

Is the following a fair summary of your position?: When looking at the possible future worlds that are reachable from a given starting point, a barely-worth-living world will never be the best world to aim for, because there is always a better option which has higher quality of living (i.e., an option that makes better use of the resources available at the starting point).

Replies from: Ghatanathoah, Ghatanathoah
comment by Ghatanathoah · 2012-07-27T05:21:04.917Z · LW(p) · GW(p)

What Parfit argued is that, given any possible world, there is a better world with a larger population and a lower quality of life (according to most people's definitions of "better"). There is even a better world with a much larger population and a quality of life that is barely above zero. It sounds like you agree, but you're just noting that the higher-population, lower-quality-of-life, better world also differs in other ways; in particular, it has more resources.

My understanding of Parfit is that he believed the Mere Addition Paradox showed that a world that differed in no other way besides having a larger population size and a lower quality of life was better than one with a smaller population and a higher quality of life. That's why it's called the Mere Addition Paradox, because you arrive at the Paradox by adding more people, redistributing resources, and doing nothing else. That is what I understand to be the Repugnant Conclusion. What makes it especially repugnant is that it implies that people in the here and now have a duty to overpopulate the world.

You seem to have understood the Repugnant Conclusion to be the belief that there is any possible society that has a larger population and lower quality of life than another society, but is also better than that society. To avoid quibbling over which of us has an accurate understanding of the topic I'll just call my understanding of it RC1 and your understanding RC2.

I do not accept RC1. According to RC1 a world with a high population and low quality of life is better than a world that has the same amount of resources as the first world, a lower population, and a higher quality of life. I do not accept this. To me the second world is clearly better.

I might accept RC2. If I get your meaning, RC2 means that there is always a better population that is larger and with lower quality of life, but it might have to be quite a bit larger and have access to many more resources in order to be better. For instance, according to RC2 a planet of 10 billion people with lives barely worth living might not be better than a planet of 8 billion people with wonderful lives. However, a galaxy full of 10 trillion people with lives barely worth living and huge amounts of resources might be better than the planet of 8 billion people with wonderful lives.

Would you agree that I have effectively refuted RC1, even if you don't think I refuted RC2?

To me, that sounds like you are biting the bullet and accepting the Repugnant Conclusion. You just think that the conclusion isn't so repugnant, because those worlds also differ in amount of resources.

Again, I think I might accept what you think the RC means (RC2). However, I do not accept my understanding of the Repugnant Conclusion (RC1), which is that of two otherwise identical worlds, the one with the lower quality of life and larger population is better.

I think the reason my post is so heavily upvoted is that a great many members of this community have the same understanding of what the Repugnant Conclusion means as I do.

Is the following a fair summary of your position?: When looking at the possible future worlds that are reachable from a given starting point, a barely-worth-living world will never be the best world to aim for, because there is always a better option which has higher quality of living

Yes.

Replies from: Nisan
comment by Nisan · 2012-07-27T22:01:28.217Z · LW(p) · GW(p)

My understanding of Parfit is that he believed the Mere Addition Paradox showed that a world that differed in no other way besides having a larger population size and a lower quality of life was better than one with a smaller population and a higher quality of life.

No, the statement is that for any world with a sufficiently high quality of life, there is some world that differs in no other way besides having a larger population size and a lower quality of life which is better.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-07-28T06:27:01.250Z · LW(p) · GW(p)

I don't see how your phrasing is significantly different from mine. In any case, I completely disagree with that statement. I believe that for any world with a large population size and a very low quality of life there is some world that differs in no other way besides having a smaller population size and a higher quality of life which is better.

The reason I believe this is that I have a pluralist theory of population ethics that holds that a world that devotes some of its efforts to creating lives worth living and some of its efforts to improving lives that already exist is better than a world that only does the former, all other things being equal.

Replies from: Nisan
comment by Nisan · 2012-07-28T07:48:47.147Z · LW(p) · GW(p)

Note that your statement does not contradict the Mere Addition Paradox.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-07-28T08:11:12.118Z · LW(p) · GW(p)

Note that your statement does not contradict the Mere Addition Paradox.

You're right. It doesn't contradict it 100%. A world with a trillion people with lives barely worth living might still be better than a world with a thousand people with great lives. However, it could well be worse than a world with half a trillion people with great lives.

What my theory primarily deals with is finding the optimal world, the world that converts resources into utility most efficiently. I believe that a world with a moderately sized population with a high standard of living is the best world, all other things being equal.

However, you are quite correct that the Mere Addition Paradox could still apply if all things are not equal. A world with vastly more resources than the first one that converts all of its resources into building a titanic population of lives barely worth living might be better if its population is huge enough, because it might produce a greater amount of value in total, even if it is less optimal (that is, it converts resources into value less efficiently). However, a world with the same amount of resources that has a somewhat smaller population and a higher standard of living would be both better and more optimal.

So I think that my statement does contradict the Mere Addition Paradox in ceteris paribus situations, even if it doesn't in situations where all things aren't equal. And I think that's something.

Replies from: Nisan
comment by Nisan · 2012-07-28T18:10:02.560Z · LW(p) · GW(p)

No. Your statement does not contradict the Mere Addition Paradox, even in, as you say, "ceteris paribus situations". This is really a matter of first-order logic.

comment by Ghatanathoah · 2012-07-27T21:04:55.068Z · LW(p) · GW(p)

Alright, I think I found where we disagree. I am basically going to just repeat some things I just said in a reply to Thrasymachus, but that's because I think the sources of my disagreement with him are pretty much the same as the sources of my disagreement with you:

I interpreted the Repugnant Conclusion to mean that a world with a large population with lives barely worth living is the optimal world, given the various constraints placed on it. In other words, given a world with a set amount of resources, the optimal way to convert those resources to value is to create a huge population with lives barely worth living. I totally disagree with this.

You interpreted the Repugnant Conclusion to mean that a world with a huge population of lives barely worth living may be a better world, but not necessarily the optimal world. I may agree with this.

To use a metaphor imagine a 25 horsepower engine that works at 100% efficiency, generating 25 horsepower. Then imagine a 100 horsepower engine that works at 50% efficiency, generating 50 horsepower. The second engine is better at generating horsepower than the first one, but it is less optimal at generating horsepower, it does not generate it the best it possibly could.
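To make the better/optimal distinction concrete, here is a minimal sketch (illustrative Python, using only the numbers from the metaphor):

```python
# Illustrative only: "better" tracks raw output, "optimal" tracks efficiency.
engines = {
    "small": {"max_hp": 25, "efficiency": 1.00},
    "large": {"max_hp": 100, "efficiency": 0.50},
}

for name, engine in engines.items():
    output = engine["max_hp"] * engine["efficiency"]
    print(f"{name} engine: {output:.0f} hp generated at {engine['efficiency']:.0%} efficiency")

# The large engine is better at generating horsepower (50 > 25) but less
# optimal (50% < 100%): it wastes more of its potential.
```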

So, if you accept my pluralist theory of value (that places value on both creating new people, and improving the lives of existing ones), we might also say that a population Z, consisting of a galaxy full of 3 quadrillion people that uses the resources of the galaxy to give them lives barely worth living, would be better than A, a society consisting of a planet full of ten billion people that uses the planet's resources to give its inhabitants very excellent lives. However, Z would be less morally optimal than A because A uses all the resources of the planet to give people excellent lives, while Z squanders its resources creating more people. We could then say that Y, a galaxy full of 1 quadrillion people with very excellent lives, is both better than Z and more optimal than Z. We could also say that Y is better than A, and as optimal as A. However, Y might be worse (but more optimal) than a galaxy with a septillion people living lives barely worth living. Similarly, we might say that A is both more optimal than, and better than, B, a planet of 15 billion people living lives barely worth living.

The arguments I have made in the OP have been directed at the idea that a population full of lives barely worth living is the optimal population, the population that converts the resources it has into value most efficiently (assuming you accept my pluralist moral theory's definition of efficiency). You have been arguing that even if that population is the most efficient at generating value, there might be another population so much huger that it could generate more value, even if it is much less efficient at doing so. I do not see anything contradictory about those two statements. I think that I mistakenly thought you were arguing that such a society would also be more optimal.

And if that is all the Repugnant Conclusion is I fail to see what all the fuss is about. The reason it seemed so repugnant to me was that I thought it argued that a world full of people with lives barely worth living was the very best sort of world, and we should do everything we can to bring such a world about. However, you seem to imply that that isn't what it means at all. If the Mere Addition Paradox and the Repugnant Conclusion do not imply that we have a moral imperative to bring a vastly populated world about then all it is is a weird thought experiment with no bearing on how people should behave. A curiosity, nothing more.

Even if your argument is a more accurate interpretation of Parfit, I think that idea that a world full of people barely worth living is the optimal one is still a common enough idea that it merits a counterargument. And I think the reason the OP is so heavily upvoted is that many people hold the same impression of Parfit that I did.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-07-29T20:43:43.260Z · LW(p) · GW(p)

Nice dialogue!

I think that the term "barely worth living" is a terrible source of equivocation that underlies a lot of the apparent paradoxicalness. "Barely worth living" can mean that, if you're already alive and don't want to die, your life is almost but not quite horrible enough that you would rather commit suicide than endure. But if you're told that somebody like this exists, it is sad news that you want to hear as little as possible. You may not want to kill them, but you also wouldn't have that child if you were told that was what your child's life would be like. What Parfit postulates should be called, to avoid equivocation, "A life barely worth celebrating" - it's good news and you say "Yay!" but very softly. I'd even argue that this should be a universal standard for all discussions of the Repugnant Conclusion.

Replies from: private_messaging, Kaj_Sotala, Ghatanathoah, army1987
comment by private_messaging · 2012-08-02T06:11:34.648Z · LW(p) · GW(p)

I think 'barely worth living' is universally applicable. Anyone's life can be seen as 'barely worth living' by a sufficiently advanced spoiled child. E.g. we would see all cavemen's lives as 'barely worth living', all while those guys say, "ohh, the hunting's been great this year."

comment by Kaj_Sotala · 2012-07-30T10:52:19.131Z · LW(p) · GW(p)

Reading your comment (and others in this vein) and realizing that the RC isn't as bad as I'd thought it was, and therefore doesn't show human morals to be so inconsistent as I'd thought them to be, makes me update towards human morals in general maybe not being so inconsistent at all. (At least within an individual; not so much between cultures.)

comment by Ghatanathoah · 2012-07-30T07:18:58.023Z · LW(p) · GW(p)

What Parfit postulates should be called, to avoid equivocation, "A life barely worth celebrating" - it's good news and you say "Yay!" but very softly. I'd even argue that this should be a universal standard for all discussions of the Repugnant Conclusion.

Excellent point. I'll try to remember to do that if I end up discussing this again.

Replies from: shminux
comment by shminux · 2012-07-30T07:34:57.919Z · LW(p) · GW(p)

"barely worth creating" is probably a less ambiguous term.

comment by A1987dM (army1987) · 2012-07-30T10:09:09.318Z · LW(p) · GW(p)

Yes, I had thought about setting the zero of the function to be summed across individuals to a higher level than “just barely good enough for them not to want to die”. The problem with that is that then there would be people who don't want to die but still have a negative utility, and even a total utilitarian would conclude they had better die (at least in “dry water” models when you neglect the grief of their friends and family, and the cessation of the externalities of their life).

Edit: It looks like “dry water” has acquired a meaning totally unrelated to the one I had in mind. (It was the derogatory term John von Neumann used to refer to models of fluids without viscosity, whose properties are very different from those of real fluids.)

comment by Thrasymachus · 2012-07-27T01:47:21.827Z · LW(p) · GW(p)

I agree with Unnamed that this post misunderstands Parfit's argument by tying it to empirical claims about resources that have no relevance.

Just imagine God is offering you choices between different universes with inhabitants of the stipulated level of wellbeing: he offers you A, then offers you to take A+, then B, then B+, etc. If you are interested in maximizing aggregate value you'll happily go along with each step to Z (indeed, if you are offered all the worlds from A to Z at once, an aggregate maximizer will go straight for Z). This is what the repugnant conclusion is all about: it has nothing whatsoever to do with whether or not Z (or the 'mechanism' of mere addition to get from A to Z) is feasible under resource constraint, but that if this were possible, maximizing aggregate value obliges us to accept this repugnant conclusion. I don't want to be mean, but this is a really basic error.

The OP offers something much better when offering a pluralist view to try and get out of the mere addition paradox by saying we should have a separate term in our utility function for average level of well-being (further, an average over currently existing people), and that will stop us reaching the repugnant conclusion. However, it only delays the inevitable. Given the 'average term' doesn't dominate (i.e. isn't lexically prior to) the total utility term, there will be acceptable deals this average/total pluralist should accept where we lose some average but gain more than enough total utility to make up for it. Indeed, for a person affecting view we can make it so that the 'original' set of people in A get even better:

A : 10 people at wellbeing 10
A+: 10 people at wellbeing 20 & 1 million at wellbeing 9.5
B: 1 million and ten people at wellbeing 9.8.

A to A+ and A+ to B increase total utility. Moving from A to A+ is a drop in average utility by a bit under 0.5 points, but multiplies the total utility by around 100 000, and all the people in A have their utility doubled. So it seems a pluralist average/total view should accept these moves, and so we're off to the repugnant conclusion again (and if they don't, we can make even stronger examples, like 10^10 new people in A with wellbeing 9.99 where everyone originally in A gets 1 million utility, etc.)
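To make the arithmetic explicit, a quick sketch (illustrative Python; the populations and wellbeing levels are exactly those stipulated above):

```python
# Worlds from the example above: lists of (population, wellbeing) groups.
worlds = {
    "A":  [(10, 10)],
    "A+": [(10, 20), (1_000_000, 9.5)],
    "B":  [(1_000_010, 9.8)],
}

for name, groups in worlds.items():
    total = sum(n * w for n, w in groups)
    population = sum(n for n, w in groups)
    print(f"{name}: total = {total:,.0f}, average = {total / population:.4f}")

# A:  total = 100,       average = 10.0000
# A+: total = 9,500,200, average = 9.5001 (total up ~95,000x, average down ~0.5)
# B:  total = 9,800,098, average = 9.8000 (both total and average rise from A+)
```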

Aside 1: Person affecting views (caring about people who 'already' exist) can get you out of the repugnant conclusion, but have their own costs: intransitivity. If you only care about people who exist, then A -> A+ is permissible (no one is harmed), A+ -> B is permissible (because we are redistributing wellbeing among people who already exist), but A -> B is not permissible. You can also set up cycles whereby A>B>C>A.

Aside 2: I second the sentiment that the masses of upvotes this post has received reflects poorly on the LW collective philosophical acumen ('masses', relatively speaking: I don't think this post deserves a really negative score, but I don't think a post that has such a big error in it should be this positive, still less be exhorted to be 'on the front page'). I'm currently writing a paper on population ethics (although I'm by no means an expert on the field), but seeing this post get so many upvotes despite the fatal misunderstanding of plausibly the most widely discussed population ethics case signals you guys don't really understand the basics. This undermines the not-uncommon LW trope that analytic philosophy is not 'on the same level' as bona fide LW rationality, and makes me more likely to account for variance between LW and the 'mainstream view' on ethics, philosophy of mind, quantum mechanics (or, indeed, decision theory or AI) as LWers being on the wrong side of the Dunning-Kruger effect.

Replies from: Ghatanathoah, Ghatanathoah
comment by Ghatanathoah · 2012-11-08T06:37:15.829Z · LW(p) · GW(p)

Indeed, for a person affecting view we can make it so that the 'original' set of people in A get even better:

A : 10 people at wellbeing 10
A+: 10 people at wellbeing 20 & 1 million at wellbeing 9.5
B: 1 million and ten people at wellbeing 9.8.

A to A+ and A+ to B increase total utility. Moving from A to A+ is a drop in average utility by a bit under 0.5 points, but multiplies the total utility by around 100 000, and all the people in A have their utility doubled. So it seems a pluralist average/total view should accept these moves, and so we're off to the repugnant conclusion again (and if they don't, we can make even stronger examples, like 10^10 new people in A with wellbeing 9.99 where everyone originally in A gets 1 million utility, etc.)

I've been thinking about this argument (which is formally called the Benign Addition Paradox) for a few months, and I'm no longer sure it holds up. I began to think about whether I would support doing such a thing in real life. For instance, I wondered if I would push a button that would create a bunch of people who are forced to be my slaves for a couple days per week, but are freed for just long enough each week that their lives could be said to be worthwhile. I realized that I would not.

Why? Because if I created those people with lower utility than me, I would immediately possess an obligation to free them and then transfer some of my utility to them, which would reduce my level of utility. So, if we adopt a person-affecting view, we can adopt the following rule: Adding new people to the world is worse if the addition makes existing people worse off, or confers upon the existing people a moral obligation to take an action that will make them worse off.

So A+ is worse than A because the people who previously existed in A have a moral duty to transfer some of their utility to the new people who were added. They have a duty to convert A+ into B, which would harm them.

Now, you might immediately bring up Parfit's classic argument where the new people are geographically separated from the existing people, and therefore incapable of being helped. In that case, hasn't the addition become harmless, since the existing people are physically incapable of fulfilling the moral obligation they have? No, I would argue. It seems to me that a world where a person has a moral obligation and is prevented from fulfilling it is worse than one where they have one and are capable of fulfilling it.

I think that the geographic separation argument seems plausible because it contaminates what is an essentially consequentialist argument with virtue ethics. The geographic separation is no one's fault, no one chose to cause it, so it seems like it's morally benign. Imagine, instead, that you had the option of pushing a button that would have two effects:

1) It would create a new group of people who would be your slaves for a few days each week, but be free long enough that their life could be said to be barely worthwhile.

2) It would create an invincible, unstoppable AI that will thwart any attempt to equalize utility between the new people and existing people. It will even thwart an attempt by you to equalize utility if you change your mind.

I don't know about you, but I sure as hell wouldn't push that button, even though it does not differ from the geographic separation argument in any important way.

Of course, this argument does create some weird implications. For instance, it implies that there might be some aliens out there with a much higher standard of living than we have, and we are inadvertently harming them by reproducing. However, it's possible that the reason this seems so counterintuitive is that when contemplating it we are mapping it to the real world, not the simplified world we have been using to make our arguments so far. In the real world we can raise the following practical objections:

1) We do not currently live in a world where the distribution of utility is Pareto efficient. In the various addition paradox arguments it is assumed to be, but that is a simplifying assumption that does not reflect the real world. Generally when we create a new person in this day and age we increase utility, both by creating new family members and friends for people, and by allowing greater division of labor to grow the economy. So adding new people might actually help the aliens by reducing their moral obligation.

2) We already exist, and stopping people from having children generally harms them. So even if the aliens would be better off if we had never existed, now that we exist our desire to reproduce has to be taken into account.

3) If we ever actually meet the aliens, it seems likely that through mutual trade we could make each other both better off.

Of course, as I said before, these are all practical objections that don't affect the principle of the thing. If the whole "possibly harming distant aliens by reproducing" thing still seems too counterintuitive to you, you could reject the person-affecting principle, either in favor of an impersonal type of morality, or in favor of some sort of pluralist ethics that takes both impersonal and person-affecting morality into account.

You've been one of my best critics in this, so please let me know if you think I'm onto something, or if I'm totally off-base.

Aside: Another objection to the Benign Addition Paradox I've come up with goes like this.
A: 10 human beings at wellbeing 10.

A+: 10 human beings at wellbeing 50 & 1 million sadistic demon-creatures at wellbeing 11. The demon-creatures derive 9 wellbeing each from torturing humans or watching humans being tortured.

B: 10 human beings at wellbeing -10,000 (from being tortured by demons) & 1 million sadistic demon creatures at wellbeing 20 (9 of which they get from torturing the 10 humans).

All these moves raise total utility and average utility, and the first transition benefits all the persons involved (the sketch below makes the arithmetic explicit), yet B seems obviously worse than A. The most obvious solutions I could think of were:

1) The "conferring a moral obligation on someone harms them" argument I already elucidated.

2) Not counting any utility derived from sadism towards the total.

I'm interested in what you think.
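For concreteness, a quick check of the arithmetic in the demon-creature example (illustrative Python; numbers as stipulated above):

```python
# Demon-creature example above: (population, wellbeing) per group.
worlds = {
    "A":  {"humans": (10, 10)},
    "A+": {"humans": (10, 50), "demons": (1_000_000, 11)},
    "B":  {"humans": (10, -10_000), "demons": (1_000_000, 20)},
}

for name, groups in worlds.items():
    total = sum(n * w for n, w in groups.values())
    population = sum(n for n, w in groups.values())
    print(f"{name}: total = {total:,.0f}, average = {total / population:,.2f}")

# A:  total = 100,        average = 10.00
# A+: total = 11,000,500, average ~ 11.00 (everyone existing gains)
# B:  total = 19,900,000, average ~ 19.90 (total and average rise again,
#     but the ten humans fall from 50 to -10,000)
```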

Replies from: Alicorn, Thrasymachus
comment by Alicorn · 2012-11-08T07:18:33.823Z · LW(p) · GW(p)

a person has a moral obligation and is prevented from fulfilling it

It is traditionally held in ethics that "ought implies can" - that is, that you don't have to do any things that you cannot in fact do.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-11-08T08:24:46.510Z · LW(p) · GW(p)

That is true, but I think that the discrepancy arises from me foolishly using a deontologically-loaded word like "obligation" in a consequentialist discussion.

I'll try to recast the language in a more consequentialist style: Instead of saying that, from a person-affecting perspective: "Adding new people to the world is worse if the addition makes existing people worse off, or confers upon the existing people a moral obligation to take an action that will make them worse off."

We can instead say: "An action that adds new people to the world, from a person-affecting perspective, makes the world a worse place if, after the action is taken, the world would be made a better place if all the previously existing people did something that harmed them."

Instead of saying: "It seems to me that a world where a person has a moral obligation and is prevented from fulfilling it is worse than one where they have one and are capable of fulfilling it."

We can instead say: "It seems to me that a world where it is physically impossible for someone to undertake an action that would improve it is worse than one where it is physically possible for someone to undertake that action."

If you accept these premises then A+ is worse than A, from a person-affecting perspective anyway. I don't think that the second premise is at all controversial, but the first one might be.

I also invite you to consider a variation of the Invincible Slaver AI variant of the problem I described. Suppose you had a choice between 1. Creating the slaves and the Invincible Slaver AI & 2. Doing nothing. You do not get the choice to create only the slaves; it's a package deal, slaves and Slaver AI or nothing at all. Would you do it? I know I wouldn't.

comment by Thrasymachus · 2012-11-11T10:03:11.745Z · LW(p) · GW(p)

Don't have as much time as I would like, but short and (not particularly) sweet:

I think there is a mix up between evaluative and normative concerns here. We could say that the repugnant conclusion world is evaluated as better than the current world, but some fact about how we get there (via benign addition or similar) is normatively unacceptable. But even then that seems a big bullet to bite - most of us think the RC world is worse than a smaller population with high happiness (even if lower in aggregate), not that it is better but that it would be immoral for us to get there.

Another way of parsing your remarks is to say that when the 'levelling' option is available to us, benign addition is no longer better than leaving things as they are by person-affecting lights. So B < A, and if we know we can move from A+ --> B, A+ < A as well. This has the unfortunate side-effect of violating irrelevance of independent alternatives (if only A and A+ are on offer, we should say A+ > A, but once we introduce B, A > A+). Maybe that isn't too big a bullet to bite, but (lexically prior) person affecting restrictions tend to lead to funky problems where we rule out seemingly great deals (e.g. a trillion blissful lives for the cost of a pinprick). That said, everything in population ethics has nasty conclusions...

However, I don't buy the idea that we can rule out benign addition because the addition of a moral obligation harms someone independently of the drop in utility they take for fulfilling it. It seems plausible that a fulfilled moral obligation makes the world a better place. There seem to be weird consequences if you take this to be lexically prior to other concerns for benign addition: on the face of it, this suggests we should say it is wrong for people in the developing world to have children (as they impose further obligations on affluent westerners), or indeed, depending on the redistribution ethic you take, for everyone who isn't the most well-off person. Even if you don't and say it is outweighed by other concerns, this still seems to be misdiagnosing what should be morally salient here - there isn't even a pro tanto concern for poor parents not to have children because they'd impose further obligations on richer folks to help.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-11-11T11:20:30.067Z · LW(p) · GW(p)

I think there is a mix up between evaluative and normative concerns here.

That's right, my new argument doesn't avoid the RC for questions like "if two populations were to spontaneously appear at exactly the same time which would be better?"

Another way of parsing your remarks is to say that when the 'levelling' option is available to us, benign addition is no longer better than leaving things as they are by person-affecting lights. So B < A, and if we know we can move from A+ --> B, A+ < A as well.

What I'm actually arguing is that A+ is [person-affecting] worse than A, even when B is unavailable. This is due to following the axiom of transitivity backwards instead of forwards. If A>B and A+<B then A+<A. If I was to give a more concrete reason for why A+<A I would say that the fact that the A people are unaware that the + people exist is irrelevant, they are still harmed. This is not without precedent in ethics, most people think that a person who has an affair harms their spouse, even if their spouse never finds out.

However, after reading this essay by Eliezer (after I wrote my November 8th comment), I am beginning to think the intransitivity that the person affecting view seems to create in the Benign Addition Paradox is an illusion. "Is B better than A?" and "Is B better than A+?" are not the same question if you adopt a person affecting view, because the persons being affected are different in each question. If you ask two different questions you shouldn't expect transitive answers.

There seem to be weird consequences if you take this to be lexically prior to other concerns for benign addition: on the face of it, this suggests we should say it is wrong for people in the developing world to have children (as they impose further obligations on affluent westerners

I know, the example I gave was that we all might be harming unimaginably affluent aliens by reproducing. I think you are right that even taking the objections to it that I gave into account, it's a pretty weird conclusion.

there isn't even a pro tanto concern for poor parents not to have children because they'd impose further obligations on richer folks to help.

I don't know, I've heard people complain about poor people reproducing and increasing the burden on the welfare system before. Most of the time I find these complainers repulsive, I think their complaints are motivated by ugly, mean-spirited snobbery and status signalling, rather than genuine ethical concerns. But I suspect that a tiny minority of the complainers might have been complaining out of genuine concern that they were being harmed.

Maybe that isn't too big a bullet to bite, but (lexically prior) person affecting restrictions tend to lead to funky problems where we rule out seemingly great deals (e.g. a trillion blissful lives for the cost of a pinprick).

Again, I agree. My main point in making this argument was to try to demonstrate that a pure person-affecting viewpoint could be saved from the benign addition paradox. I think that even if I succeeded in that, the other weird conclusions I drew (i.e., we might be hurting super-rich aliens by reproducing) demonstrate that a pure person-affecting view is not morally tenable. I suspect the best solution might be to develop some pluralist synthesis of person-affecting and objective views.

Replies from: Thrasymachus
comment by Thrasymachus · 2012-11-11T18:52:53.982Z · LW(p) · GW(p)

It seems weird to say A+ < A on a person affecting view even when B is unavailable, in virtue of the fact that A now labours under an (unknown to them, and impossible to fulfil) moral obligation to improve the lives of the additional persons. Why stop there? We seem to suffer infinite harm by failing to bring into existence people we stipulate would have positive lives but necessarily cannot exist. The fact that these (unknown to them, impossible to fulfil) obligations are non-local also leads to alien-y reductios. Further, we generally do not want to say impossible-to-fulfil obligations really obtain, and furthermore that being subject to them harms us - why believe that?

Intransitivity

I didn't find the Eliezer essay enlightening, but it is orthodox to say that evaluation should have transitive answers ("is A better than A+, is B better than A+?"), and most person affecting views have big problems with transitivity: consider this example.

World 1: A = 2, B = 1
World 2: B = 2, C = 1
World 3: C = 2, A = 1

By a simple person affecting view, W1>W2, W2>W3, W3>W1. So we have an intransitive cycle. (There are attempts to dodge this via comparative harm views etc., but ignore that).
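To make the cycle concrete, a minimal sketch (illustrative Python; the scoring rule is just one naive way to cash out 'simple person affecting view', not a claim about any particular theory):

```python
# Illustrative sketch of the cycle. Scoring rule (one naive reading of a
# "simple person affecting view"): a move is judged by the net welfare
# change of the people in the starting world; someone who vanishes loses
# all their welfare, and newly created people are ignored.

worlds = {
    "W1": {"A": 2, "B": 1},
    "W2": {"B": 2, "C": 1},
    "W3": {"C": 2, "A": 1},
}

def person_affecting_gain(start, end):
    """Net welfare change for the people who exist in `start`."""
    return sum(end.get(person, 0) - welfare for person, welfare in start.items())

for a, b in [("W1", "W2"), ("W2", "W3"), ("W3", "W1")]:
    gain = person_affecting_gain(worlds[a], worlds[b])
    verdict = ">" if gain < 0 else "<"  # negative gain: the move is bad, so a > b
    print(f"{a} {verdict} {b}  (net change for {a}'s people: {gain})")

# Prints W1 > W2, W2 > W3, W3 > W1: an intransitive cycle.
```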

One way person affecting views can avoid normative intransitivity (which seems really bad) is to give normative principles that set how you pick available worlds. So once you are in a given world (say A), you can say that no option is acceptable that leads to anyone in that world ending up worse off. So once one knows there is a path to B via A+, taking the first step to A+ is unacceptable, but it would be okay if no A+ to B option was available. This violates irrelevance of independent alternatives and leads to path dependency, but that isn't such a big bullet to bite (you retain within-choice ordering).

Synthesis

I doubt there is going to be any available synthesis between person affecting and total views that will get out of trouble. One can get the RC so long as the 'total term' has some weight relative to (i.e. is not lexically inferior to) person-affecting wellbeing, because we can just offer massive increases in impersonal welfare that outweigh the person-affecting harm. Conversely, we can keep intransitivity and other costly consequences with a mixed (non-lexically prior) view - indeed, we can downwardly Dutch book someone by picking our people with care to get pairwise comparisons, e.g.

W1: A=10, B=5
W2: B=6, C=2
W3: C=3, A=1
W4: A=2, B=1

Even if you almost entirely value impersonal harm and put a tiny weight on person affecting harm, we can make sure we only reduce total welfare very slightly between each world so it can be made up for by person affecting benefit. It seems the worst of both worlds. I find accepting the total view (and the RC) the best out.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-11-12T07:04:51.434Z · LW(p) · GW(p)

most person affecting views have big problems with transitivity

That is because I don't think the person affecting view asks the same question each time (that was the point of Eliezer's essay). The person-affecting view doesn't ask "Which society is better, in some abstract sense?" It asks "Does transitioning from one society to the other harm the collective self-interest of the people in the original society?" That's obviously going to result in intransitivity.

I doubt there is going to be any available synthesis between person affecting and total views that will get out of trouble.....Conversely, we can keep intransitivity and other costly consequences with a mixed (non-lexically prior) view - indeed, we can downwardly Dutch book someone by picking our people with care to get pairwise comparisons, e.g.

I think I might have been conflating the "person affecting view" with the "prior existence" view. The prior existence view, from what I understand, takes the interests of future people into account, but reserves present people the right to veto their existence if it seriously harms their current interest. So it is immoral for existing people to create someone with low utility and then refuse to help or share with them because it would harm their self-interest, but it is moral [at least in most cases] for them to refuse to create someone whose existence harms their self-interest.

Basically, I find it unacceptable for ethics to conclude something like "It is a net moral good to kill a person destined to live a very worthwhile life and replace them with another person destined to live a slightly more worthwhile life." This seems obviously immoral to me. It seems obvious that a world where that person is never killed and lives their life is better than one where they were killed and replaced (although one where they were never born and the person with the better life was born instead would obviously be best of all).

On the other hand, as you pointed out before, it seems trivially right to give one existing person a pinprick on the finger in order to create a trillion blissful lives who do not harm existing people in any other way.

I think the best way to reconcile these two intuitions is to develop a pluralist system where prior-existence concerns have much, much, much larger weight than total concerns, but not infinitely large weight. In more concrete terms, it's wrong to kill someone and replace them with one slightly better off person, but it could be right to kill someone and replace them with a quadrillion people who lead blissful lives.

This doesn't completely avoid the RC of course. But I think that I can accept that. The thing I found particularly repugnant about the RC was that an RC-type world is the best practicable world, i.e., the best possible world that can ever be created given the various constraints its inhabitants face. That's what I want to avoid, and I think the various pluralist ideas I've introduced successfully do so.

You are right to point out that my pluralist ideas do not avoid the RC for a sufficiently huge world. However, I can accept that. As long as an RC world is never the one we should be aiming for I think I can accept it.

comment by Ghatanathoah · 2012-07-27T08:35:23.682Z · LW(p) · GW(p)

I agree with Unnamed that this post misunderstands Parfit's argument by tying it to empirical claims about resources that have no relevance.

My argument was against the Mere Addition Paradox, which works by progressively adding more and more people, and the common belief that one of the implications of the MAP is that we have a moral duty to devote all our resources to creating extremely large numbers of people.

My main goal is to integrate the common intuition that A+ is better than A with the intuition that creating a vast number of people with low quality of life is bad. Parfit supports the intuition that A+ is better than A by pointing out that the extra people are not doing the inhabitants of A any harm by existing. I point out that the reason this is true is that the extra inhabitants come with their own resources, and that a society with those extra resources, but fewer people (A++), would be even better.

Just imagine God is offering you choices between different universes with inhabitants of the stipulated level of wellbeing: he offers you A, then offers you to take A+, then B, then B+, etc.

If each world had the same amount of resources then I'd choose A; it's the most efficient one at converting resources into overall value.

My understanding of Parfit's point is that it lets you argue that, all other things being equal, a huge population with low quality of life is better than a small one with high quality of life. This is what I am trying to refute. Like Unnamed, you don't seem to think this is necessarily what the MAP implies.

This is what the repugnant conclusion is all about: it has nothing whatsoever to do with whether or not Z (or the 'mechanism' of mere addition to get from A to Z) is feasible under resource constraint, but that if this were possible, maximizing aggregate value obliges us to accept this repugnant conclusion. I don't want to be mean, but this is a really basic error.

Again, my main point in writing this was to attack the chain of logic that leads from the intuition that adding a few people to A will do no harm to the Repugnant Conclusion. In other words, to attack the paradoxical nature of the MAP. I am aware that there are other arguments for the RC that require other responses, such as the one about maximizing aggregate utility. Would you buy those cable packages if the government wasn't forcing you to?

Perhaps I should have started with the pluralist values, since they were sort of the underpinning of my argument. I am basically advocating a system where creating new lives worth living, improving the utility of those who exist, and possibly other values such as equality, all contribute to Overall Value. However, they have diminishing returns relative to each other (if saying that the value of creating a life worth living changes gives you the creeps, just keep the value of doing that constant and change the value of the others, it's essentially the same). I'm not sure if increasing total utility should be a contributing value on its own, or if it is just a side-effect of increasing both the number of lives worth living and the average utility simultaneously.

So the more lives worth living you have, the greater the contribution that enhancing the utility of existing lives makes to overall value. For instance, in a very small population using resources to create a life worth living might contribute 1 Overall Value Point (OVP) while using those same resources to improve existing lives might only produce 0.5 OVPs. However, as the population grows larger, improving existing lives generates more and more OVPs, while the value of creating new lives worth living shrinks or remains constant.*

So maybe, if you added a vast number of lives worth living to a world you could generate the same amount of OVP that you could by increasing the average utility just a little. But it would be a fantastically inefficient way to generate OVP. A world where some of the resources used to sustain all those lives were instead used to enhance the lives of those who already exist would be a world with vastly more overall value.
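A toy version of such a value function, to make the shape of the claim concrete (illustrative Python; the logarithmic form and the weights are assumptions for illustration, not anything specified above):

```python
import math

# Toy "Overall Value" function, purely illustrative: the logarithmic shapes
# and the weights are assumptions chosen to exhibit the claimed behaviour
# (diminishing returns to added lives, growing returns to improving lives),
# not anything specified in the comment above.

W_LIVES, W_QUALITY = 1.0, 1.0  # hypothetical weights on the two values

def overall_value(population, avg_quality):
    lives_term = W_LIVES * math.log1p(population)
    quality_term = W_QUALITY * avg_quality * math.log1p(population)
    return lives_term + quality_term

for n in (10, 1_000, 1_000_000):
    one_more_life = overall_value(n + 1, 5.0) - overall_value(n, 5.0)
    better_lives = overall_value(n, 5.1) - overall_value(n, 5.0)
    print(f"N={n:>9,}: OVP from one extra life = {one_more_life:.6f}, "
          f"from +0.1 average quality = {better_lives:.4f}")

# As N grows, the first number shrinks toward zero while the second grows.
```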

Given the 'average term' doesn't dominate (or is lexically prior to) the total utility term, there will be acceptable deals this average total pluralist should accept where we lose some average but gain more than enough total utility to make up for it.

Is this any different from Zeno's paradoxes of motion? I.e. you're basically saying that there is no point where the changes are big enough to become undesirable, so eventually we'll get to a point that everyone agrees is undesirable. How is that any different from saying Achilles will never catch the tortoise?

*I imagine that actually the values might also change relative to the resources available. Having 8 billion lives worth living on one planet seems like a good amount, but having just 8 billion lives worth living in a whole galaxy seems like a waste.

Replies from: Thrasymachus
comment by Thrasymachus · 2012-07-27T10:38:20.712Z · LW(p) · GW(p)

1) I don't think anyone in the entire population ethics literature reads Parfit as you do: the moral problem is not one of feasibility via resource constraint, but rather just that Z is a morally preferable state of affairs to A, even if it is not feasible. Again, the paradoxical nature of the MAP is not harmed even if it demands something utterly infeasible or even nomologically impossible; the point is that were we able to actualize Z, we should do it.

Regardless, I don't see how the 'resource constraint complaint' you make would trouble the reading of Parfit you make. Parfit could just stipulate that the 'gain' in resources required from A to A+ is just an efficiency gain, and so A -> Z (or A -> B, A -> Z) does not involve any increase in consumption. Or we could stipulate that the original population in A, although giving up some resources, is made happier by knowing there is this second group of people, etc. So it hardly seems necessarily the case that A to A+ demands increased consumption. Denying these alternatives looks like fighting the hypothetical.

2) I think the pluralist point stands independently of the resource constraint complaint. But you seem to imply that in fact you value efficient resource consumption independently: you prefer A because it is a more efficient use of resources, you note there might be diminishing returns to the value of 'added lives' so adding lives becomes a merely inefficient way of adding value, etc. Yet I don't think we should care about efficiency save as an instrument for getting value. All things equal a world with 50 utils burning 2 million utils is better than one with 10 utils burning 10. So (again) objections to feasibility or efficiency shouldn't harm the MAP route to the repugnant conclusion.

3) I take it your hope for escaping the MAP is that some sort of weighted sum or combination of total utility, the utility of those who already exist, and possibly average utility of lives will get us our 'total value'. However, unless you hold that the 'average term' or the 'person affecting' term is lexically prior to utility (so no amount of utility can compensate for a drop in either), you are still susceptible to a variant of the MAP I gave above:

A : 10 people at wellbeing 10
A+: 10 people at wellbeing 20 & 1 million at wellbeing 9.5
B: 1 million and ten people at wellbeing 9.8.

So the A to A+ move has a small drop in average but a massive gain in utility, and persons already existing gain a boost in their wellbeing (and I can twist the dials even more astronomically). So if we can add these people, redistributing between them such that total value and equality increase seems plausible. And so we're off to the races. It might be the case that each move demands arbitrarily massive (and inefficient) use of resources to actualize - but, again, this is irrelevant to a moral paradox. The only way the diminishing marginal returns point would help avoid the MAP is if they were asymptotic to some upper bound. However, cashing things out that way looks implausible, and is also vulnerable to intransitivity.

I don't see the similarity to Zeno's paradoxes of motion - or, at least, I don't see how this variant is more similar to Zeno than the original MAP is. Each step from A to A+ to B ... to Z, either originally or in my variant to make life difficult for your view, is a step that increases total value. Given transitivity, Z will be better than A. If you think this is unacceptably Zeno-like, then you could just make that complaint to the MAP simpliciter (although, FWIW, I think there are sufficient disanalogies, as Zeno only works by taking each 'case' asymptotically closer to the singularity where Tortoise and Achilles meet; by contrast the MAP is expanding across relevant metrics, so it seems more analogous to a Zeno case where Achilles is ahead of the Tortoise).

Replies from: Ghatanathoah, Ghatanathoah
comment by Ghatanathoah · 2012-07-27T21:03:20.348Z · LW(p) · GW(p)

I don't think anyone in the entire population ethics literature reads Parfit as you do: the moral problem is not one of feasibility via resource constraint, but rather just that Z is a morally preferable state of affairs to A, even if it is not feasible.

The view I am criticizing is not that Z may be preferable to A, under some circumstances. It is the view that if the only ways Z and A differ is that Z has a higher population, and lower quality of life, then Z is preferable to A. This may not be how Parfit is correctly interpreted, but it is a common enough interpretation that I think it needs to be attacked.

Again, the paradoxical nature of the MAP is not harmed even if it demands something utterly infeasible or even nomologically impossible; the point is that were we able to actualize Z, we should do it.

Again, my complaint with the paradox is not that, if Z and A are our only choices, A is preferable to Z. Rather, my complaint is with the interpretation that if we were given some other alternative Y, which has a much larger population than A, but a smaller population and higher quality of life than Z, Z would be preferable to Y as well.

All things equal a world with 50 utils burning 2 million utils is better than one with 10 utils burning 10. So (again) objections to feasibility or efficiency shouldn't harm the MAP route to the repugnant conclusion.

Again, I admitted that my solution might allow a MAP route to the repugnant conclusion under some instances like the one you describe. My main argument is that under circumstances where our choices are not constrained in such a manner, it is better to pick a society with a higher quality of life and lower population.

So the A to A+ move has a small drop in average but a massive gain in utility, and persons already existing gain a boost in their wellbeing (and I can twist the dials even more astronomically). So if we can add these people, redistributing between them such that total value and equality increase seems plausible. And so we're off to the races. It might be the case that each move demands arbitrarily massive (and inefficient) use of resources to actualize - but, again, this is irrelevant to a moral paradox.

Again, my objection is not that going this route is the best choice if it is the only choice we are allowed. My objection is to people who interpret Parfit to mean that even under circumstances where we are not in such a hypothetical and have more options to choose from, we should still choose the world with lives barely worth living (e.g. Robin Hanson). Again, those people may be interpreting Parfit incorrectly, which in turn makes my criticism seem like an incorrect interpretation of Parfit. But I think it is a common enough view that it deserves criticism.

In light of your and Unnamed's comments I have edited my post and added an explanatory paragraph at the beginning, which says:

"EDIT: To make this clearer, the interpretation of the Mere Addition Paradox this post is intended to criticize is the belief that two societies that differ in no way other than that one has a higher population and lower quality of life than the other, that that society is necessarily better than the one with the lower population and higher quality of life. Several commenters have argued that this is not a correct interpretation of the Mere Addition Paradox. They seem to claim that a more correct interpretation is that a sufficiently large population with a lower quality of life is better than a smaller one with a higher quality of life, but that it may need to differ in other ways (such as access to resources) to be truly better. They may be right, but I think that it is still a common enough interpretation that it needs attacking. The main practical difference between the interpretation that I am attacking and the interpretation they hold is that the former confers a moral obligation to create as many people as possible, regardless of its effects on quality of life, but the later does not."

Let me know if that deals sufficiently with your objections.

Replies from: Michael_Sullivan
comment by Michael_Sullivan · 2012-07-28T05:44:29.660Z · LW(p) · GW(p)

" It is the view that if the only ways Z and A differ is that Z has a higher population, and lower quality of life, then Z is preferable to A. This may not be how Parfit is correctly interpreted, but it is a common enough interpretation that I think it needs to be attacked."

Generally it's a good idea to think twice and reread before assuming that a published and frequently cited paper is saying something so obviously stupid.

Your edit doesn't help much at all. You talk about what others "seem to claim", but the argument that you have claimed Parfit is making is so obviously nonsensical that it would lead me to wonder why anyone cites his paper at all, or why any philosophers or mathematicians have bothered to refute or support its conclusions with more than a passing snark. A quick google search on the term "Repugnant Conclusion" leads to a wikipedia page that is far more informative than anything you have written here.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-07-28T07:32:11.325Z · LW(p) · GW(p)

Generally it's a good idea to think twice and reread before assuming that a published and frequently cited paper is saying something so obviously stupid.

It doesn't seem any less obviously stupid to me than the more moderate conclusion you claim that Parfit has drawn. If you really believe that creating new lives barely worth living (or "lives someone would barely choose to live," in your words) is better than increasing the utility of existing lives then the next logical step is to confiscate all the resources people are using to live standards of life higher than "a life someone would barely choose to live" and use them to make more people instead. That would result in a society identical to the previous one except that it has a lower quality of life and a higher population.

Perhaps it would have sounded a little better if I had said "It is the view that if the only ways Z and A differ is that Z has a higher population, and lower quality of life, then Z is preferable to A, providing that Z's larger population is large enough that it has higher total utility than A." I disagree with this of course, it seems to me that total and average utility are both valuable, and one shouldn't dominate the other.

Also, I'm sorry to have retracted the comment you commented on, I did that before I noticed you had commented on it. I decided that I could explain my ideas more briefly and clearly in a new comment and posted that one in its place.

comment by Ghatanathoah · 2012-07-28T06:04:04.886Z · LW(p) · GW(p)

Okay, I think I finally see where our inferential differences are and why we seem to be talking past each other. I'm retracting my previous comment in favor of this one, which I think explains my view much more clearly.

I interpreted the Repugnant Conclusion to mean that a world with a large population with lives barely worth living is the optimal world, given the various constraints placed on it. In other words, given a world with a set amount of resources, the optimal way to convert those resources to value is to create a huge population with lives barely worth living. I totally disagree with this.

You interpreted the Repugnant Conclusion to mean that a world with a huge population of lives barely worth living may be a better world, but not necessarily the optimal world. I may agree with this.

To use a metaphor imagine a 25 horsepower engine that works at 100% efficiency, generating 25 horsepower. Then imagine a 100 horsepower engine that works at 50% efficiency, generating 50 horsepower. The second engine is better at generating horsepower than the first one, but it is less optimal at generating horsepower, it does not generate it the best it possibly could.

So when you say:

All things equal a world with 50 utils burning 2 million utils is better than one with 10 utils burning 10.

We can say (if you accept my pluralist theory) that the first world is better, but the second one is more optimal. The first world has generated more value, but the second has done a more efficient job of it.

So, if you accept my pluralist theory, we might also say that a population Z, consisting of a galaxy full of 3 quadrillion people that uses the resources of the galaxy to give them lives barely worth living, would be better than A, a society consisting of a planet full of ten billion people that uses the planet's resources to give its inhabitants very excellent lives. However, Z would be less morally optimal than A because A uses all the resources of the planet to give people excellent lives, while Z squanders its resources creating more people. We could then say that Y, a galaxy full of 1 quadrillion people with very excellent lives, is both better than Z and more optimal than Z. We could also say that Y is better than A, and as optimal as A. However, Y might be worse (but more optimal) than a galaxy with a septillion people living lives barely worth living. Similarly, we might say that A is both more optimal than, and better than, B, a planet of 15 billion people living lives barely worth living.

The arguments I have made in the OP have been directed at the idea that a population full of lives barely worth living is the optimal population, the population that converts the resources it has into value most efficiently (assuming you accept my pluralist moral theory's definition of efficiency). You have been arguing that even if that population is the most efficient at generating value, there might be another population so much huger that it could generate more value, even if it is much less efficient at doing so. I do not see anything contradictory about those two statements. I think that I mistakenly thought you were arguing that such a society would also be more optimal.

And if that is all the Repugnant Conclusion is I fail to see what all the fuss is about. The reason it seemed so repugnant to me was that I thought it argued that a world full of people with lives barely worth living was the very best sort of world, and we should do everything we can to bring such a world about. However, you seem to imply that that isn't what it means at all. If the Mere Addition Paradox and the Repugnant Conclusion do not imply that we have a moral imperative to bring a vastly populated world about then all it is is a weird thought experiment with no bearing on how people should behave. A curiosity, nothing more.

Even if your argument is a more accurate interpretation of Parfit, I think that idea that a world full of people barely worth living is the optimal one is still a common enough idea that it merits a counterargument. And I think the reason the OP is so heavily upvoted is that many people held the same impression of Parfit that I did.

comment by Kaj_Sotala · 2012-07-26T12:49:51.773Z · LW(p) · GW(p)

I liked this, but found that the dialog format made the argument you're making excessively drawn out and hard to follow. I would have preferred there to be a five-paragraph (say) recap of your criticism after the dialogue.

Replies from: Ghatanathoah, FiftyTwo, magfrump
comment by Ghatanathoah · 2012-07-26T23:18:11.569Z · LW(p) · GW(p)

I have followed your advice and added a recap at the end.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-07-30T10:45:03.689Z · LW(p) · GW(p)

Awesome, thanks. :-)

comment by FiftyTwo · 2012-07-26T21:36:23.905Z · LW(p) · GW(p)

That's interesting; after reading this I thought it was one of the best examples I'd seen of the dialogue format being used to explain an argument and its objections.

If our difference of opinion isn't just due to some arbitrary factors about our aesthetic preferences, it might be that you are more familiar with the original argument, so didn't benefit as much from having it explained in detail. Do you think that's accurate?

comment by magfrump · 2012-07-29T13:49:52.425Z · LW(p) · GW(p)

I found that the dialog format made it easy for me to follow, but still overly drawn out.

comment by John_Maxwell (John_Maxwell_IV) · 2012-07-26T22:59:47.366Z · LW(p) · GW(p)

I think people find the repugnant conclusion repugnant because they are using two different definitions for a "life barely worth living".

Society has very strong social norms against telling people to commit suicide. When someone's life is really miserable, almost no one tells them that killing themselves is the best thing they can do. Even euthanasia for people who are permanently and unavoidably suffering is controversial. So from a utilitarian perspective, you could say that people tend to have a strong "pro-life" bias, even when more life means more suffering.

But let's consider which lives actually have marginal benefit. Is it really actually a morally positive thing to bring into the world someone whose life is going to be miserable?

Consider someone who is going to have an unpleasant childhood, an unpleasant adulthood, work too many hours at a job they don't enjoy as an adult, have their spouse die early, and finally die a lonely and isolated death themselves. Would you really bring someone like that into the world given the choice? (Assuming no positive externalities from them working at their job.)

But let's say you meet this person in college and you can tell how the rest of their life is going to go. Would you encourage them to commit suicide, thereby reaping a moral surplus? Probably not, because now you're operating under the original definition of "life worth living", since the question is whether to kill instead of whether to bring into existence.

Where does the "utilitarian zero point" actually lie? In my view, the zero point represents someone whose life has ups and downs, and all the ups exactly cancel out all the downs. So now say we had a world where everyone's life has ups and downs, and for every person, there are ever-so-slightly more ups than downs--and there are tons and tons of people. That sounds pretty desirable, doesn't it?

Replies from: torekp, aelephant
comment by torekp · 2012-07-26T23:57:56.256Z · LW(p) · GW(p)

Another dimension to the ups and downs, as I mentioned elsewhere:

I think the "Repugnancy" comes from picturing a very low but positive quality of life as some kind of dull gray monotone, instead of the usual ups and downs, and then feeling enormous boredom, and then projecting that boredom onto the scenario.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-07-27T08:39:28.241Z · LW(p) · GW(p)

I think the "Repugnancy" comes from picturing a very low but positive quality of life as some kind of dull gray monotone, instead of the usual ups and downs, and then feeling enormous boredom, and then projecting that boredom onto the scenario.

I think a world where I felt a huge number of awful downs over the course of my life is also pretty darn repugnant. Yes, I'd feel some ups as well, but it seems like a world where a smaller population feels almost no downs is probably better than one where a larger population feels lots of downs.

comment by aelephant · 2012-07-28T04:35:10.493Z · LW(p) · GW(p)

But let's say you meet this person in college and you can tell how the rest of their life is going to go. Would you encourage them to commit suicide, thereby reaping a moral surplus?

One problem I see is that there is no way to tell. You may have an idea, but there is no way to know with 100% certainty that they won't turn things around and lead a net happy life down the line.

Replies from: aelephant
comment by aelephant · 2012-07-29T00:49:42.884Z · LW(p) · GW(p)

I'm not sure why I am getting voted down for the above comment. Is it because I am being perceived as "attacking the hypothetical"? In this case, maybe I just interpreted John_Maxwell_IV's comment differently. By "you can tell", does that mean that we have perfect knowledge of the entirety of the future of that person's life? Even if this were true, we are also agents who can influence that future. I would prefer to act to alter the future (i.e. make that person happier) than to act to motivate them to commit suicide. Maybe I'm just weird in that I'd rather make people happy than make them dead.

Replies from: magfrump
comment by magfrump · 2012-07-29T13:55:40.797Z · LW(p) · GW(p)

I didn't downvote, but to me it feels like attacking the hypothetical; that would be my guess.

Obviously in real life most people (certainly, I think, most LWers) are VERY VERY HIGH above the "zero line" or whatever, so these sorts of questions feel pretty abstract to me.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-08-18T07:02:16.672Z · LW(p) · GW(p)

Obviously in real life most people (certainly, I think, most LWers) are VERY VERY HIGH above the "zero line" or whatever, so these sorts of questions feel pretty abstract to me.

I don't think that's obvious. Something like 1/5 of LW is depressed.

I suspect commuting is below the zero line, i.e. people would fast-forward through their commutes even if it meant never getting those hours back.

comment by Kaj_Sotala · 2012-07-26T12:48:01.579Z · LW(p) · GW(p)

Why do I never have discussions like this with telemarketers?

Replies from: FiftyTwo, Jayson_Virissimo
comment by FiftyTwo · 2012-07-26T21:47:45.619Z · LW(p) · GW(p)

They've discovered that long enlightening dialogues are not cost-effective in time spent per sale made. Therefore they've established a blacklist of rationalists whom they avoid calling.

comment by Jayson_Virissimo · 2012-07-27T10:17:33.652Z · LW(p) · GW(p)

Why do I never have discussions like this with telemarketers?

Have you ever tried? As it turns out, I haven't. On the other hand, when I was a kid, I remember my dad once giving an entire sales lecture to a telemarketer (he was a sales manager at the time) who was demonstrating poor marketing skills.

comment by buybuydandavis · 2012-07-27T01:55:51.778Z · LW(p) · GW(p)

Is this anything more than pointing out that Parfit's argument, as generally discussed, doesn't model resource constraints?

Are we assuming that he has never responded to this before? That this materially changes his conclusions?

At least in the Wikipedia article, the lack of a resource constraint was immediately obvious to me, but I don't think it materially changes the conclusions.

Solve for dU+/dN < 0, where:

U+ = total utilons summed over persons with net positive lives (are some people only counting utilons above T+?)
N = number of persons
R = resources
T+ = utilon threshold for a positive life
U = total utilons
u = personal utilons

We could come up with a first order analysis of this.

When doing the analysis, let's note that people don't just consume resources, but produce them as well.

Yes, adding in a resource constraint probably makes the average level higher, but I don't think anywhere near my current modest lifestyle. With any personal utilon function of resource use that has humanly accurate decreasing marginal utility, much of the resources I consume would have greater marginal utility for someone with u ~ T+.

So in the end, I think a "repugnant enough" conclusion stands. We're all a little above subsistence.
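
For concreteness, here is a minimal sketch of that first-order analysis, assuming (hypothetically) a fixed resource pool and a logarithmic personal utilon function with subsistence at c = 1, so u(1) = 0 marks T+:

```python
import numpy as np

# Split a fixed resource pool R among N people, each consuming c = R/N.
# Hypothetical utilon function: u(c) = ln(c), so u = 0 at subsistence (c = 1).
R = 1000.0

def total_utilons(N):
    return N * np.log(R / N)  # U = N * u(R/N)

Ns = np.arange(1, int(R))
best = Ns[np.argmax(total_utilons(Ns))]
print(best, R / best)  # optimum at R/N = e, i.e. everyone at ~2.7x subsistence
```

With any such decreasing-marginal-utility function, maximizing total utilons pushes per-capita consumption down to a small multiple of the subsistence threshold.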

comment by David_Gerard · 2012-07-26T14:38:51.360Z · LW(p) · GW(p)

Front page worthy.

Replies from: Michael_Sullivan, Cyan, jsalvatier
comment by Michael_Sullivan · 2012-07-28T05:28:19.514Z · LW(p) · GW(p)

Not even close. The primary content of the OP is based on a straw man due to a massive misunderstanding of the mathematical arguments about the Repugnant Conclusion.

The conclusion of what Parfit actually demonstrated goes something more like this:

For any coherent mathematical definition of utility such that there is some additive function which allows you to sum the utility of many people to determine U(population), the following paradox exists:

Given any world with positive utility A, there exists at least one other world B with more people, and less average utility per person, which your utility system will judge to be better, i.e.: U(B) > U(A).

Parfit does not conclude that you necessarily reach world B by maximizing reproduction from world A nor that every world with more people and less average utility is better. Only worlds with a higher total utility are considered "better". This of course implies either more resources, or more utility-efficient use of resources, in the "better" world.

The cable channel analogy would be to say "As long as every extra cable channel I add provides at least some constant positive utility epsilon>0, even if it is vanishingly small, there is some number of cable channels I can put into your feed that will make it worth $100 to you." Is this really so hard to accept? It seems obviously true even if irrelevant to real life where most of us would have diminishing marginal utility of cable channels.
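
To put hypothetical numbers on the analogy (say, epsilon = 0.001 utilons per channel against 150 utilons of alternative spending; both figures are made up for illustration):

```python
import math

epsilon = 0.001  # hypothetical constant utilons per extra channel
target = 150.0   # hypothetical utilons from spending the $100 on other things

n_channels = math.ceil(target / epsilon)
print(n_channels)  # 150000 channels match the alternative; any more and the package wins
```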

Parfit's point is that it is hard for the human brain to accept the possibility that some world with uncounted numbers of people with lives just barely worth living could possibly be better than any world with a bunch of very happy high utility people (he can't accept it himself), even though any algebraically coherent system of utility will lead to that very conclusion.

John Maxwell's comment gets to the heart of the issue, the term "just barely worth living". Philosophy always struggles where math meets natural language, and this is a classic example.

The phrase "just barely worth living" conjures up an image of a life that is barely better than the kind of neverending torture/loneliness scenario where we might consider encouraging suicide.

But the taboos against suicide are strong. Even putting aside taboos, there is a large amount of collateral damage from suicides. The most obvious is that anyone who has emotional or family connections to a suicide will suffer. Even people who are very isolated will have some connection, and suicide could trigger grief or depression in any people who encounter them or their story. There are also some very scary studies about suicide and accident rates going up in the aftermath of publicized suicides or accidents, due to lemming-like social programming in humans.

So it is quite rational for most people not to consider suicide until their personal utility is highly negative, if they care at all about the people or world around them. For most of us, a life just above the suicide threshold would be a negative-utility life, and a fairly large negative utility at that.

A life with utility positive epsilon is not a life of sadness or pain, but a life that we would just barely choose to live, as a disembodied soul given a choice of life X or non-existence. Such a life, IMO, will be comfortably clear of the suicide threshold, and would, in my opinion, represent an improvement in the world. Why wouldn't it? It is, by definition, a life that someone would choose to have rather than not have! How could that not improve the world?

Given this interpretation of "just barely worth living", I accept the so-called Repugnant conclusion, and go happily on my way calculating utility functions.

RC is just the mirror image of the tortured person versus 3^^^^3 persons with dust specks in their eyes debate.

Tabooing "life just barely worth living", and then shutting up and multiplying led me to realize that the so-called Repugnant conclusion wasn't repugnant after all.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-07-28T07:16:36.042Z · LW(p) · GW(p)

The primary content of the OP is based on a straw man due to a massive misunderstanding of the mathematical arguments about the Repugnant Conclusion.

Even if that is the case, I think that strawman is commonly accepted enough that it needs to be taken down.

Given any world with positive utility A, there exists at least one other world B with more people, and less average utility per person, which your utility system will judge to be better, i.e.: U(B) > U(A).

I believe that creating a life worth living and enhancing the lives of existing people are both contributory values that form Overall Value. Furthermore, these values have diminishing returns relative to each other, so in a world with a low population creating new people is more valuable, but in a world with a high population improving the lives of existing people is of more value.

Then I shut up and multiply and get the conclusion that the optimal society is one that has a moderately sized population and a high average quality of life. For every world with a large population leading lives barely worth living there exists another, better world with a lower population and higher quality of life.

Now, there may be some "barely worth living" societies so huge that their contribution to overall value is larger than that of a much smaller society with a higher standard of living, even considering diminishing returns. However, that "barely worth living" society would in turn be much worse than a society with a somewhat smaller population and a higher standard of living. For instance, a planet full of lives barely worth living might be better than an island full of very high-quality lives. However, it would be much worse than a planet with a somewhat smaller population, but a higher quality of life.

Parfit does not conclude that you necessarily reach world B by maximizing reproduction from world A nor that every world with more people and less average utility is better. Only worlds with a higher total utility are considered "better".

I'm not interested in maximizing total utility. I'm interested in maximizing overall value, of which total utility is only one part.

A life with utility positive epsilon is not a life of sadness or pain, but a life that we would just barely choose to live, as a disembodied soul given a choice of life X or non-existence. Such a life, IMO, will be comfortably clear of the suicide threshold, and would, in my opinion, represent an improvement in the world.

To me it would, in many cases, be morally better to use the resources that would be used to create a "life that someone would choose to have" to instead improve the lives of existing people so that they are above that threshold. That would contribute more to overall value, and therefore make an even bigger improvement in the world.

Why wouldn't it? It is, by definition, a life that someone would choose to have rather than not have! How could that not improve the world?

It's not that it wouldn't improve the world. It's that it would improve the world less than enhancing the utility of the people who already exist instead. You can criticize someone who is doing good if they are passing up opportunities to do even more good.

RC is just the mirror image of the tortured person versus 3^^^^3 persons with dust specks in their eyes debate.

Not really. In "torture vs specks" your choice will have the same effect on total and average utility (they either both go down a little or both go down a lot). In the RC your choice will affect them differently (one goes up and the other goes down). Since total and average utility (or more precisely, creating new lives worth living and enhancing existing lives) both contribute to overall value, if you shut up and multiply you'll conclude that the best way to maximize overall value is to increase both of them, not to maximize one at the expense of the other.

Replies from: koning_robot
comment by koning_robot · 2012-09-02T12:12:03.569Z · LW(p) · GW(p)

What is this Overall Value that you speak of, and why do the parts that you add matter? It seems to me that you're just making something up to rationalize your preconceptions.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-09-13T02:18:11.468Z · LW(p) · GW(p)

Overall Value is what one gets when one adds up various values, like average utility, number of worthwhile lives, equality, etc. These values are not always 100% compatible with each other; often a compromise needs to be found between them. They also probably have diminishing returns relative to each other.

When people try to develop moral theories they often reach insane-seeming normative conclusions. One possible reason for this is that they have made genuine moral progress which only seems insane because we are unused to it. But another possible (and probably more frequent) reason is that they have an incomplete theory that fails to take something of value into account.

The classic example of this is the early development of utilitarianism. Early utilitarian theories that maximized pleasure sort of suggested the insane conclusion that the ideal society would be one full of people who are tended by robots while blissed out on heroin. It turned out the reason it drew this insane conclusion was that it didn't distinguish between types of pleasure, or consider that there were other values than pleasure. Eventually preference utilitarianism came along and proved far superior because it could take more values into account. I don't think it's perfected yet, but it's a step in the right direction.

I think that there are likely multiple values in aggregating utility, and that the reason the Repugnant Conclusion is repugnant is that it fails to take some of these values into account. For instance, total number of worthwhile lives, and high average utility are likely both of value. A world with higher average utility may be morally better than one with lower average utility and a larger population, even if it has lower total aggregate utility.

Related to this, I also suspect that the reason that it seems wrong to sacrifice people to a utility monster, even though that would increase total aggregate utility, is that equality is a terminal value, not a byproduct of diminishing marginal returns in utility. A world where a utility monster shares with people may be a morally better world, even if it has lower total aggregate utility.

I think that moral theories that just try to maximize total aggregate utility are actually oversimplifications of much more complex values. Accepting these theories, instead of trying to find what they missed, is Hollywood Rationality. For every moral advancement there are a thousand errors. The major challenge of ethics is determining when a new moral conclusion is genuine moral progress and when it is a mistake.

comment by Cyan · 2012-07-26T15:43:57.601Z · LW(p) · GW(p)

Ditto.

Replies from: Cyan
comment by Cyan · 2012-07-26T20:32:05.159Z · LW(p) · GW(p)

Yikes. I'm reverting to neutrality on this post until I assess it more carefully (if I ever get around to it).

comment by jsalvatier · 2012-07-26T19:40:49.579Z · LW(p) · GW(p)

Agreed. This is an argument I haven't heard before.

comment by shokwave · 2012-07-26T19:07:38.765Z · LW(p) · GW(p)

So to summarise, we have A, where there's a large affluent population.
Then we move to A+, where there's a large affluent population and, separately, a small poor population (whose lives are still just barely worth living). We intuit that A+ is better than A.
Then we move to B, where we combine A+'s large affluent population with the small poor population and get a very large, middle-upper-class population. We intuit that B is better than A+, and transitively, better than A.
This reasoning suggests that hypothetical C would be better than B or A, and so on until Z, which is our universe tiled with just-barely-worth-living people, and that seems repugnant.

Ghatanathoah's claim is that moving from A to A+ adds some number of people, and some amount of resources that gives them worthwhile lives. Moving from A+ to B then fairly redistributes resources. This is therefore shown to be a good act. But moving from B directly to C adds just people, and fairly redistributes resources. This has not been shown to be a good act! So the repugnant conclusion fails because it gets you to agree to a particular act, and then uses sleight of hand to swap that particular act out for a different, worse act while still holding you to your original agreement!

Replies from: Thrasymachus
comment by Thrasymachus · 2012-07-27T01:58:39.620Z · LW(p) · GW(p)

We can make the same dance of moves from B to B+ (more people, worthwhile lives) and then B+ to C (redistribution and aggregate value increase). So, unless you are willing to deny transitivity, moving from B to C is what we should do. Rinse and repeat until Z.

(This is assuming you mean resources as well-being. However, the OP's resources point isn't responsive to Parfit's argument.)

Replies from: shokwave
comment by shokwave · 2012-07-27T04:44:09.657Z · LW(p) · GW(p)

The thing is, you never actually get to Z. If you do add people and enough resources for their bare minimum, you approach Z from above but never actually reach it - the standard of living never drops below the bare minimum.

It is perhaps cheating to say that Z is when average utility drops below the bare minimum. If the Repugnant Conclusion is that we prefer A to Z, even though all the lives in both are worth living, then that is another matter.

Replies from: Thrasymachus
comment by Thrasymachus · 2012-07-27T10:05:04.169Z · LW(p) · GW(p)

Lives in Z are stipulated to be above the neutral level, so they are better lived than not. The repugnancy is that they are barely worth living, so just above this level, and most people find that a very large population of lives barely worth living is not preferable to a smaller one with very good lives.

Replies from: shokwave
comment by shokwave · 2012-07-27T12:46:28.740Z · LW(p) · GW(p)

most people find that a very large population of lives barely worth living is not preferable to a smaller one with very good lives.

Sure, so adding poor people to a rich world and averaging out the resources is bad, not good, and we shouldn't do it. It seems to me that the argument for adding people doesn't take into account this preference for a few rich over many poor.

Also, there may be anthropic reasons for that preference: would you rather be born as one of 500 rich, or one of 10,000 poor? Now, would you rather have a 5% chance of existing as a rich person (95% chance of not existing) or a 100% chance of existing as a poor person?

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2012-07-28T20:12:02.687Z · LW(p) · GW(p)

Sure, so adding poor people to a rich world and averaging out the resources is bad, not good, and we shouldn't do it.

Which step(s) do you disagree with? Adding poor people or averaging the utility?

Parfit defends the first step by saying that it's a "mere addition". Poor people on their own are (somewhat) good. Rich people on their own are good. Therefore the combination of the two is better than either.

The second step (averaging the resources) is supposed to be intuitively obvious. We can tweak the mathematics so that the quality of life of the rich only goes down a tiny amount to bring the poor up to their level. If the rich could end all poverty by giving a very small amount, wouldn't that be the right thing to do?

comment by asparisi · 2012-07-26T13:18:05.877Z · LW(p) · GW(p)

I like the idea here, but it seems like the paradox survives.

Say that A consists of 2 populations and 2 sets of resources: (Px, Py) and (Rx, Ry). Rx is enough to grant some high number of utilons per person in Px, say 100, while Ry is only enough to grant some very low number of utilons per person in Py, say 0.0001.

In A+ we combine them. Assuming for the moment that Px and Py are the same size, the average utility lowers to 50.00005 per person. And with each further group and its resources added, you get closer to 0.0001.

And it doesn't have to stop there, if we imagine some future population Sx who have resources Sy that allow them to get 10^-10 utilons per person, for example.

Which means that as long as doing so adds some arbitrarily small amount of resources, adding people is a net benefit. That might produce a curve where, at some point, adding people doesn't help to add resources. But if it doesn't, then the problem still stands.
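
A quick sketch of that arithmetic, with made-up population sizes:

```python
# (population size, utilons per person); the 50.00005 figure assumes equal sizes
groups = [(1000, 100.0), (1000, 0.0001)]

def average(groups):
    total_pop = sum(n for n, _ in groups)
    total_utilons = sum(n * u for n, u in groups)
    return total_utilons / total_pop

print(average(groups))            # 50.00005
groups.append((100_000, 0.0001))  # keep adding barely-positive groups...
print(average(groups))            # ...and the average slides toward 0.0001
```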

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-07-26T21:30:41.300Z · LW(p) · GW(p)

while Ry is only enough to grant some very low number of utilons per person in Py, say 0.0001

And it doesn't have to stop there, if we imagine some future population Sx who have resources Sy that allow them to get 10^-10 utilons per person, for example.

Part of the original premise of the paradox is that even the people of population Z, the vast population the paradox leads to, still have lives that are "barely worth living." So you can't go that far; the population has to have enough utilons per person that their lives are barely worth living.

Which means that as long as doing so adds some arbitrarily small amount of resources, adding people is a net benefit.

Yes, one weakness in this argument is that it still allows the Mere Addition Paradox to happen if the following criteria are met:

  1. The addition of a new person also adds more resources.

  2. The amount of new resources added is enough to give the new person a life "barely worth living."

  3. The only way to obtain those new resources is to create that new person. The people currently existing would be unable to obtain the resources without creating that person.

I think that my argument is still useful because of the low odds of encountering a situation that fulfills all those criteria, especially 3, in real life. It means that people no longer need to worry that they are committing some horrible moral wrong by not having as many children as possible.

Replies from: asparisi
comment by asparisi · 2012-07-27T00:29:49.082Z · LW(p) · GW(p)

I think I mostly agree with you, although we'd have to define how many utilons per person count as "worth living" for your criticism of example Sx to work. And actually, for most of human history, I think that adding a new person was, on the whole, more likely to add resources, particularly in agricultural communities and in times of war. (Which is why we've only seen the reversal of this trend relatively recently.)

I am not sure that your 3rd criterion is required: it would seem that as long as adding a new person added more utilons than not, adding a new person would be preferable. But in those cases, it might form a curve rather than a line, where you get diminishing returns after the population reaches a certain size, eliminating (at least) the paradoxical element.

I do think that the insight of talking about converting resources to utility is a good one here, but it's good to know where it is weak.

Replies from: gwern, Ghatanathoah
comment by gwern · 2012-07-27T00:48:08.639Z · LW(p) · GW(p)

And actually, for most of human history, I think that adding a new person was, on the whole, more likely to add resources, particularly in agricultural communities and in times of war. (Which is why we've only seen the reversal of this trend relatively recently.)

Well, yes; Malthusian models would even predict this, since if another person didn't add resources, that reduces resources per capita (the denominator increased, the numerator didn't), and this could continue until resources per capita fall below subsistence, at which point every additional person must cause an additional death/failure-to-reproduce/etc. and the population has reached a steady state.

So every new additional person does allow new resources to be opened up or exploited - more marginal farmland farmed - but every new resource is (diminishing marginal returns, the best stuff is always used first) worse than the previous new resource...
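
A minimal sketch of that Malthusian logic, with a made-up yield curve:

```python
SUBSISTENCE = 1.0

def marginal_yield(n):
    # Hypothetical diminishing returns: the n-th best plot yields 10/n units.
    return 10.0 / n

# Population grows while the next person's plot still covers their subsistence.
n = 1
while marginal_yield(n) >= SUBSISTENCE:
    n += 1
print(n - 1)  # steady state: 10 people, the last one exactly at subsistence
```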

comment by Ghatanathoah · 2012-07-27T06:20:02.467Z · LW(p) · GW(p)

And actually, for most of human history, I think that adding a new person was, on the whole, more likely to add resources, particularly in agricultural communities and in times of war.

That might be correct. However, my argument also deals with the most efficient way to create people who add resources (when I argued A++ was better than A+).

For instance, suppose that enough resources to sustain 100 people at a life barely worth living can be extracted from a mine, and you need to create some people to do it. A person working by hand can extract 1 person's worth of resources, enough for their own subsistence. A person with mining equipment can extract 10 people's worth of resources. You can either create 100 people who do it by hand, or you can create 10 people and make them mining equipment (assume that creating and maintaining the mining equipment is as expensive as creating 15 people with lives barely worth living). Which should you do?

I would argue that, unless the human population is near extinction levels, you should create the 10 people with the mining equipment. This is because it will create a large surplus of 75 people's worth of resources with which to enhance the lives of the 10 people, and of other people who already exist.
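
The arithmetic of that example, measured in "one person's worth" of resources (a sketch of the stated assumptions only):

```python
SUBSISTENCE = 1  # resources consumed by one life barely worth living

# Option 1: 100 people mining by hand, 1 unit extracted each.
hand_surplus = 100 * 1 - 100 * SUBSISTENCE  # 0: nothing left over

# Option 2: 10 people with equipment, 10 units extracted each;
# the equipment costs as much as creating 15 subsistence lives.
equipment_cost = 15
equipped_surplus = 10 * 10 - 10 * SUBSISTENCE - equipment_cost  # 75

print(hand_surplus, equipped_surplus)  # 0 vs 75 to spend on raising utility
```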

Replies from: asparisi
comment by asparisi · 2012-07-27T06:45:08.887Z · LW(p) · GW(p)

Technology can alter these economies, and I am certainly not saying we should all go to subsistence farming to avoid the paradox. I think making the calculation "equipment = making X lives" is a little off the mark: typically, you'd subtract utilons (if you are trading for the mining equipment) and add workers (for repair/maintenance), so you might end up with, say, 12 people, 10 who mine and 2 who repair, and 85 utilons rather than 100. But the end math of who gets how much ends up about the same as in your hypothetical.

comment by Nisan · 2012-07-27T22:23:03.892Z · LW(p) · GW(p)

I further argue for a theory of population ethics that values both using resources to create lives worth living, and using resources to enhance the utility of already existing people, and considers the best sort of world to be one where neither of these two values totally dominate the other.

If you try to formalize this theory of population ethics, I believe you will find that it's susceptible to Mere Addition type paradoxes. See, for example, articles about "population ethics" and "axiological impossibility theorems" on Gustav Arrhenius' website.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-07-28T06:45:41.582Z · LW(p) · GW(p)

If you try to formalize this theory of population ethics, I believe you will find that it's susceptible to Mere Addition type paradoxes.

It is, but the important point is that the paradoxes that it is vulnerable to do not seem to mandate reproduction, as a more traditional theory would.

For instance, my theory would say that Planet A with a moderate population of people living excellent lives is better than Planet B with a larger population whose lives are barely worth living. However, it might also say that Galaxy B, full of people with lives barely worth living, is better than Planet A, because it has so many people that it produces enough value to swamp Planet A, even if it does so less efficiently. However, my theory would also say that Galaxy A, which has a somewhat smaller population and a higher quality of life than Galaxy B, is better than Galaxy B.

My theory is not about finding the best population; it is about finding the optimal population: the best possible population given whatever resource constraints a society has. It does not bother me that you are able to dream up a better society if you remove those resource constraints; such a society might be better, but it would also be using resources less optimally. The best sort of society is one that uses the resources it has available both to create lives worth living and to enhance the utility of already existing people.

Replies from: Nisan
comment by Nisan · 2012-07-28T18:38:45.408Z · LW(p) · GW(p)

I've come up with a formalization of everything you've said here that I think is a steel man for your position. Somewhat vaguely: A population with homogeneous utility defines a point in a two-dimensional space — one dimension for population size, one for individual utility. Our preferences are represented by a total, transitive, binary relation on that space. The point of the Mere Addition Paradox is that a set of reasonable axioms rules out all preferences. The point you're making is that if we're restricted by resources to some region of that space, then we only need to think about our preferences on a one-dimensional Pareto frontier. And one can easily come up with preferences on that frontier that satisfy all the nice axioms.

Very well. Just so long as the Pareto frontier doesn't change, there is no paradox.
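
A toy instance of that steel man, with made-up functional forms: a budget R fixes the frontier u = R/N, and Overall Value has diminishing returns in both population size and individual utility:

```python
import numpy as np

R = 1000.0                       # resource budget fixing the Pareto frontier
N = np.arange(1, 1000)           # candidate population sizes
u = R / N                        # frontier: max individual utility at each N

value = np.sqrt(N) + np.sqrt(u)  # two contributory values, diminishing returns

best = N[np.argmax(value)]
print(best, R / best)  # interior optimum near N = sqrt(R): a moderate population
                       # with high individual utility; no paradox on this frontier
```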

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-07-29T05:27:40.881Z · LW(p) · GW(p)

I think you've got it. Thanks for formalizing that for me; I think it will help me a lot in the future!

If you're interested in where I got some of these ideas from, by the way, I derived most of my non-Less Wrong inspiration from the work of philosopher Alan Carter.

comment by Oscar_Cunningham · 2012-07-28T21:10:38.783Z · LW(p) · GW(p)

Upvoted for being the kind of post I want on LessWrong, but I agree with the posters above who say that you misunderstand the point of the paradox. Thrasymachus articulates why most clearly. You do however make a compelling argument that even if we accept that A<Z we should still spend some resources on increasing happiness. The hypothetical Z presumes more resources than we have. Given that we can't reach Z even by using all our resources, knowing A<Z doesn't tell us anything, because Z isn't one of our options. If we spent all our resources on population growth we'd only achieve Z-, a smaller population than Z with the same happiness, and this might well be worse than A.

EDIT: Not that I accept A<Z. I resolve the non-transitivity by taking A+<A.

Replies from: Thrasymachus
comment by Thrasymachus · 2012-07-29T14:52:12.466Z · LW(p) · GW(p)

Not that I accept A<Z. I resolve the non-transitivity by taking A+<A.

That's really interesting. Why?

And would you also take A+ < A if we fiddled the numbers to get:

A: P1 at 10
A+: P1 at 20, P2-20 at 8
B: P1-20 at 9

So we can still get to the RP, yet A+ seems a really good deal versus A.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2012-07-29T15:44:17.088Z · LW(p) · GW(p)

What I actually value is average happiness. All else being equal, I don't think adding people whose lives are just worth living is a good thing. (Often all else is not equal; I do support adding more people if it will create more interesting diversity, for example.)

I don't quite understand your example; what does "P2-20" mean? I'd also need to know the population sizes. Anyway, I think your point is that we can increase the happiness of P1 as we go from A to A+. In that case we might well have A<A+, but then we would have B<A+ also.

Replies from: Thrasymachus, Thrasymachus, wedrifid
comment by Thrasymachus · 2012-08-01T00:13:50.261Z · LW(p) · GW(p)

[Second anti-average util example]:

It also means that if the average value of a population is below zero, adding more lives that are below zero (but not as far below zero as the average of the population) is a good thing to do.

comment by Thrasymachus · 2012-07-30T17:10:01.186Z · LW(p) · GW(p)

Sorry, P2-20 means 19 persons, all at 8 units of welfare. The idea was to intuition-pump the person-affecting restriction: A+ is now strictly better for everyone, including the person who was in A, and so it might be more intuitively costly to say that it is in fact A>A+.

You may well have thought about all the 'standard' objections to average util in population ethics cases, but just in case not:

Average util seems implausible to me, particularly in different-number cases: for example, why would bringing into existence lives which are positive (even really positive) be wrong just because they would be below the average of the lives that already exist?

Related to averaging is dealing with separability: if we're just averaging all-person happiness, then whether it is a good thing to bring a person into existence on Earth will depend on the wellbeing of aliens in the next supercluster (if they're happier than us, then anti-natalism seems to follow). Biting the bullet here seems really costly, and I'm not sure what other answers one could give. If you have some in mind, please let me know!

comment by wedrifid · 2012-07-29T16:19:56.058Z · LW(p) · GW(p)

What I actually value is average happiness.

I.e. Death To All The Whiners! Be happy or die!

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2012-07-29T17:14:27.532Z · LW(p) · GW(p)

Each death adds its own negative utility. Death is worse than the difference in utilities between the situations before and after the death.

Replies from: wedrifid
comment by wedrifid · 2012-07-29T17:23:36.554Z · LW(p) · GW(p)

It sounds like you may have similar actual preferences to mine. (I just wouldn't dream of calling it "average happiness".)

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2012-07-29T17:29:46.188Z · LW(p) · GW(p)

Cool. I don't really believe in average happiness either (but I'm a lot closer to it than to valuing total happiness). I wouldn't steal from the poor to give to the rich, even if the rich are more effective at using resources.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-07-30T07:05:49.944Z · LW(p) · GW(p)

Cool. I don't really believe in average happiness either (but I'm a lot closer to it than to valuing total happiness).

I think that saying "I value improving the lives of those who already exist" is a good way to articulate your desire to increase average utility while also spelling out the fact that you find it bad to increase it by other means, like killing unhappy people.

It also articulates the fact that you would (I assume) be opposed to creating a person who is tortured 23 hours a day in a world filled completely with people being tortured 24 hours a day, even though that would increase average utility.

I also assume that while you believe in something like average utility, you don't think that a universe with only one person with a utility of 100 is just as morally good as a universe with a trillion people who each have a utility of 100. So you probably also value having more people to some extent, even if you value it incrementally much less than average utility (I refer to this value as "number of worthwhile lives").

I wouldn't steal from the poor to give to the rich, even if the rich are more effective at using resources.

It sounds like you must also value equality for its own sake, rather than as a side effect of diminishing marginal utility. I think I am also coming around to this way of thinking. I don't think equality is infinitely valuable, of course; it needs to be traded off against other values. But I do think that, for example, a world where people are enslaved to a utility monster is probably worse than one where they are free, even if that diminishes total aggregate utility.

In fact, I'm starting to wonder if total utility is a terminal value, or if increasing it is just a side effect of wanting to simultaneously increase average utility and the number of worthwhile lives.

Replies from: Oscar_Cunningham
comment by Oscar_Cunningham · 2012-07-30T09:31:35.899Z · LW(p) · GW(p)

Agreed on all counts.

(Apart from: I wouldn't say that I was maximising others' utility. I'd say I was maximising their happiness, freedom, fulfilment, etc. A utility function is an abstract mathematical thing. We can prove that rational agents behave as if they were trying to maximise some utility function. Since I'm trying to be a rational agent, I try to make sure my ideas are consistent with a utility function, and so I sometimes talk of "my utility function".

But when I consider other people I don't value their utility functions. I just directly value their happiness, freedom, fulfilment, and so on. I don't value their utility functions because: one, they're not rational and so they don't have utility functions; two, valuing each other's utility would lead to difficult self-reference; but mostly three, on introspection I really do just value their happiness, freedom, fulfilment, etc., and not their utility.

The sense in which they do have utility is that each contributes utility to me. But then there's no such thing as "an individual's utility" because (as we've seen) the utility other people give to me is a combined function of all of their happiness, freedom, fulfilment, and so on.)

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-07-31T06:26:51.329Z · LW(p) · GW(p)

Apart from: I wouldn't say that I was maximising others' utility. I'd say I was maximising their happiness, freedom, fulfilment, etc.

I think I understand. I tend to use the word "utility" to mean something like "the sum total of everything a person values." Your use is probably more precise, and closer to the original meaning.

I also get very nervous about the idea of maximizing utility, because I believe wholeheartedly that value is complex. If we define utility too narrowly and then try to maximize it, we might lose something important. So right now I try to "increase" or "improve" utility rather than maximize it.

comment by Luke_A_Somers · 2012-07-26T13:36:17.031Z · LW(p) · GW(p)

Are you sure that sharing their resources like that would lower the average level of utility? Wouldn't there be economies of scale and such...

In the original, the A+ to B- transition RAISES the average. It's the A to A+ transition that lowers the average.

Also, I think it's rather aside from the point to begin talking about resource management.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-07-26T21:18:45.749Z · LW(p) · GW(p)

In the original, the A+ to B- transition RAISES the average. It's the A to A+ transition that lowers the average.

I edited the post to fix that. Thank you for pointing that out.

Also, I think it's rather aside from the point to begin talking about resource management.

It is; that's why Alice tells Bob to stop fighting the hypothetical and puts the conversation back on track. The reason I added that line was to illustrate a wrong way to approach the MAP.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-07-27T13:16:45.321Z · LW(p) · GW(p)

I'm talking about Alice's challenge later -

Parfit was not "merely adding" people to the population. He was also adding resources.

Your criticism of the hypothetical does not meet the standard you apply to it.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-07-29T05:47:17.285Z · LW(p) · GW(p)

After discussing this with Unnamed and Thrasymachus I think the main issue is that I was attacking the idea that the world of the Repugnant Conclusion represents the optimal society. That is, I was arguing that creating a RC-type world does not represent the most efficient way for a society to convert the resources it has into utility.

However, I think I gave the impression that I was talking about the idea that an RC-type world can never be better (have more utility period, regardless of how efficiently it was obtained). I was not disputing this. I concede that a very small society that converts all the resources it has into utility as optimally as possible may still have less utility than a society that is so huge and has so many resources that it can produce more utility by pure brute force. Keep in mind that I regard utility as being generated most effectively by having a combo of high average and total wellbeing, rather than just maximizing one or the other.

For instance, let's say a small island with a moderate-sized population who have wonderful lives converts resources into utility at 100 utilons per resource point, and has access to 10 resource points. Result: 1000 utilons.

Then let's imagine a huge continent with a huge population that has somewhat less pleasant lives and converts resources into utility at 50 utilons per resource point, and has access to 30 resource points. Result: 1500 utilons. So the continent could be regarded as better, even though it is less optimal.

I believe that talking about resource management is relevant when talking about optimality. You are right, however, that it is not very relevant when talking about betterness, since when postulating better possible societies you can postulate that they have any amount of resources you want.

comment by Nisan · 2012-07-27T22:15:49.592Z · LW(p) · GW(p)

Parfit was not "merely adding" people to the population. He was also adding resources.

Parfit could easily reply that in world A, there are unused resources beyond the reach of the population, and in world A+ these resources are used by the extra people.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-07-28T06:47:18.963Z · LW(p) · GW(p)

From the perspective of the thought experiment, adding resources that didn't exist before and making resources that are already there available are not different in any significant way.

comment by Giles · 2012-07-26T23:58:05.548Z · LW(p) · GW(p)

Can I try my own summary?

According to Parfit's premises, the bestest imaginable world is one with an enormous number of extremely happy people. This world isn't physically possible, though, due to resource constraints.

The mere addition thing shows that in general we are indifferent between small numbers of happy people and large numbers of unhappy people (actually the argument just shows "no worse than" - you'd need to run a similar argument in the other direction to show you're actually indifferent).

Now consider the (presumably finite?) space of all worlds that are possible given resource constraints. Pick the world W0 with the highest utility, U0.

Now think about the U0 indifference curve. Where does W0 lie along this curve? It is surely whichever world is cheapest - the only world along this curve that fits within our budget (actually there may be multiple points that come exactly on budget, but there won't be any that come below budget because then we'd be able to up the happiness/population a bit and achieve some greater utility U1).

If the cheapest world on our indifference curve happens to be the one with an enormous number of very unhappy people, then we'd reach a repugnant conclusion. But given our original assumptions, that's not necessarily so.

comment by CronoDAS · 2012-07-26T09:54:20.532Z · LW(p) · GW(p)

Alice: Let's take population "A+" again. Now imagine that instead of having a population of people with lives barely worth living, the second continent is inhabited by a smaller population with the same very high percentage of resources and utility per person as the population of the first continent. Call it "A++. " Would you say "A++" was better than "A+?"

Bob: Sure, definitely.

I don't find this obvious. I also don't find it obvious that A+ is better than A, or even that some people existing is better than no people existing. My ethical intuitions just don't seem to give answers for this kind of thing, even on the personal level of trying to answer the question "On a purely selfish basis, is it better for me, personally, to exist or not to exist?" My usual approach of asking myself "Do I anticipate experiencing pleasure or misery from this situation?" doesn't return an answer that makes any sense, because I can't experience either pleasure or misery if I don't exist.

Suppose I define my utility as pleasure / (misery^2). (Misery is worse than pleasure is good.) If I don't exist, misery is zero, which is wonderful. But my pleasure is also zero, which is terrible. 0/0 is undefined, so when I try to calculate the utility of not existing, all I get is an error. That's the kind of situation I feel like I'm in.
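
In code the error is literal (a sketch of the toy formula only, not of anyone's real preferences):

```python
import math

def utility(pleasure, misery):
    try:
        return pleasure / misery**2
    except ZeroDivisionError:
        return math.nan  # non-existence: 0/0 is an indeterminate form

print(utility(5.0, 2.0))        # an existing life: 1.25
print(utility(0.0, 0.0))        # nan
print(utility(0.0, 0.0) > 0.0)  # False: nan can't be ranked against real lives
```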

Replies from: army1987, Adele_L
comment by A1987dM (army1987) · 2012-07-26T16:32:39.784Z · LW(p) · GW(p)

What's the difference between "On a purely selfish basis, is it better for me, personally, to exist or not to exist?" and "Would I commit suicide, all other things being equal?"?

Replies from: CronoDAS, None, Nisan
comment by CronoDAS · 2012-07-29T00:07:56.538Z · LW(p) · GW(p)

"Would I commit suicide, all other things being equal?"?

My suicide affects other people. I have both selfish and altruistic desires; "not wanting other people to grieve for me" is a good enough reason not to kill myself.

comment by [deleted] · 2012-07-26T20:02:00.991Z · LW(p) · GW(p)

I read "not existing" as "not ever existing", so the difference is everything that happened between when you would have started existing and when you would have committed suicide.

Replies from: army1987
comment by A1987dM (army1987) · 2012-07-26T22:30:45.763Z · LW(p) · GW(p)

(English badly needs separate words for ‘physically exist at a particular time’ and ‘exist, in an abstract timeless sense’. Lots of philosophical discussion such as A-theory vs B-theory would then be shown to be meaningless: does the past exist? Taboo “exist”: the past no longer exists_1, but it exists_2 nevertheless.)

comment by Nisan · 2012-07-27T22:12:24.946Z · LW(p) · GW(p)

You can tell whether a timeless decision agent would prefer to have been born by giving it opportunities to make decisions that acausally increase its probability of being born.

EDIT: For example, you can convince the agent that it was created because its creator believed that the agent would probably make paperclips. If the TDT agent values its existence, it will make paperclips.

I don't think a causal decision agent has anything that can be called a "preference to have been born".

comment by Adele_L · 2012-07-26T14:42:05.526Z · LW(p) · GW(p)

So once your misery goes below one unit, you get insane gains in utility for small reductions in misery?

Replies from: CronoDAS
comment by CronoDAS · 2012-07-29T00:06:44.532Z · LW(p) · GW(p)

I don't think my actual utility in real life follows that equation, but it's an example with the properties needed to make the point work. (Another analogy would be that the utility of being dead comes out to the square root of minus one, which can't be directly compared with real numbers.)

comment by Nisan · 2012-07-27T22:16:52.067Z · LW(p) · GW(p)

"use resources to increase the utility of people who already exist," not "increase average utility."

I agree, and I'd like to see a formal treatment of this idea.

comment by Giles · 2012-07-27T02:32:55.692Z · LW(p) · GW(p)

OK... I have another criticism of the repugnant conclusion not based on resource constraints.

We can imagine a sequence of worlds A, B, C, D... each with a greater population, lower average happiness and greater utility than the previous. But did anyone say that the happiness has to converge to zero?

If we're indifferent between worlds with the same number of people and the same average happiness then yes, it does converge to zero. But if we choose some other averaging function then not necessarily. When going from A+ to B- we might lower the left column only a tiny amount, and when we repeat the process all the way to Z then people might be only modestly less happy than they were in A.
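
A sketch with made-up numbers: population doubles at each step while average happiness falls, yet it converges to 5 rather than 0:

```python
for k in range(6):
    pop = 2 ** k
    avg = 5 + 5 / 2 ** k        # average happiness falls but never below 5
    print(pop, avg, pop * avg)  # total utility grows without the average -> 0
```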

comment by The_Duck · 2012-07-27T01:45:32.996Z · LW(p) · GW(p)

what the Mere Addition Paradox proves is not that you can make the world better by adding extra people, but rather that you can make it better by adding extra people and resources to support them.

Your conclusion seems to be that the "repugnant conclusion" is not actually repugnant. That is, the "dystopian" world of a large population leading barely worthwhile lives is better than the original world of a smaller population leading fulfilled lives. You argue that this is possible because the larger world has more resources.

I think resources are irrelevant: the conclusion is still repugnant, at least to my ethical intuitions. A galactic civilization making full use of the resources of the Milky Way, but in which lives are just barely worthwhile, is worse than a civilization stuck on Earth and using only its resources, but thereby sustaining a small paradise. It doesn't matter that the galactic civilization has vastly more resources.

Now, it's almost certainly true that using the resources of the galaxy we could make a civilization with much more value than either the dystopia or the Earth-bound paradise, if we did things right. No one is claiming that the galactic dystopia is optimal, given any quantity of resources. But you need to revise your ethical theory if, of those two choices, your ethical theory prefers the galactic dystopia, and your ethical intuitions disagree.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-07-27T06:05:57.305Z · LW(p) · GW(p)

Your conclusion seems to be that the "repugnant conclusion" is not actually repugnant. That is, the "dystopian" world of a large population leading barely worthwhile lives is better than the original world of a smaller population leading fulfilled lives.

My impression of the repugnant conclusion was that it claimed that a large population leading barely worthwhile lives is, all other things being equal, always better than a smaller one leading much better lives, even if the worlds they exist in are otherwise identical. For this reason I thought that I had refuted the repugnant conclusion if I demonstrated that in two worlds with identical access to resources, the one with the smaller population with high average utility is optimal.

In other words I thought that the repugnant conclusion implied: Earthly Paradise<Galactic Paradise with trillions of people<Galactic Dystopia with quadrillions of people.

I thought that it was enough to refute the repugnant conclusion if I demonstrated: Earthly Paradise<Galactic Dystopia<Galactic Paradise.

No one is claiming that the galactic dystopia is optimal, given any quantity of resources.

The impression I got was that the Repugnant Conclusion claimed precisely that. I thought it claimed that it is always better to use resources to create another life barely worth living than it is to improve existing lives, as long as everyone else was already at the "barely worth living" level.

But you need to revise your ethical theory if, of those two choices, your ethical theory prefers the galactic dystopia, and your ethical intuitions disagree.

You may be right. The idea that the galactic dystopia is better than the Earthly paradise is still kind of repugnant. I may just have to accept that A+ is worse than A for some reason.

comment by mytyde · 2012-09-01T01:12:47.325Z · LW(p) · GW(p)

This sounds like a "bacteria colony" analysis of humanity. It seems to me that by defining the hypothetical situation so narrowly, it is turned into a de facto mathematical equation, a graph of a function dependent only on the variable quantity of resources available.

It only sounds like a reasonable conclusion because of the ridiculous assumption that creating more people is a moral imperative. In reality, if people enjoyed a high standard of living, they would choose when to reproduce partly as a function of deciding not to lower their own standard of living. Are we to understand instead that societal resources would be devoted to coercive baby-making towards fulfilling an abstract ideal of morality?!

Unborn babies are not immoral, so why should born babies be moral?

comment by JohnEPaton · 2012-07-30T02:55:39.438Z · LW(p) · GW(p)

What is the tradeoff between average utility and total utility? Presumably a world with only ten people who all have tremendous utility would be just as repugnant as Parfit's world.

Replies from: Thrasymachus
comment by Thrasymachus · 2012-07-30T17:13:19.810Z · LW(p) · GW(p)

It should be noted that if you have any tradeoff between average and total util, you can still get MAPed into the RP: just add enough "total" utility in the A+ --> B move to compensate for the drop in average, and then iterate.

Lexical priority would work (read: average util, and only use total util as a tiebreaker), but this view seems to stand or fall with average util: if we find average util too costly, average util plus lexically inferior total util is unlikely to be significantly cheaper.
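
A sketch of the first point, assuming a linear blend V = a * average + (1 - a) * total and made-up numbers:

```python
def V(pop, per_person, a=0.5):
    average = per_person
    total = pop * per_person
    return a * average + (1 - a) * total

print(V(100, 10.0))    # world A: 100 people at 10 -> V = 505.0
print(V(10_000, 1.0))  # world B: more, worse-off people -> V = 5000.5 > V(A)
# Iterate the same move and per-person utility heads toward zero while V keeps rising.
```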

comment by FiftyTwo · 2012-07-26T21:46:29.361Z · LW(p) · GW(p)

Very good article.

One issue with the scenario as set up by Parfit is that it assumes the resources most important for human life are those which exist independently of the population (so there is some zero-sum distribution going on). In fact there are a lot of things important to utility that are positively correlated with the population: even if we ignore economies of scale in using resources, there are ideas, entertainment, social interaction, etc. that increase with the population. If I recall the studies on correlates of happiness (and Lukeprog's posts) correctly, it might mean that we are obliged to tile the universe with people with happy personal lives and personally fulfilling jobs, to the extent that can be done using the minimum resources per person.

In summary, I suppose I'm agreeing with your conclusion that it is about efficiency, but suggesting that population increase might possibly be the more efficient use of resources.

comment by Dreaded_Anomaly · 2012-07-26T08:50:41.243Z · LW(p) · GW(p)

This belongs on the front page. Very well done.

I have never agreed with the Repugnant Conclusion, but I have always had trouble putting my disagreement into words. Your dialogue makes several important points very clearly:

But don't say "having a high average utility." Say "use resources to increase the utility of people who already exist."

...

So "A+" differs from "A" both in the size of its population, and the amount of resources it has access to. Parfit was not "merely adding" people to the population. He was also adding resources.