The Very Repugnant Conclusion

post by Stuart_Armstrong · 2019-01-18T14:26:08.083Z · LW · GW · 19 comments

I haven't seen the very repugnant conclusion mentioned much here, so I thought I'd add it, as I need it as an example in a subsequent post [LW · GW].

Basically, the repugnant conclusion says:

For any possible population of at least ten billion people, all with a very high quality of life, there must be some much larger imaginable population whose existence, if other things are equal, would be better even though its members have lives that are barely worth living.

Some people come to accept the repugnant conclusion, sometimes reluctantly. More difficult to accept is the very repugnant conclusion:

For any world W1 of people all living wonderful lives, there is a "better" world W2 containing a vast number of people whose lives are barely worth living, along with a large number of people whose lives are filled with suffering.

This one feels more negative than the standard repugnant conclusion, maybe because it strikes at our egalitarian and prioritarian instincts, or maybe because of the nature of suffering.
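
To make the contrast concrete, here is a toy calculation (the group sizes and utility numbers are invented purely for illustration) showing how a simple total-utilitarian sum ends up ranking a very repugnant world above a thriving one:

```python
# Toy illustration of the very repugnant conclusion under total utilitarianism.
# All numbers are made up for illustration.

def total_utility(population):
    """Sum of (number of people * utility per person) over each group."""
    return sum(count * utility for count, utility in population)

# W1: a small, thriving population.
w1 = [(1_000, 100.0)]          # 1,000 people, each with very high welfare

# W2: a vast population of lives barely worth living,
# plus a group of people in severe suffering.
w2 = [(1_000_000, 0.2),        # lives barely worth living
      (10_000, -5.0)]          # lives of serious suffering

print(total_utility(w1))       # 100000.0
print(total_utility(w2))       # 200000.0 - 50000.0 = 150000.0
# The total sum ranks W2 above W1, which is the very repugnant conclusion.
```

However the numbers for W1 are chosen, scaling up the barely-worth-living group eventually tips the sum in W2's favour.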

Anyway, my motto on these things is generally:

When you find morally wrong outcomes that contradict your moral theory, then enrich your moral theory rather than twisting your moral judgements.

19 comments


comment by NRW · 2019-01-18T16:04:24.869Z · LW(p) · GW(p)

I think the conclusion is 'repugnant' because people don't want to admit the extent to which they are anti-natalists.

There is a big proportion of current lives such that, if I could instantiate infinitely many of them, I wouldn't think it a good thing. And the 'repugnant conclusion' is just people realizing they actually feel this way when asked whether they would like to shut up and multiply a negative number by infinity.

comment by Jan_Kulveit · 2019-01-18T18:03:52.799Z · LW(p) · GW(p)

Btw, when it comes to any practical implications, both of these repugnant conclusions depend on a likely incorrect aggregation of utilities. If we aggregate utilities with logarithms/exponentiation in the right places [LW · GW], and assume resources are limited, the answer to the question "what is the best population given the limited resources" is not repugnant.
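
As a rough illustration of the limited-resources point (my own toy model, not the specific aggregation in the linked post): if a fixed pool of resources is split evenly and each person's utility is logarithmic in their share, total welfare peaks at a finite population size rather than growing without bound.

```python
import math

def total_welfare(n_people, resources):
    """Total welfare when a fixed resource pool is split evenly and each
    person's utility is logarithmic in their share.
    (Illustrative aggregation only, not the model in the linked post.)"""
    share = resources / n_people
    return n_people * math.log(share)

R = 1000.0
best_n = max(range(1, 10_000), key=lambda n: total_welfare(n, R))
print(best_n)   # 368, i.e. roughly R/e: a finite optimum, not "as many people as possible"
```

With u = log(share), the total n * log(R/n) is maximized near n = R/e, so "more people" stops being better well before lives become barely worth living.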

comment by c0rw1n · 2019-01-18T18:51:42.573Z · LW(p) · GW(p)

If your theory leads you to an obviously stupid conclusion, you need a better theory.

Total utilitarianism is boringly wrong for this reason, yes.

What you need is non-stupid utilitarianism.

First, utility is not a scalar number, even for one person. Utility and disutility are not the same axis: if I hug a plushie, that is utility without any disutility; if I kick a bedpost, that is disutility without utility; and if I do both at the same time, neither compensates for the other. They are not one dimension with the sign reversed. This is before getting into the details where, for example, preference utilitarianism is a model in which each preference is its own axis, and so is each dispreference. Those axes are sometimes orthogonal and sometimes trade off against each other, a little or a lot. The numbers are fuzzy and imprecise, and the weighting of the needs/preferences/goals/values also changes over time: for example, it is impossible to maximize for sleep, because if you sleep all the time you starve and die, and if you maximize for food you die of eating too much or of never sleeping or whatever. We are not maximizers, we are satisficers, and trying to maximize any one need/goal/value by trading it off against all the others leads to a very stupid death. We are more like feedback-based control systems that need to keep a lot of parameters within good boundaries.

Second, interpersonal comparisons are somewhere between hazardous and impossible. Going back to the example of preference utilitarianism, people have different levels of enjoyment of the same things (in addition to those degrees also changing over time intrapersonally).

Third, there are limits to the disutility that a person will endure before it rounds off to infinite disutility. Under sufficient torture, people will prefer to die rather than bear it any longer; at this point, it can be called subjective infinite disutility (simplifying so as not to get bogged down in discussing discounting rates and limited lifespans).

Third and a halfth, it is impossible to get so much utility that it can be rounded off to positive infinity, short of maybe FiO or some other form of hedonium/orgasmium/eudaimonium/whatever of the sort. It is not infinite, but it is "whatever the limit is for a sapient mind" (which is something like "all the thermostat variables satisfied, including those that require making a modicum of effort to satisfy the others", because minds are a tool for doing that and seem to require doing it to some extent, varying intra- and interpersonally).

Fourth, and the most important point against total utilitarianism: you need to account for the entire distribution. Even assuming, very wrongly as explained above, that you can actually measure the utility that one person gets and compare it to the utility that another person gets, you can still have the bottom of your distribution of utility being so low that the bottom whatever% of the population would prefer to die immediately, which is (simplified) infinite disutility and cannot be traded for the limit of positive utility. (Torture and dust specks: no finite number of dust specks can trade off against the infinite disutility of a degree of torture sufficient to make even one single victim prefer to die.) (This still works even if the victim dies in a day, because you need to measure over all of history, from the beginning of when your moral theory begins to take effect.) (For the smartasses in the back row: no, the fact that that level of torture has already occurred in the past does not absolve you from avoiding it in the future on the pretext that the disutility over all of history already sums to infinity. Yes, it already does; don't you make it worse.)

But alright. Assuming you can measure utility Correctly, let's say you have the floor of the distribution at least epsilon above the minimum viable level. What then? Job done? No. You also want to maximize the entire area under the curve, raising it as high as possible, which is the point that total utilitarianism actually got right. And, in a condition of scarcity, that may require having not too many people: at the least, it requires the number of people to rise more slowly than the distributable utility.

Replies from: TAG
comment by TAG · 2022-04-20T22:19:54.310Z · LW(p) · GW(p)
comment by Dagon · 2019-01-18T22:39:49.318Z · LW(p) · GW(p)

This is pretty easily fixed with declining marginal moral weight as quantity increases. And this matches my intuitions pretty well. Basically, accept and make use of scope insensitivity (though probably at higher numbers than evolved into you).

An additional happy person carries less weight going from 1,000,000,000 happy people to 1,000,000,001 than it does when going from 100 to 101. Same for suffering, whether you think suffering is comparable with happiness or think it's another dimension: one more sufferer is worse when suffering is rare and there are few, and less bad (but not actively good) when one more barely changes anything materially.
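
One way to sketch this (my own toy formalization, not necessarily the exact proposal) is to give the k-th person at a given welfare level a geometrically declining weight, so that duplicate additions matter less and less:

```python
def weighted_value(count, utility, decay=0.999):
    """Moral value of `count` people at the same utility level, where the
    k-th person (counting from zero) contributes utility * decay**k.
    Closed form of the geometric series: utility * (1 - decay**count) / (1 - decay).
    The decay rate 0.999 is an arbitrary illustrative choice."""
    return utility * (1 - decay ** count) / (1 - decay)

# An extra happy person matters less at a billion than at a hundred:
print(weighted_value(101, 1.0) - weighted_value(100, 1.0))                      # ~0.90
print(weighted_value(1_000_000_001, 1.0) - weighted_value(1_000_000_000, 1.0))  # ~0.0
```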

When you find morally wrong outcomes that contradict your moral theory, then enrich your moral theory rather than twisting your moral judgements.

I don't think this generalizes. If your moral intuition (snap judgements) contradicts your moral theory (considered judgements), you need to expend effort to figure out which one is more likely to apply.

Replies from: philh
comment by philh · 2019-01-24T15:52:25.681Z · LW(p) · GW(p)

That doesn't fix it; it just means you need bigger numbers before you run into the problem.

Maybe if you have an asymptote, but I fully expect that you'd still run into problems then.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2019-01-24T16:41:55.117Z · LW(p) · GW(p)

Geometric discounting could fix this, as the sum of the series converges.

I once had a (prioritarian) idea where you order people's utility from lowest to highest, and apply geometric discounting starting at the lowest. It's not particularly elegant or theoretically grounded, but it does avoid the repugnant conclusion (indeed I think geometric discounting, applied in any order, removes the RC).
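
A minimal sketch of that aggregation rule (the discount factor of 0.9 is an arbitrary choice for illustration):

```python
def prioritarian_geometric(utilities, discount=0.9):
    """Sort utilities from lowest to highest and weight the i-th (0-indexed)
    by discount**i, so the worst off count the most and the weights form a
    convergent series no matter how many people are added."""
    return sum(u * discount ** i for i, u in enumerate(sorted(utilities)))

print(prioritarian_geometric([100, 100, 100]))             # 271.0
print(prioritarian_geometric([0.1] * 1_000_000 + [-50]))   # ~ -49.1: a million barely-good
                                                           # lives never outweigh one very bad one
```

Because the weights sum to at most 1/(1 - discount), no number of barely-positive lives can push the total above a fixed bound, which is what blocks the repugnant conclusion here.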

Replies from: MichaelStJules
comment by MichaelStJules · 2019-08-31T17:00:10.180Z · LW(p) · GW(p)

Erik Carlson called this the Moderate Trade-off Theory. See also Sider's Geometrism and Carlson's discussion of it here.

One concern I have with this approach is that similar interests do not receive similar weight, i.e. if the utility of one individual approaches another's, then the weight we give to their interests should also approach each other. I would be pretty happy if we could replace the geometric discounting with a more continuous discounting without introducing any other significant problems. The weights could each depend on all of the utilities in a continuous way.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2019-08-31T13:25:42.902Z · LW(p) · GW(p)

Something like $\sum_i f(u_i)\, u_i$, or that sum normalized by $\sum_i f(u_i)$, or in general weights depending continuously on the utilities (for decreasing, continuous $f$), could work, I think.

Replies from: MichaelStJules, Gurkenglas
comment by MichaelStJules · 2019-08-31T15:36:35.955Z · LW(p) · GW(p)

The unnormalized sum $\sum_i f(u_i)\, u_i$ won't converge as more people (with good lives or not) are added, so it doesn't avoid the Repugnant Conclusion or Very Repugnant Conclusion, and it will allow dust specks to outweigh torture.

Normalizing by the sum of weights will give less weight to the worst off as more people are added. If the weighted average is already negative, then adding people with negative but better than average lives will improve the average. And it will still allow dust specks to outweigh torture (the population has a fixed size in the two outcomes, so normalization makes no difference).

In fact, anything of the form $\sum_i g(u_i)$ for increasing $g$ will allow dust specks to outweigh torture for a large enough population, and if $g(0) = 0$, it will also lead to the Repugnant Conclusion and Very Repugnant Conclusion (and if $g(0) < 0$, it will lead to the Sadistic Conclusion, and if $g(0) > 0$, then it's good to add lives not worth living, all else equal). If we only allow the weighting to depend on the population size $n$, as by multiplying by some factor which depends only on $n$, then (regardless of the value of that factor), it will still choose torture over dust specks, with enough dust specks, because that trade-off is at a fixed population size anyway. EDIT: If the weighting depends on the utilities in some more complicated way, I'm not sure that it would necessarily lead to torture over dust specks.
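
A quick numerical check of the fixed-population-size point, using one arbitrary increasing function (a signed cube root, my own choice) purely as an example of a separable $g$:

```python
def g(u):
    """An arbitrary increasing, sign-preserving function (signed cube root)."""
    return (abs(u) ** (1 / 3)) * (1 if u >= 0 else -1)

def total(utilities):
    return sum(g(u) for u in utilities)

N = 100_000            # bystanders
baseline = 1.0
torture = [baseline - 1000.0] + [baseline] * N    # one person horribly tortured
specks  = [baseline - 0.001] * (N + 1)            # everyone gets a dust speck

print(total(torture) > total(specks))   # True: with enough people, this separable
                                        # theory ranks the torture outcome higher
```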

I had in mind something like weighting each $u_i$ by a decreasing function of $u_i - u_{\min}$, where $u_{\min}$ is the minimum utility (so it gives weight 1 to the worst off individual), but it still leads to the Repugnant Conclusion and at some point choosing torture over dust specks.

What I might like is to weight by something like $\gamma^i$ for $u_1 \le u_2 \le \dots \le u_n$, where the utilities are labelled in increasing (nondecreasing) order, but if $u_i$ and $u_{i+1}$ are close (and far from all the other utilities, either in an absolute sense or in a relative sense), they should each receive weight close to $(\gamma^i + \gamma^{i+1})/2$. Similarly, if there are clustered utilities, they should each receive weight close to the average of the weights we'd give them in the original Moderate Trade-off Theory.

comment by Gurkenglas · 2019-08-31T14:38:43.226Z · LW(p) · GW(p)

The utility of the universe should not depend on the order that we assign to the population. We could say that there is a space of lives one could live, and each person covers some portion of that space, and identical people are either completely redundant or only reinforce coverage of their region, and our aim should be to cover some swath of this space.

comment by avturchin · 2019-01-18T15:27:43.796Z · LW(p) · GW(p)

The world W2 could be our contemporary world, with 7.5 billion people and a lot of suffering, and W1 the world of just one tribe of happy "primitive" people, like the Sentinel Island people. I prefer W2, as it is much more interesting and diverse.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2019-01-18T15:50:06.961Z · LW(p) · GW(p)

What would you think of W3, much bigger, sadder, and blander?

Anyway, the point of the repugnant conclusion is that any world W1, no matter how ideal, has a corresponding W2.

Replies from: avturchin, Dagon
comment by avturchin · 2019-01-18T16:54:53.414Z · LW(p) · GW(p)

This is a central argument of Phil Torres' paper against space colonisation: there will be space wars!

Probably we should include in the calculation not only the averaged wellbeing of individuals, which is a "goodharted" measure of social wellbeing, but also properties of the whole world, to which any individual may have access.

Replies from: ErickBall
comment by ErickBall · 2019-01-18T19:00:54.443Z · LW(p) · GW(p)

Why is average wellbeing a goodharted measure?

Replies from: Jayson_Virissimo, avturchin
comment by Jayson_Virissimo · 2019-01-21T01:36:22.656Z · LW(p) · GW(p)

Offing those with low wellbeing increases average wellbeing.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2019-01-21T13:10:01.908Z · LW(p) · GW(p)

That in itself can be solved (if you break the symmetry between killing / not allowing to live), but it still remains that a tiny super-happy population (of one person in the limit) is what's aimed at.

comment by avturchin · 2019-01-18T21:12:47.853Z · LW(p) · GW(p)

It ignores many important aspects of human wellbeing:

1) the preference to stay alive, even if life is unpleasant, plus some other preferences

2) the time relation between observer-moments: e.g. a short intense pain is not very important

3) non-linear preferences about the intensity of pleasure and pain: people may work for a long time for a short pleasure

4) social values (family) and intellectual values (knowledge, diversity of experiences)

comment by Dagon · 2019-01-21T03:13:08.131Z · LW(p) · GW(p)

Declining marginal moral weight answers this with a built-in preference for diversity. More reference classes are good; more quantity isn't worth much of a trade-off in intensity.

comment by A Furlong (a-furlong) · 2022-04-20T18:50:16.596Z · LW(p) · GW(p)

This is an example of the more-maximum fallacy. Utilitarianism actually prescribes the maximum happiness, not merely more happiness. This presents a false dichotomy between two non-utilitarian worlds; the actual utilitarian world is the unwritten third option, one in which the greater number of people is happy. Notice that this third utilitarian world is also more palatable to followers of other moral theories.