The Very Repugnant Conclusion

post by Stuart_Armstrong · 2019-01-18T14:26:08.083Z · score: 27 (15 votes) · LW · GW · 14 comments

I haven't seen the very repugnant conclusion mentioned much here, so I thought I'd add it, as I need it as an example in a subsequent post [LW · GW].

Basically, the repugnant conclusion says that for any world of very happy people, there is a better world in which a far larger population all have lives that are barely worth living.

Some people come to accept the repugnant conclusion, sometimes reluctantly. More difficult to accept is the very repugnant conclusion: for any world W1 of very happy people, there is a better world W2 containing a vast number of people whose lives are barely worth living, plus a large number of people suffering terribly.

This one feels more negative than the standard repugnant conclusion, maybe because it strikes at our egalitarian and prioritarian instincts, or maybe because of the nature of suffering.

Anyway, my motto on these things is generally: when you find morally wrong outcomes that contradict your moral theory, then enrich your moral theory rather than twisting your moral judgements.

14 comments

Comments sorted by top scores.

comment by NRW · 2019-01-18T16:04:24.869Z · score: 10 (7 votes) · LW · GW

I think the conclusion is 'repugnant' because people don't want to admit the extent to which they are anti-natalists.

There is a large proportion of current lives such that, if I could instantiate infinitely many of them, I wouldn't think it was a good thing. And the 'repugnant conclusion' is just people realizing they actually feel this way when asked if they would like to shut up and multiply a negative number by infinity.

comment by Jan_Kulveit · 2019-01-18T18:03:52.799Z · score: 8 (5 votes) · LW · GW

Btw, when it comes to any practical implications, both of these repugnant conclusions depend on a likely incorrect aggregation of utilities. If we aggregate utilities with logarithms/exponentiation in the right places [LW · GW], and assume that resources are limited, the answer to the question "what is the best population given the limited resources?" is not repugnant.
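A minimal numerical sketch of how that can come out (illustrative functional form only, not necessarily the scheme in the linked post): with a fixed resource budget and logarithmic per-person welfare, the welfare-maximizing population is finite, and the people in it are well above "barely worth living".

```python
import numpy as np

R = 1000.0                   # fixed resource budget (arbitrary units)
N = np.arange(1, 2000)       # candidate population sizes

per_person = np.log(R / N)   # concave (logarithmic) welfare from an equal share of R
total = N * per_person       # aggregate welfare of the whole population

best = N[np.argmax(total)]
print(best)                  # 368, roughly R/e: a finite optimum, not "as many people as possible"
print(per_person[best - 1])  # ~1.0: lives at the optimum are well above "barely worth living"
```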

comment by avturchin · 2019-01-18T15:27:43.796Z · score: 4 (4 votes) · LW · GW

The world W2 could be our contemporary world, with 7.5 billion people and a lot of suffering, and W1 the world of just one tribe of happy "primitive" people, like the Sentinel Island people. I prefer W2, as it is much more interesting and diverse.

comment by Stuart_Armstrong · 2019-01-18T15:50:06.961Z · score: 7 (4 votes) · LW · GW

What would you think of a W3 that is much bigger, sadder, and blander?

Anyway, the point of the repugnant conclusion is that any world W1, no matter how ideal, has a corresponding W2.

comment by avturchin · 2019-01-18T16:54:53.414Z · score: 2 (2 votes) · LW · GW

This is a central argument of Phil Torres' paper against space colonisation: there will be space wars!

Probably we should include in the calculation not only the averaged wellbeing of individuals, which is a "goodharted" measure of social wellbeing, but also the properties of the whole world, to which any individual may have access.

comment by ErickBall · 2019-01-18T19:00:54.443Z · score: 1 (1 votes) · LW · GW

Why is average wellbeing a goodharted measure?

comment by Jayson_Virissimo · 2019-01-21T01:36:22.656Z · score: 4 (2 votes) · LW · GW

Offing those with low wellbeing increases average wellbeing.
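For instance (illustrative numbers): a population with wellbeing levels 1, 5 and 9 has an average of 5; remove the person at 1 and the average rises to 7, even though nobody is made better off.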

comment by Stuart_Armstrong · 2019-01-21T13:10:01.908Z · score: 2 (1 votes) · LW · GW

That in itself can be solved (if you break the symmetry between killing / not allowing to live), but it still remains that a tiny super-happy population (of one person in the limit) is what's aimed at.

comment by avturchin · 2019-01-18T21:12:47.853Z · score: 3 (2 votes) · LW · GW

It ignores many important aspects of human wellbeing:

1) the preference to stay alive, even if life is unpleasant, plus some other preferences;

2) the time relation between observer-moments: e.g. a short intense pain is not very important;

3) non-linear preferences about the intensity of pleasure and pain: people could work for a long time for a short pleasure;

4) social values (family) and intellectual values (knowledge, diversity of experiences).

comment by Dagon · 2019-01-21T03:13:08.131Z · score: 1 (2 votes) · LW · GW

Declining marginal moral weight answers this with a built-in preference for diversity. More reference classes are good; more quantity isn't worth very much trade-off in intensity.

comment by Dagon · 2019-01-18T22:39:49.318Z · score: 3 (5 votes) · LW · GW

This is pretty easily fixed with declining marginal moral weight as quantity increases. And this matches my intuitions pretty well. Basically, accept and make use of scope insensitivity (though probably at higher numbers than evolved into you).

An additional happy person carries less weight going from 1,000,000,000 happy people to 1,000,000,001 than it does when going from 100 to 101. The same goes for suffering, whether you think suffering is comparable with happiness or think it's another dimension: one more sufferer is worse when suffering is rare and there are few, and less bad (but not actively good) when nothing materially changes.
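A minimal sketch of what such a declining weight could look like (the functional form and scale are illustrative; the comment doesn't commit to either):

```python
def marginal_weight(n, scale=1_000_000):
    # Moral weight of the n-th member of a reference class, declining as the
    # class gets large. Illustrative functional form only.
    return 1.0 / (1.0 + n / scale)

print(marginal_weight(101))            # ~1.0: the 101st happy person still counts almost fully
print(marginal_weight(1_000_000_001))  # ~0.001: the billion-and-first adds very little
```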

"When you find morally wrong outcomes that contradict your moral theory, then enrich your moral theory rather than twisting your moral judgements."

I don't think this generalizes. If your moral intuition (snap judgements) contradicts your moral theory (considered judgements), you need to expend effort to figure out which one is more likely to apply.

comment by philh · 2019-01-24T15:52:25.681Z · score: 2 (1 votes) · LW · GW

That doesn't fix it; it just means you need bigger numbers before you run into the problem.

Maybe if you have an asymptote, but I fully expect that you'd still run into problems then.

comment by Stuart_Armstrong · 2019-01-24T16:41:55.117Z · score: 4 (2 votes) · LW · GW

Geometric discounting could fix this, as the sum of the series converges.

I once had a (prioritarian) idea where you order people's utilities from lowest to highest and apply geometric discounting starting at the lowest. It's not particularly elegant or theoretically grounded, but it does avoid the repugnant conclusion (indeed, I think geometric discounting, applied in any order, removes the RC).
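A minimal sketch of that ordering-based discounting (utilities, ordering, and discount factor all illustrative): because the discounted series converges, no quantity of barely-positive lives can outweigh a small, very happy population.

```python
def discounted_value(utilities, gamma=0.9):
    # Sort ascending (worst-off first, the prioritarian ordering) and apply
    # geometric discounting; the total is bounded by max(utilities) / (1 - gamma).
    return sum((gamma ** i) * u for i, u in enumerate(sorted(utilities)))

print(discounted_value([100.0] * 10))        # ~651: a small, very happy world
print(discounted_value([0.01] * 1_000_000))  # ~0.1: bounded, however many barely-positive lives are added
```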

comment by c0rw1n · 2019-01-18T18:51:42.573Z · score: 3 (3 votes) · LW · GW

If your theory leads you to an obviously stupid conclusion, you need a better theory.

Total utilitarianism is boringly wrong for this reason, yes.

What you need is non-stupid utilitarianism.

First, utility is not a scalar number, even for one person. Utility and disutility are not the same axis: if I hug a plushie, that is utility without any disutility; if I kick a bedpost, that is disutility without utility; and if I do both at the same time, neither of them ends up compensating for the other. They are not the same dimension with the sign reversed.

This is before going into the details where, for example, preference utilitarianism is a model in which each preference is its own axis, and so is each dispreference. Those axes are sometimes orthogonal and sometimes trade off against each other, a little or a lot. The numbers are fuzzy and imprecise, and the weighting of the needs/preferences/goals/values also changes over time: for example, it is impossible to maximize for sleep, because if you sleep all the time you starve and die, and if you maximize for food you die of eating too much or of never sleeping, or whatever.

We are not maximizers, we are satisficers, and trying to maximize any one need/goal/value by trading it off against all the others leads to a very stupid death. We are more like feedback-based control systems that need to keep a lot of parameters within good boundaries.
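A minimal sketch of the "separate axes" point (the representation is illustrative only): utility and disutility kept as two non-negative components that do not cancel.

```python
from dataclasses import dataclass

@dataclass
class Experience:
    utility: float = 0.0      # kept as separate non-negative axes,
    disutility: float = 0.0   # not one signed scalar

hug_plushie = Experience(utility=1.0)
kick_bedpost = Experience(disutility=1.0)

# Doing both at once is not a neutral experience: nothing cancels out.
both = Experience(utility=hug_plushie.utility, disutility=kick_bedpost.disutility)
print(both)   # Experience(utility=1.0, disutility=1.0), not 0
```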

Second, interpersonal comparisons are somewhere between hazardous and impossible. Going back to the example of preference utilitarianism, people have different levels of enjoyment of the same things (in addition to those degrees also changing over time intrapersonally).

Third, there are limits to the disutility that a person will endure before it rounds off to infinite disutility. Under sufficient torture, people will prefer to die rather than bear it any longer; at this point it can be called subjectively infinite disutility (simplifying, so as not to get bogged down in discussing discount rates and limited lifespans).

Third and a halfth, it is impossible to get so much utility that it can be rounded off to positive infinity, short of maybe FiO or some other form of hedonium/orgasmium/eudaimonium/whatever of the sort. It is not infinite, but it is "whatever the limit is for a sapient mind" (which is something like "all the thermostat variables satisfied, including those that require making a modicum of effort to satisfy the others", because minds are a tool for doing that and seem to require doing it to some intra- and interpersonally varying extent).

Fourth, and the most important point against total utilitarianism: you need to account for the entire distribution. Even assuming, very wrongly as explained above, that you can actually measure the utility one person gets and compare it to the utility another person gets, you can still have the bottom of your distribution of utility being sufficiently low that the bottom whatever-% of the population would prefer to die immediately, which is (simplified) infinite disutility and cannot be traded for the limit of positive utility. (Torture and dust specks: no finite number of dust specks can trade off against the infinite disutility of a degree of torture sufficient to make even one single victim prefer to die.) (This still works even if the victim dies in a day, because you need to measure over all of history, from the beginning of when your moral theory takes effect.) (For the smartasses in the back row: no, that doesn't mean that there having been that level of torture in the past absolves you from not doing it in the future under the pretext that the disutility over all of history already sums to infinity. Yes it does, and don't you make it worse.)

But alright. Assuming you can measure utility Correctly, let's say you have the floor of the distribution at least epsilon above the minimum viable level. What then? Job done? No. You also want to maximize the entire area under the curve, raising it as high as possible, which is the point that total utilitarianism actually got right. And, in a condition of scarcity, that may require not having too many people; at least, it requires the number of people to grow more slowly than the distributable utility.
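A minimal sketch of that two-step rule (the floor value and the comparison are illustrative; the comment doesn't fix them): a hard constraint on the worst-off is checked first, and only then does the total, the part total utilitarianism got right, decide.

```python
FLOOR = 0.0   # hypothetical "minimum viable" utility level

def acceptable(world):
    # Hard lexical constraint: every life must be strictly above the floor.
    return min(world) > FLOOR

def better(world_a, world_b):
    # The floor constraint dominates; totals only decide among acceptable worlds.
    a_ok, b_ok = acceptable(world_a), acceptable(world_b)
    if a_ok != b_ok:
        return world_a if a_ok else world_b
    return world_a if sum(world_a) >= sum(world_b) else world_b

# A huge total cannot buy back even one life below the floor:
print(better([5.0, 5.0, 5.0], [100.0, 100.0, -1.0]))   # [5.0, 5.0, 5.0]
```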