# Logarithms and Total Utilitarianism

post by pvs · 2018-08-09T08:49:16.753Z · LW · GW · 31 comments


*Epistemic status: I might be reinventing the wheel here*

A common cause for rejection of total utilitarianism is that it implies the so-called Repugnant Conclusion, of which a lot has been written elsewhere. I will argue that while this implication is solid in theory, it does not apply in our current known universe. My view is similar to the one expressed here [LW · GW], but I try to give more details.

## The Repugnant Conclusion IRL

The greatest relevance of the RC in practice arises in situations of scarce resources and Malthusian population traps¹: We compare population A, where there are few people with each one having plentiful resources, and population Z, which has grown from A until the average person lives in near-subsistence conditions.

Let's formalize this a bit: suppose each person requires 1 unit of resources for living, so that the utility of a person living on 1 resource is exactly 0: a completely neutral life. Furthermore, suppose utility is linear w.r.t. resources: doubling resources means doubling utility, and 10 resources correspond to 1 utility. If there are 100 resources in the world, population A might contain 10 people with 10 resources each and total utility 10; population Z might contain 99 people with 100/99 resources each and total utility also 10.

So in this model, we are indifferent between A and Z even as everyone in Z is barely subsisting, and this would be the Repugnant Conclusion². But this conclusion depends crucially on the relationship between resources and utility which we have assumed to be linear. What if our assumption is wrong? What is this relationship in the actual world? Note that this is an empirical question³.

It is well known that self-reported happiness varies logarithmically with income⁴, both between countries and for individuals within each country, so it seems reasonable to assume that the utility-resources relation is logarithmic: exponential increases in resources bring linear increases in utility.

Back to our model, assuming log utility, how do we now compare A and Z? If utility per person is log₁₀(r), where r are the resources available to that person, then total utility is U = ∑ᵢ log₁₀(rᵢ). Assuming equality in the population (see the Equality section), if R are total resources and N is population size, each person has R/N resources and so we have U = N·log₁₀(R/N).

We can plot total utility U (vertical axis) as a function of N (horizontal axis) for R = 100:

Here we can see two extremes of zero utility: at N = 0, where there are no persons, and at N = 100, where each person lives on 1 resource, at subsistence level. In the middle there is a sweet spot, and the maximum M lies at around 37 people⁵.
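A quick numeric check of this model (total utility N·log₁₀(R/N) with R = 100, as above):

```python
import math

def total_utility(N, R=100):
    """Total utility N * log10(R/N) when R resources are split equally among N people."""
    return N * math.log10(R / N)

# Scan all feasible integer population sizes for R = 100.
best_N = max(range(1, 101), key=total_utility)
print(best_N, round(total_utility(best_N), 2))  # prints: 37 15.98
```

The maximum lands at N = 37, matching the plot; population A (N = 10) scores 10 and population Z (N = 99) scores about 0.43.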

Now we can answer our question! Population A, where N = 10, is better than population Z, where N = 99, but M is a superior alternative to both.

So I have shown that there is a population M, larger and better than A, where everyone is worse off. How is that different from the RC? Well, the difference is that this does not happen for every population, but only for those where average well-being is relatively high. Furthermore, the average individual in M is far above subsistence.

## Equality

In my model I assumed an equal distribution of resources over the population, mainly to simplify the calculations, but also because under the log relationship and if the population is held constant, total utilitarianism endorses equality. I will try to give an intuition for this and then a formal proof.

This graph represents individual utility (vertical axis) vs individual resources (horizontal axis). If there are two people, A and B, with 2.5 and 7.5 resources respectively, we can reallocate resources so that both are now at point M, with 5 each. Note that the increase in utility for A is 3, while the decrease for B is a bit less than 2, so total utility increases by more than 1.
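The quoted numbers fit if the graph's vertical axis is scaled as 10·log₁₀(r); that scaling is an assumption here, and any positive multiple of the log gives the same qualitative result:

```python
import math

u = lambda r: 10 * math.log10(r)  # assumed axis scaling for the graph

gain_A = u(5) - u(2.5)  # A moves up from 2.5 to 5 resources
loss_B = u(7.5) - u(5)  # B moves down from 7.5 to 5 resources
print(round(gain_A, 2), round(loss_B, 2), round(gain_A - loss_B, 2))  # prints: 3.01 1.76 1.25
```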

This happens no matter where A and B are in the graph, due to the concavity of the log function. As long as there is a difference in wealth, you can increase total utility by redistributing resources equally.

For a formal proof, see ⁶.

## Implications

The main conclusion I get from this is that although total utilitarianism is far from perfect, it might give good results in practice. The Repugnant Conclusion is not dead, however. We can certainly imagine some sentient aliens, AIs or animals whose utility function is such that greater, worse-average-utility populations end up being better. But in this case, should we really call it repugnant? Could our intuition be fine-tuned for thinking about humans, and thus not applicable to those hypothetical beings?

I don't know to what extent others have explored the connection between total utilitarianism and equality, but I was surprised when I realized that the former could imply the latter. Of course, even if total utility is all that matters, it might not be possible to reshuffle resources among individuals with complete liberty, as is the case in my model.

## Footnotes

1: One might consider other ways of controlling individual utility in a population besides resources (e.g. mind design, torture...) but these seem less relevant to me.

2: Actually, in the original formulation Z is shown to be *better* than A, not just equally good.

3: As long as utility is well defined, that is. Here I will use self-reported happiness as a proxy for utility.

4: See the charts here

5: We can find the exact maximum for any R with a bit of calculus: setting dU/dN = d/dN [N·log₁₀(R/N)] = log₁₀(R/N) − 1/ln(10) equal to 0 gives ln(R/N) = 1, i.e. N = R/e. For R = 100 this is N = 100/e ≈ 36.8.

A nice property of this is that the ratio R/N that maximizes total utility is the constant e ≈ 2.718 for all R (and in fact this ratio does not depend on the arbitrary choice of base 10 for the logarithms; the base only rescales the utility values).
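A numeric sanity check (brute-force grid search, no calculus) that the optimizing ratio R/N comes out near e for any R:

```python
import math

def optimal_N(R, steps=100_000):
    # Brute-force search for the N in (0, R) maximizing N * log10(R/N).
    candidates = (R * k / steps for k in range(1, steps))
    return max(candidates, key=lambda N: N * math.log10(R / N))

for R in (10, 100, 1000):
    print(R, round(R / optimal_N(R), 3))  # ratio is ~2.718 ≈ e for every R
```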

6: For a population of N individuals, the distribution of resources (r₁, …, r_N) which maximizes total utility is that where rᵢ = R/N for all i. The proof goes by induction on N.

This is obvious in the case N = 1. For the induction step, we can separate a population of N + 1 into two sets of N and 1 individuals respectively, so that total utility is U = ∑ᵢ₌₁ᴺ log₁₀(rᵢ) + log₁₀(r_{N+1}). Suppose we allocate R − x resources to the group of N, and x to the last person. By hypothesis, each of the N people must receive (R − x)/N resources to maximize their total utility, so U(x) = N·log₁₀((R − x)/N) + log₁₀(x).

Now we have to decide how much x should be. Setting dU/dx = (1/ln 10)·(1/x − N/(R − x)) equal to 0 gives R − x = N·x, so x = R/(N + 1).

Therefore, each of the first N individuals receives (R − x)/N = R/(N + 1) resources, and the last one receives x = R/(N + 1): the equal distribution.
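The claim can also be spot-checked numerically; a sketch comparing the equal split of R = 100 among N = 5 people against random allocations:

```python
import math
import random

def total_utility(alloc):
    return sum(math.log10(r) for r in alloc)

R, N = 100.0, 5
equal = [R / N] * N

random.seed(0)
for _ in range(10_000):
    # Random allocation: cut [0, R] at N-1 random points.
    cuts = sorted(random.uniform(0.001, R - 0.001) for _ in range(N - 1))
    alloc = [b - a for a, b in zip([0.0] + cuts, cuts + [R])]
    # No random allocation beats the equal split.
    assert total_utility(alloc) <= total_utility(equal) + 1e-9
```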

## 31 comments

Comments sorted by top scores.

## comment by Unnamed · 2018-08-13T05:40:09.291Z · LW(p) · GW(p)

Looking at the math of dividing a fixed pool of resources among a non-fixed number of people, a feature of log(r) that matters a lot is that log(0)<0. The first unit of resources that you give to a person is essentially wasted, because it just gets them up to 0 utility (which is no better than just having 1 fewer person around).

That favors having fewer people, so that you don't have to keep wasting that first unit of resource on each person. If the utility function for a person in terms of their resources was f(r)=r-1 you would similarly find that it is best not to have too many people (in that case having exactly 1 person would work best).

Whereas if it was f(r)=sqrt(r) then it would be best to have as many people as possible, because you're starting from 0 utility at 0 resources and sqrt is steepest right near 0. Doing the calculation... if you have R units of resources divided equally among N people, the total utility is sqrt(RN). log(1+r) is similar to sqrt - it increases as N increases - but it is bounded if R is fixed and just approaches that bound (if we use natural log, that bound is just R).

To sum up: diminishing marginal utility favors having more people each with fewer resources (in addition to favoring equal distribution of resources), f(0)<0 favors having fewer people each with more resources (to avoid "wasting" the bit of resources that get a person up to 0 utility), and functions with both features like log(r) favor some intermediate solution with a moderate population size.
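The three regimes in this summary can be checked numerically; a minimal sketch (natural log for the bounded case, as above):

```python
import math

R = 100.0
Ns = (10, 1_000, 100_000)

sqrt_totals = [N * math.sqrt(R / N) for N in Ns]      # = sqrt(R*N): unbounded growth in N
linear_totals = [R - N for N in Ns]                   # f(r) = r - 1: favors few people
log1p_totals = [N * math.log(1 + R / N) for N in Ns]  # approaches but never reaches R

assert sqrt_totals == sorted(sqrt_totals)                     # increasing in N
assert linear_totals == sorted(linear_totals, reverse=True)   # decreasing in N
assert all(t < R for t in log1p_totals)                       # bounded by R
```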

## comment by zulupineapple · 2018-08-13T19:16:00.551Z · LW(p) · GW(p)

Note that the key feature of the log function used here is not its slow growth, but the fact that it takes negative values on small inputs. For example, if we take the function u(r) = log(r+1), so that u(0) = 0, then the RC holds.

Although there are also solutions that prevent the RC without taking negative values, e.g. u(r) = exp(-1/r).
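Both observations are easy to verify numerically; a sketch assuming R = 100 resources split equally among N people:

```python
import math

R = 100.0
u_shifted = lambda N: N * math.log10(1 + R / N)  # u(r) = log10(r + 1), so u(0) = 0
u_exp = lambda N: N * math.exp(-N / R)           # u(r) = exp(-1/r) with r = R/N

Ns = range(1, 2001)
# With the shifted log, total utility keeps growing with N: the RC holds.
assert all(u_shifted(n) < u_shifted(n + 1) for n in Ns)
# With exp(-1/r), total utility peaks at a finite population (N = R).
assert max(Ns, key=u_exp) == 100
```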

## comment by JR · 2018-08-09T13:39:07.320Z · LW(p) · GW(p)

In your linear model, the hypotheses "the utility of a person living on 1 resource is 0" and "doubling resources doubles utility" imply that utility is always 0. Maybe you meant the second hypothesis to be "doubling the number of resources beyond the first doubles utility", so that a person's utility is 0.1 times the number of resources beyond the first. In this version of the linear model, the total utility of population A is 9 (0.9 utility per person), and the total utility of population Z is 0.1 (approximately 0.001 utility per person).

## comment by Jsevillamol · 2018-08-09T09:06:58.689Z · LW(p) · GW(p)

This has shifted my views very positively in favor of total log utilitarianism, as it dissolves quite cleanly the Repugnant Conclusion. Great post!

## comment by Charlie Steiner · 2018-08-09T17:18:20.021Z · LW(p) · GW(p)

Nice example! I still think the most reasonable objection to something like total utilitarianism is that population ethics is a matter of preferences, and my preferences are complicated. If humans prefer to set aside part of the universe as a nature preserve rather than devoting those resources to more humans, then so be it - human preferences are the only preferences we've got.

## comment by steven0461 · 2018-08-10T19:12:19.611Z · LW(p) · GW(p)

The repugnant conclusion just says "a sufficiently large number of lives barely worth living is preferable to a smaller number of good lives". It says nothing about resources; e.g., it doesn't say that the sufficiently large number can be attained by redistributing a fixed supply.

Replies from: shminux

## ↑ comment by shminux · 2018-08-12T02:57:19.460Z · LW(p) · GW(p)

Presumably "if other things are equal" implies equal resources.

**EDIT**: The original statement by Parfit does not reference any resource constraint explicitly, at least his original example of A -> A+ -> B certainly does not seem to mention it. Neither does the conclusion that "any loss in the quality of lives in a population can be compensated for by a sufficient gain in the quantity of a population." Disclaimer: I have not read the primary sources.

## ↑ comment by steven0461 · 2018-08-12T14:29:03.166Z · LW(p) · GW(p)

I think in the philosophy literature it's generally interpreted as independent of resource constraints. A quick scan of the linked SEP article seems to confirm this. Apart from the question of what Parfit said, it makes a lot of sense to consider the questions of "what is good" and "what is feasible" separately. And people find the claim that sufficiently many barely-good lives are better than fewer happy lives plenty repugnant even if it has no direct implications for population policy. (In my opinion this is largely because a life barely worth living is better than they imagine.)

## comment by Richard_Ngo (ricraz) · 2018-08-10T14:16:18.326Z · LW(p) · GW(p)

Firstly, excellent post! A cool idea, well-written, and very thought-provoking. Some thoughts on the robustness of the result:

Suppose that every individual were able to produce at least 1 unit of resources throughout their life. Then total utility is monotonically increasing in the number of people, and you have the repugnant conclusion again. How likely is this supposition? Assuming we have arbitrarily advanced technology, including AI, humans will be pretty irrelevant to the production of resources like food or compute (if we're in simulation). But plausibly humans could still produce "goods" which are valuable to other humans, like friendship. Let's plug this into your model above and see what happens. I'll assume that humans need at least 1/K physical resources to survive, but otherwise their utility is logarithmic in the amount of physical resources + friendship that they get. Also, assume that every person receives as much friendship as they produce. So

U = N·log₁₀(R/N + F), with an upper bound of N = KR. When F >= 1, then the optimal value of N is in fact KR, and so utility per person is

log₁₀(1/K + F), which can be arbitrarily close to 0 (depending on K). When 0 <= F < 1, then I think you get something like your original result again, but I'm not sure. Empirically, I expect that the best friends can produce F >> 1, i.e. if you had nothing except just enough food/water to keep yourself alive, but also you were the sole focus of their friendship, then you'd consider your life well worth living. Idk about average production, but hopefully that'll improve in the future too. In summary, friendship may make things repugnant :P
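Taking the model as U(N) = N·log₁₀(R/N + F) with the population capped at N = KR (a reconstruction of the formula above, so the exact form is an assumption), the F >= 1 claim checks out numerically:

```python
import math

R, K, F = 100.0, 10.0, 1.0
U = lambda N: N * math.log10(R / N + F)  # total utility; survival requires N <= K*R

Ns = [k * 10 for k in range(1, 101)]  # N = 10, 20, ..., 1000 = K*R
# Total utility increases all the way up to the cap when F >= 1...
assert all(U(a) < U(b) for a, b in zip(Ns, Ns[1:]))
# ...and utility per person at the cap, log10(1/K + F), is close to 0.
assert U(K * R) / (K * R) < 0.05
```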

Here's another version of the repugnant conclusion and your argument. Suppose that the amount of resources used by each person is roughly fixed per unit time (because, say, we're living in a simulation), but that there's a period of infancy and early childhood which uses up resources and isn't morally valuable. Then the resources used up by one person are I + L, where I is the length of their infancy and L is the length of the rest of their life, but the utility gained from their life is a function of L alone. What function of L? Perhaps you think that it's linear in L - for example, if you're a hedonic utilitarian, it's plausible that people will be just as happy later in their life as earlier. (In fact, right now, old people tend to be happiest.) If so, you must endorse the anti-repugnant conclusion, where you'd prefer a population with very few very long-lived people, to minimise the fixed cost of infancy. If you're a preference utilitarian, maybe you think that there's diminishing marginal utility to having your preferences satisfied. It then follows that there's an optimal point at which to kill people, which isn't too soon (otherwise you're incurring high fixed costs) and isn't too late (otherwise people's marginal utility diminishes too much) - a conclusion which is analogous to your result.

## comment by Jan_Kulveit · 2018-08-09T17:30:49.369Z · LW(p) · GW(p)

Check my post on Nonlinear perception of happiness [LW(p) · GW(p)] - the logarithm is assumed to be in a different place, but the part about implication to ethics contains a version of this argument.

## comment by Unnamed · 2018-08-10T21:03:19.286Z · LW(p) · GW(p)

I don't know to what extent have others explored the connection between total utilitarianism and equality

Diminishing marginal utility is one of the standard arguments for redistribution.

Replies from: shminux

## ↑ comment by shminux · 2018-08-12T05:21:03.534Z · LW(p) · GW(p)

It is, but this is a special case: it has to diminish very very quickly, otherwise the repugnant conclusion holds.

Replies from: Unnamed, Unnamed

## ↑ comment by Unnamed · 2018-08-13T03:26:14.442Z · LW(p) · GW(p)

Total utilitarianism does imply the repugnant conclusion, very straightforwardly.

For example, imagine that world A has 1000000000000000000 people each with 10000000 utility and world Z has 10000000000000000000000000000000000000000 people each with 0.0000000001 utility. Which is better?

Total utilitarianism says that you just multiply. World A has 10^18 people x 10^7 utility per person = 10^25 total utility. World Z has 10^40 people x 10^-10 utility per person = 10^30 total utility. World Z is way better.

This seems repugnant; intuitively world Z is much worse than world A.

Parfit went through cleverer steps because he wanted his argument to apply more generally, not just to total utilitarianism. Even much weaker assumptions can get to this repugnant-seeming conclusion that a world like Z is better than a world like A.

The point is that lots of people are confused about axiology. When they try to give opinions about population ethics, judging in various scenarios whether one hypothetical world is better than another, they'll wind up making judgments that are inconsistent with each other.

## comment by shminux · 2018-08-12T06:15:17.391Z · LW(p) · GW(p)

This feels like one of those cases where there ought to be a mistake somewhere, given how many eyes have been on the problem, and how simple this example is. Yet I cannot find any errors. All it takes for the repugnant conclusion to be avoided is the logarithmic (or slower, like log(log(R/N))) dependence of utility on the available resources. I'm impressed. Maybe someone can update the relevant wiki entry.

Replies from: jessica.liu.taylor, Jan_Kulveit

## ↑ comment by jessicata (jessica.liu.taylor) · 2018-08-12T08:40:51.419Z · LW(p) · GW(p)

The mere addition paradox is an argument that, if you accept some reasonable-seeming axioms about population ethics, then for any positive happiness level h, if we start from a population where everyone has happiness level h, then for any positive happiness level h' < h, there is a larger population where everyone has happiness h' that is preferable to the original population. Most people find this counterintuitive. The interesting thing is that either the counterintuitive result is true, or one of the assumptions is false.

This argument continues to apply regardless of how happiness scales with resources. The resource argument implies that the problem is not faced as stated when resources are fixed and happiness is logarithmic in resources, but (a) artificial thought experiments are useful if we are trying to formalize ethics, and (b) the problem is still faced if resources increase at the right rate as population increases. There is no need to update the Wikipedia page.

Replies from: Jsevillamol, shminux

## ↑ comment by Jsevillamol · 2018-08-12T21:06:58.137Z · LW(p) · GW(p)

As I understand it, the idea behind this post dissolves the paradox because it allows us to reframe it in terms of possibility: for a fixed level of resources, there is a number of people for which equal distribution of resources produces optimal sum of utility.

Sure, you could get a greater sum from an enormous repugnant population at subsistence level, but creating it would take more resources than you have.

And what is more: even in that situation there is always another non-aberrant distribution of resources that uses the same total quantity of resources as the repugnant distribution and produces a greater sum of utility.

Replies from: jessica.liu.taylor

## ↑ comment by jessicata (jessica.liu.taylor) · 2018-08-13T01:54:09.961Z · LW(p) · GW(p)

It doesn't dissolve the paradox if it doesn't show that you can construct a preference function over populations that doesn't have any counterintuitive properties (while the repugnant conclusion argument implies it must have at least one counterintuitive property). At best, it shows that the relevant choices are unlikely to be faced in reality, such that even a "bad" preference function performs decently in the real world. But that doesn't resolve the philosophical problem, much less dissolve [LW · GW] it.

I don't think it even shows that the relevant choices are unlikely to be faced in reality, since situations where you can get more resources by having a higher population are really common. (Consider: a higher population contains more workers)

Replies from: Jsevillamol

## ↑ comment by Jsevillamol · 2018-08-13T09:04:44.695Z · LW(p) · GW(p)

It dissolves the RC for me, because it answers the question "What kind of cognitive algorithm, as felt from the inside, would generate the observed debate about 'the Repugnant Conclusion'?" [grabbed from your link, substituted "repugnant conclusion" for "free will"].

I feel after reading that post that I no longer find the RC counterintuitive; instead it feels self-evident. I can channel the repugnancy to aberrant distributions of resources.

But granted, most people I have talked to do not feel the question is dissolved through this. I would be curious to see how many people stop being intuitively confused about RC after reading a similar line of reasoning.

The point about more workers => more resources is also an interesting thought. We could probably expand the model to vary resources with the number of workers, and I would expect a similar conclusion to hold for a reasonable model: the optimal sum of utility is not achieved at the extremes, but in a happy medium. Either that, or each additional worker produces so much that even utility per capita grows as the number of workers goes to infinity.

Replies from: jessica.liu.taylor

## ↑ comment by jessicata (jessica.liu.taylor) · 2018-08-13T10:04:07.795Z · LW(p) · GW(p)

I don't see how the post says anything about the cognitive algorithm generating the repugnant conclusion? It's just saying the choices are unlikely to be faced in reality. I think people thinking through the repugnant conclusion are not necessarily thinking about resources, they might just be thinking about happiness levels (that's how it's usually stated, anyway).

Here's a simple model. Total amount of resources = population + sqrt(population). Now we get a repugnant conclusion, it's better to have as high a population as possible, and everyone is living off of 1 + epsilon resources.
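This model is easy to check: with resources N + √N split equally, each person gets 1 + 1/√N, and total utility grows without bound:

```python
import math

def total_utility(N):
    per_person = 1 + 1 / math.sqrt(N)  # resources R = N + sqrt(N), split equally
    return N * math.log10(per_person)

# Total utility grows without bound as N increases: a repugnant conclusion,
# with everyone living on 1 + epsilon resources.
assert all(total_utility(10**k) < total_utility(10**(k + 1)) for k in range(8))
```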

Replies from: Jsevillamol

## ↑ comment by Jsevillamol · 2018-08-13T11:43:13.757Z · LW(p) · GW(p)

The movement I was going through when thinking about the RC is something akin to "huh, happiness/utility is not a concept that I have an intuitive feeling for, so let me substitute happiness/utility for resources. Now clearly distributing the resources so thinly is very suboptimal. So let's substitute back resources for utility/happiness and reach the conclusion that distributing the utility/happiness so thinly is very suboptimal, so I find this scenario repugnant."

Yeah, the simple model you propose beats my initial intuition. It feels very off though. Maybe it's missing diminishing returns, and I am primed to expect diminishing returns?

## ↑ comment by shminux · 2018-08-13T04:36:50.637Z · LW(p) · GW(p)

This is a novel argument about the applicability of the repugnant conclusion for a certain form of the dependence of happiness on wealth. Faster-than-logarithmic growth does not let one avoid the conclusion even if the resources are constrained. It looks like a publishable result, never mind a mention in Wikipedia.

Replies from: jessica.liu.taylor

## ↑ comment by jessicata (jessica.liu.taylor) · 2018-08-13T05:54:48.768Z · LW(p) · GW(p)

Logarithmic growth does not let you avoid it either if resources increase as population increases at a certain rate.

The logarithm function isn't even special here, it could just as well be that happiness = (resources - 1)^(1/3).

Replies from: shminux

## ↑ comment by shminux · 2018-08-13T19:12:28.734Z · LW(p) · GW(p)

The point of the post was to investigate the reallocation of **existing resources** to maximize total utility by creating more less-happy people, and whether this can evade the mere addition paradox. In case of logarithmic dependence of utility on resources available, the utility of this reallocation peaks at a certain "optimal happiness," thus evading the repugnant conclusion. Any faster growth, and the repugnant conclusion survives. Not sure what -1 in (resources - 1)^(1/3) does, haven't done the calculation...

## ↑ comment by jessicata (jessica.liu.taylor) · 2018-08-13T20:27:17.363Z · LW(p) · GW(p)

Check the math on the formula I gave, it also peaks, and it grows faster than log.

I don't think it's that interesting if the paradox is not faced with a fixed level of resources, since the paradox still makes it hard to construct an intuitive formalization of our preferences about populations that gives intuitive answers to a variety of possible problems, and besides resources aren't fixed. See this post [LW · GW].

## ↑ comment by Jan_Kulveit · 2018-08-13T08:13:59.462Z · LW(p) · GW(p)

Well, I posted the same argument in January. Unfortunately (?) with a bunch of other more novel ideas and without plots and (trivial) bits of calculus. Unfortunately (?) I did not make the bold claim that the paradox is resolved or dissolved, but just the claim: *In the real world we are always resource constrained and the question must be "what is the best population given the limited resources", therefore the paradox is resolved for most practical purposes.*

## ↑ comment by steven0461 · 2018-08-14T20:34:23.255Z · LW(p) · GW(p)

If a moral hypothesis gives the wrong answers on some questions that we don't face, that suggests it also gives the wrong answers on some questions that we do face.

## comment by steven0461 · 2018-08-12T14:59:58.304Z · LW(p) · GW(p)

One line of attack against the idea that we should reject the repugnant conclusion is to ask why the lives are barely worth living. If it's because the many people have the same good lives but they're p-zombies 99.9999% of the time, I can easily believe that increasing the population until there's more total conscious experiences makes the tradeoff worthwhile.

Replies from: steven0461## ↑ comment by steven0461 · 2018-08-12T15:35:02.272Z · LW(p) · GW(p)

In thought experiments about utilitarianism, it's generally a good idea to consider composite beings. A bus is a utility monster in traffic. If it has 30 people in it, its interests count 30 times as much. So maybe there could be things we'd think of as one mind whose internals mapped onto the internals of a bus in a moral-value-preserving way. (I guess the repugnant conclusion is about utility monsters but for quantity instead of quality.)