Population ethics and the value of variety
post by cousin_it · 2024-06-23T10:42:21.402Z · LW · GW · 11 comments
Problems in population ethics (are 2 lives at 2 utility better than 1 life at 3 utility?) are similar to problems about the lifespan of a single person (is it better to live 2 years at 2 utility per year than 1 year at 3 utility per year?).
On the surface, this analogy seems to favor total utilitarianism. 2 years at 1 happiness seems obviously better than 1 year at 1.01 happiness, to the point that average utilitarianism starts to look just silly. And 4 years at 0.9 seems better still. In the normal range, without getting into astronomical numbers, it does seem that multiplying utility/year by years is a good first approximation of what our intuition does.
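Here's that arithmetic spelled out as a minimal sketch (the two helper functions are just illustrations of the two aggregation rules, nothing more):

```python
# Total view: multiply utility/year by years. Average view: only the rate matters.

def total(per_year: float, years: float) -> float:
    return per_year * years

def average(per_year: float, years: float) -> float:
    return per_year  # years drop out entirely

# 2 years at 1.0 happiness vs 1 year at 1.01 happiness:
assert total(1.0, 2) > total(1.01, 1)      # total: 2.0 > 1.01, the longer life wins
assert average(1.0, 2) < average(1.01, 1)  # average: 1.0 < 1.01, the shorter life wins

# And 4 years at 0.9 beats both on the total view:
assert total(0.9, 4) > total(1.0, 2)       # 3.6 > 2.0
```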
But does this mean total utilitarianism is our best approximation to population ethics as well? Not so fast. Let's imagine taking an amnesia pill after each year, or after each month. To me at least, it feels like the intuition in favor of summing gets weaker, and the intuition in favor of averaging gets slightly stronger.
Why would amnesia make life less valuable? Well, one scary thing about amnesia is that if we don't remember past experience, we're just going to repeat it. And repeating a past experience feels less valuable than experiencing something new. Let's test that intuition! Imagine 10 equally happy people, all exactly alike. Now imagine 10 equally happy people, all different. The latter feels more valuable to me, and I think to many others as well.
So if we value variety, and that affects our intuition on whether utilities should be summed or averaged, does that carry over to other population ethics problems? I think so, yeah. When someone mentions "100 identical copies of our universe running in parallel", we find it hard to care about them much more than about just 1 copy. But if they'd said all these universes were as good as ours but interestingly different, ooh! That would sound much more appealing. Same if someone mentions "adding many lives barely worth living": if they added that all these above-zero lives were interestingly different, it would no longer sound like such a bad bargain. And so on.
An even more interesting angle is: if we value variety, variety of what? It's tempting to answer "variety of experience", because we're used to thinking of happiness as something you experience. But that's not quite the full answer. Eating the same flavor of ice cream twice, with amnesia in between, is plausibly twice as valuable as eating it once - it doesn't matter that you forgot what you ate yesterday, you still want to eat today. Whereas watching a great movie twice with amnesia in between feels like a more iffy proposition - the amnesia starts to feel like a loss of value. But the real flip happens when we stop thinking about consumption (of things, experiences, etc.) and start thinking about creation. How would you like to be a musician who wrote a song, then got amnesia and wrote the same song again? This is less like a summing of utilities and more like a horror show. So with creative activities we value variety very much indeed.
And this, I think, gets pretty close to the center of the disagreement between total and average utilitarianism. Maybe they don't actually disagree; they just apply to different things we value. There are repeatable pleasures like eating ice cream, which it makes sense to add up, but they're only part of what we value. And then there are non-repeatable good things, like seeing a movie for the first time, or creating something new. We value these things a lot, but for their utility to add up, they must be varied, not merely repeated.
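To make the distinction concrete, here's a toy model where repeatable pleasures add up linearly while non-repeatable goods only add up when varied - the discount and the set of repeatable goods are assumptions for illustration only:

```python
from collections import Counter

REPEATABLE = {"ice cream"}  # consumption goods: repetition keeps its full value

def value(experiences: list[str], base: float = 1.0) -> float:
    """Sum utility over experiences, discounting repeats of non-repeatable goods."""
    total = 0.0
    seen = Counter()
    for e in experiences:
        if e in REPEATABLE or seen[e] == 0:
            total += base  # first time, or a repeatable pleasure: full value
        # a repeated non-repeatable good adds nothing in this toy model
        seen[e] += 1
    return total

print(value(["ice cream", "ice cream"]))      # 2.0: eating twice adds up
print(value(["great movie", "great movie"]))  # 1.0: the amnesia rewatch adds nothing
print(value(["great movie", "other movie"]))  # 2.0: variety restores additivity
```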
11 comments
comment by Charlie Steiner · 2024-06-26T23:35:29.490Z · LW(p) · GW(p)
My favorite slogan: Population ethics is aesthetics.
If the people living the future at the time would approve of it, but to you right now it seems awful (e.g. just repeating the same happy hour over and over for eternity), you're totally within your rights to not want that future to happen. Not because of some super-deep further justification, but just because it's okay for you to have preferences about what the future should look like.
↑ comment by Lukas_Gloor · 2024-06-27T00:47:43.907Z · LW(p) · GW(p)
I wrote a long post [EA · GW] last year saying basically that.
↑ comment by cousin_it · 2024-06-27T06:49:17.445Z · LW(p) · GW(p)
I think that phrase is right "to zeroth order": one can imagine an agent with any preferences about population ethics. Then "to first order", I think the choice between average vs total utilitarianism does have a right answer (see link in my reply to Cleo). And then there are "second order" corrections like value of variety, which seem more subjective, but maybe there are right answers to be found about them as well.
comment by Cleo Nardo (strawberry calm) · 2024-06-24T20:30:06.834Z · LW(p) · GW(p)
Problems in population ethics (are 2 lives at 2 utility better than 1 life at 3 utility?) are similar to problems about the lifespan of a single person (is it better to live 2 years at 2 utility per year than 1 year at 3 utility per year?).
This correspondence is formalised in the "Live Every Life Once" principle, which states that a social planner should make decisions as if they face the concatenation of every individual's life in sequence.[1] So, roughly speaking, the "goodness" of a social outcome in which individuals $1, \dots, n$ face the personal outcomes $x_1, \dots, x_n$ is the "desirability" of the single personal outcome $x_1 \oplus x_2 \oplus \cdots \oplus x_n$. (Here, $x \oplus y$ denotes the concatenation of personal outcomes $x$ and $y$.)
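Here's a minimal code sketch of that decision rule (the `Life` type and the `personal_utility` argument are illustrative stand-ins for the formal objects above):

```python
from typing import Callable, Sequence

Life = Sequence[str]  # a personal outcome, modelled as a sequence of experiences

def concatenate(lives: Sequence[Life]) -> list[str]:
    """x1 ⊕ x2 ⊕ ... ⊕ xn: live every life once, in sequence."""
    return [experience for life in lives for experience in life]

def lelo_prefers(pop_a: Sequence[Life], pop_b: Sequence[Life],
                 personal_utility: Callable[[Life], float]) -> bool:
    """Rank population A above population B iff a single self-interested
    person would prefer living all of A's lives back-to-back over all of B's."""
    return personal_utility(concatenate(pop_a)) > personal_utility(concatenate(pop_b))
```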
The LELO principle endorses somewhat different choices than total utilitarianism or average utilitarianism.
Here are three examples (two of which you mention):
(1) Novelty
As you mention, it values novelty where the utilitarian principles don't. This is because self-interested humans value novelty in their own lives.
Thirdly, [Monoidal Rationality of Personal Utility][2] rules out path-dependent values.
Informally, whether I value a future $x$ more than a future $y$ must be independent of my past experiences. But this is an unrealistic assumption about human values, as illustrated in the following examples. If $m$ denotes reading Moby Dick and $o$ denotes reading Oliver Twist, then humans seem to value $m \oplus m$ less than $m \oplus o$, but value $m$ more than $o$. This is because humans value reading a book higher if they haven't already read it, due to an inherent value for novelty in reading material.
— Aggregative principles approximate utilitarian principles [LW · GW]
In other words, if the self-interested human's personal utility function places inherent value on intertemporal heterogeneity of some variable (e.g. reading material), then the social utility function that LELO exhibits will place an inherent value on the interpersonal heterogeneity of the same variable. Hence, it's better if Alice and Bob read different books than the same book.
(2) Tradition
Note that the opposite effect also occurs:
Alternatively, if $a$ and $b$ denote being married to two different people, then humans seem to value $a \oplus a$ more than $a \oplus b$, but value $a$ less than $b$. This is because humans value being married to someone for a decade higher if they've already been married to them, due to an inherent value for consistency in relationships.
— ibid.
That is, if the personal utility function places inherent value on intertemporal homogeneity of some variable (e.g. religious practice), then the social utility function that LELO exhibits will place an inherent value on the interpersonal homogeneity of the same variable. Hence, it's better if Alice and Bob practice the same religion than different ones. So LELO can account for valuing both diversity and tradition, whereas total/average utilitarianism can't do either.
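Both effects drop out of a single toy path-dependent utility function; the categories and weights below are assumptions chosen purely for illustration:

```python
NOVELTY_GOODS = {"book"}      # heterogeneity valued: a reread is worth less
TRADITION_GOODS = {"spouse"}  # homogeneity valued: consistency is worth more

def personal_utility(life: list[tuple[str, str]]) -> float:
    """Each experience is a (category, token) pair, e.g. ("book", "Moby Dick")."""
    total = 0.0
    seen: set[tuple[str, str]] = set()
    for category, token in life:
        if category in NOVELTY_GOODS:
            total += 1.0 if (category, token) not in seen else 0.2  # repeats discounted
        elif category in TRADITION_GOODS:
            total += 1.5 if (category, token) in seen else 1.0      # repeats rewarded
        else:
            total += 1.0
        seen.add((category, token))
    return total

# Under LELO, Alice-then-Bob is evaluated as one concatenated life:
same_book  = [("book", "Moby Dick"), ("book", "Moby Dick")]
diff_books = [("book", "Moby Dick"), ("book", "Oliver Twist")]
assert personal_utility(diff_books) > personal_utility(same_book)     # diversity wins

same_spouse  = [("spouse", "Pat"), ("spouse", "Pat")]
diff_spouses = [("spouse", "Pat"), ("spouse", "Sam")]
assert personal_utility(same_spouse) > personal_utility(diff_spouses) # tradition wins
```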
(3) Compromise on repugnant conclusion
You say "On the surface, this analogy seems to favor total utilitarianism." I think that's mostly right. LELO's response to the Repugnant Conclusion is somewhere between total and average utilitarianism, leaning to the former.
Formally, when comparing a population of $n$ individuals living lives $x_1, \dots, x_n$ with personal utilities $u(x_1), \dots, u(x_n)$ to an alternative population of $m$ individuals living lives $y_1, \dots, y_m$, LELO ranks the first population as better if and only if a self-interested human would prefer to live the combined lifespan $x_1 \oplus \cdots \oplus x_n$ over $y_1 \oplus \cdots \oplus y_m$. Do people generally prefer a longer life with moderate quality, or a shorter but sublimely happy existence? Most people's preferences likely lie somewhere in between the extremes. This is because the personal utility of a concatenation of personal outcomes, $u(x \oplus y)$, is not precisely the sum $u(x) + u(y)$ of the personal utilities of the outcomes being concatenated.
Hence, LELO endorses a compromise between total and average utilitarianism, better reflecting our normative intuitions. While not decisive, it is a mark in favour of aggregative principles as a basis for population ethics.
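One way to see the compromise numerically: suppose, purely as an illustrative assumption, that the personal utility of a concatenated life is quality × years^alpha, so that alpha = 1 recovers the total view and alpha = 0 the average view:

```python
def lifetime_utility(quality: float, years: float, alpha: float) -> float:
    return quality * years ** alpha  # assumed functional form, for illustration

# Repugnant-conclusion-style comparison under LELO:
# population A = 1 person at quality 10   -> one life: 1 year at quality 10
# population B = 100 people at quality 0.3 -> one life: 100 years at quality 0.3
for alpha in (1.0, 0.5, 0.0):
    a = lifetime_utility(10.0, 1, alpha)
    b = lifetime_utility(0.3, 100, alpha)
    print(f"alpha={alpha}: A={a:.2f}, B={b:.2f} -> prefer {'A' if a > b else 'B'}")

# alpha=1.0 (additive, total view):       prefers B (30.00 > 10.00)
# alpha=0.5 (diminishing returns):        prefers A (10.00 > 3.00)
# alpha=0.0 (length-blind, average view): prefers A (10.00 > 0.30)
```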
[1] See:
- Myself (2024), "Aggregative Principles of Social Justice [LW · GW]"
- Loren Fryxell (2024), "XU"
- MacAskill (2022), "What We Owe the Future"
[2] MRPU is a condition that states that the personal utility function $u$ of a self-interested human satisfies the axiom $u(x \oplus y) = u(x) + u(y)$, which is necessary for LELO to be mathematically equivalent to total utilitarianism.
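A quick sanity check of that equivalence, using an assumed additive utility over a toy set of experiences:

```python
per_experience = {"good day": 1.0, "great day": 2.0, "bad day": -0.5}

def u(life: list[str]) -> float:
    """An additive personal utility: u(x ⊕ y) == u(x) + u(y) by construction."""
    return sum(per_experience[e] for e in life)

x, y = ["good day", "great day"], ["bad day", "good day"]
assert u(x + y) == u(x) + u(y)  # the MRPU axiom holds for this u

population = [x, y]
lelo_score  = u([e for life in population for e in life])  # live every life once
total_score = sum(u(life) for life in population)          # total utilitarianism
assert lelo_score == total_score  # LELO and the total view agree
```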
↑ comment by cousin_it · 2024-06-24T22:17:48.564Z · LW(p) · GW(p)
Yeah. I decided some time ago that total utilitarianism is in some sense more "right" than average utilitarianism, because of some variations on the Sleeping Beauty problem [LW(p) · GW(p)]. Now it seems the right next step is taking total utilitarianism and adding corrections for variety / consistency / other such things.
↑ comment by Ape in the coat · 2024-06-27T07:22:55.190Z · LW(p) · GW(p)
That's a neat and straightforward way to combine the average and total utilitarian approaches. It still doesn't sound quite right to me - LELO seems to be somewhere around 2/3 total and 1/3 average, while to my intuition the opposite ratio seems preferable - but it's definitely an interesting direction to explore.
comment by Vladimir_Nesov · 2024-06-29T18:17:24.461Z · LW(p) · GW(p)
Incidentally, since weights need to be maintained in hardware available for computation, spinning up another thread of an existing person might be 10,000 times cheaper than instantiating a novel person.
comment by cubefox · 2024-06-24T11:20:46.842Z · LW(p) · GW(p)
Imagine 10 equally happy people, all exactly alike. Now imagine 10 equally happy people, all different. The latter feels more valuable to me, and I think to many others as well.
But each life does seem equally valuable to the person living it, so it isn't clear why external evaluations should matter here. Imagine we can rescue either 11 identical people or 10 different people. The identical people would argue that they should be rescued because they are one more.
Are the 11 identical people like a single person who lives 11 times as long as normal and repeats the same experience 11 times, each time forgetting everything that happened before? Arguably not, because the single person would want neither the amnesia nor the repeated experiences, similar to how we wouldn't want to unknowingly die in our sleep. But the 11 people are probably not much bothered by the existence of their copies.
↑ comment by cousin_it · 2024-06-24T12:57:01.401Z · LW(p) · GW(p)
To me it actually feels plausible that identical people might value themselves less. If there are 100 identical people and 1 different person, the identical ones might prefer saving the different one rather than one of their own. But maybe not everyone has that intuition.
↑ comment by Dagon · 2024-06-24T19:51:41.693Z · LW(p) · GW(p)
Equally happy is very different from identical experiences. 11 non-diverging copies of a person is (IMO) more like one person with especially intense qualia. 11 superficially similar people are probably not actually "the same" enough for declining marginal utility to kick in.
comment by Dagon · 2024-06-23T15:19:37.962Z · LW(p) · GW(p)
Thanks for this - it is one of the reasons I am not a Utilitarian. I just don't believe that any simple, linear, aggregation function will capture my values (or any human values I've seen in action or can sympathize with), either within a life or across multiple lives.
You get closer when you start talking about declining marginal utility based on "same" or "similar" things - not addition, but something logarithmic-ish. Even so, it's very hard to make that legible and compelling. It's also not currently acceptable to acknowledge that indexical attributes (it matters WHICH human we're talking about) are part of most people's actual value function.
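Here's a minimal sketch of the logarithmic-ish aggregation I have in mind (the functional form is just one assumption that fits the description):

```python
import math

def value_of_similar_lives(n: int, per_life: float = 1.0) -> float:
    """Value of n near-identical lives grows like log(n), not like n."""
    return per_life * math.log(1 + n)

print(value_of_similar_lives(1))    # ~0.69
print(value_of_similar_lives(10))   # ~2.40: far less than 10x a single life
print(value_of_similar_lives(100))  # ~4.62: strongly diminishing returns to sameness
```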