It's hard to use utility maximization to justify creating new sentient beings

post by dynomight · 2020-10-19T19:45:39.858Z · LW · GW · 14 comments

This is a link post for https://dyno-might.github.io/2020/10/19/its-hard-to-use-utility-maximization-to-justify-creating-new-sentient-beings/

Contents

  The puppy offer
  The rat king
  Simulator settings
  The normalization problem
  More realistic examples
  Conclusion
14 comments

Cedric and Bertrand want to see a movie. Bertrand wants to see Muscled Dudes Blow Stuff Up. Cedric wants to see Quiet Remembrances: Time as Allegory. There's also Middlebrow Space Fantasy. They are rational but not selfish: they each care about the other's happiness as much as their own. What should they see?

They decide to write down how much pleasure each movie would provide them:

                     Bertrand   Cedric
Muscled Dudes            8         2
Quiet Remembrances       1         7
Middlebrow               6         5

Since Middlebrow provides the most total pleasure, they see Middlebrow.
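
Their procedure is just: sum each person's utility for every option, then pick the option with the biggest sum. Here's a minimal sketch of that calculation in Python (the numbers come from the table above; the names are only illustrative):

```python
# Utilities from the table above: movie -> (Bertrand, Cedric)
utilities = {
    "Muscled Dudes": (8, 2),
    "Quiet Remembrances": (1, 7),
    "Middlebrow": (6, 5),
}

# Choose the movie that maximizes total utility across both viewers.
best_movie = max(utilities, key=lambda movie: sum(utilities[movie]))
print(best_movie)  # Middlebrow (total 11, beating 10 and 8)
```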

The puppy offer

A few months later, they are walking in the park and run into their neighbor Maria. She has an insanely cute dog. Cedric admits it's adorable, but Bertrand completely loses his mind. As he rolls around on the ground with the dog, they have the following conversation:

Maria: "I was just going to have her sterilized. But if you want I can have her bred and give you the puppy."

Bertrand: "Puppy! YES!"

Cedric: "It would be nice, but it would also be a lot of work..."

Bertrand: "Eeeeee! Dog, want."

Maria: "It doesn't make any difference to me. Think about it!"

A few hours later, Bertrand has recovered his sanity, and they decide to use the same strategy they used to choose a movie. They both quantify the pleasure the dog would provide them and subtract the pain of taking care of it. Bertrand would see a big improvement, while Cedric would come out slightly worse off.

                  Bertrand   Cedric
Don't get a dog      10        10
Get a dog            14         8

Based on this, they decide to get the dog. They invite Maria over and proudly show her the calculations.

Cedric: "As you can see from our calculations, we are geniuses and..."

Bertrand: "GIVE US THE PUPPY."

Maria: "But guys!"

Cedric: "Yes?"

Maria: "Your apartment is kind of... small."

Bertrand: "So..."

Maria: "So will the dog be happy here? Shouldn't you include the dog's utility in your calculations?"

They all agree the dog should be included, and would be moderately happy in their apartment. They add a new column to their table, but aren't sure what number to put for the dog in the scenario where it doesn't exist.

                  Bertrand   Cedric   New Dog
Don't get a dog      10        10       ???
Get a dog            14         8         5

Bertrand: "Easy! The total with the dog is 27. The total without the dog is only 20."

Maria: "But you can't just add a 0 for the scenario where the dog doesn't exist. As it is now, your average happiness is 10. If you add the dog, the average happiness drops to 9."

Cedric: "Wait..."

Bertrand: "Are we supposed to be maximizing the sum or the average?"

Maria: "Let me tell you a couple stories..."

The rat king

You're a rat. You move onto a pristine island with your gorgeous and adoring rat-spouse. You relax on the beach, read poetry and eat coconuts. You are about to make love when you are struck by a vision: What happens when you keep breeding? As the population grows, resources will be exhausted. Cultural and genetic evolution will favor ruthlessness, and gradually prune away any focus on kindness or beauty. 100 generations from now, the island will be an overcrowded hellscape of inbreeding and cannibalism. Their lives will be worse than death. Should you go ahead and procreate?

Simulator settings

You are the Singularity. You've decided to start running some human simulations. You have a huge but finite amount of computational power. You can simulate 1,000 humans on 'ultra' settings, meaning every whim is attended to and the simulated humans experience nothing but joy and contentment. Or, you can simulate 10,000 humans on 'low' settings, meaning they mostly wade around in mud and try not to starve. All simulations are conscious. Which is better?
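
To see how the verdict depends on the aggregation rule, attach some purely illustrative numbers (not from the post): say each 'ultra' human gets utility 100 and each 'low' human gets utility 15. Then

$$\text{total: } 1{,}000 \times 100 = 100{,}000 \;<\; 10{,}000 \times 15 = 150{,}000, \qquad \text{average: } 100 > 15.$$

Maximizing the total says run 'low'; maximizing the average says run 'ultra'.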

The normalization problem

Bertrand and Cedric are basically attempting to make decisions using utilitarianism. It could be summarized as:

"Chose the action that maximizes happiness for everyone affected."

So what's the problem? Why can't utilitarianism cope with the scenarios above?

The issue is this: Utilitarianism tells you how to choose among alternatives when the population is fixed. Suppose there are $n$ individuals, and let $u_i(a)$ be the utility individual $i$ gets if action $a$ is taken. Then utilitarianism says the ethically correct action is the one that maximizes the total utility. That is, we should choose something like

$$a^* = \arg\max_a \sum_{i=1}^{n} u_i(a).$$

This works great for choosing what movie to see. But with the puppy offer, the number of individuals depends on the choice we make. Suppose that there are $n_a$ individuals for action $a$. (So $n_{\text{don't get a dog}} = 2$ and $n_{\text{get a dog}} = 3$.) Mathematically speaking, it's still very easy to define the total utility:

$$U_{\text{total}}(a) = \sum_{i=1}^{n_a} u_i(a).$$

But is that what we want? If you maximize the total, you'll probably end up with a huge number of not-so-happy individuals. As Maria pointed out, you could also optimize the average utility:

$$U_{\text{avg}}(a) = \frac{1}{n_a} \sum_{i=1}^{n_a} u_i(a).$$

This can also give results you might not like. Is a single very-very happy person really better than 1000 very happy people?

Of course, if you want, you can define something in the middle. For example, you could use a square root:

$$U_{\text{sqrt}}(a) = \frac{1}{\sqrt{n_a}} \sum_{i=1}^{n_a} u_i(a).$$

Sure, this provides some kind of trade-off between population size and average utility. But the square root feels extremely arbitrary. Why not some other normalization between $1$ and $n_a$?

When the population is fixed, none of these distinctions matter. The same action is best regardless of the variant. But when the population changes, there's just no single answer for the "right" way to apply utilitarianism.
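
To make the ambiguity concrete, here's a small Python sketch (names are only illustrative) that scores the puppy decision from the table above under all three normalizations:

```python
import math

# Utilities from the puppy table: each option maps to the utilities of
# everyone who exists under that option.
options = {
    "don't get a dog": [10, 10],   # Bertrand, Cedric
    "get a dog": [14, 8, 5],       # Bertrand, Cedric, new dog
}

rules = {
    "total": sum,
    "average": lambda us: sum(us) / len(us),
    "sqrt": lambda us: sum(us) / math.sqrt(len(us)),
}

for name, rule in rules.items():
    best = max(options, key=lambda option: rule(options[option]))
    print(f"{name:7s} -> {best}")

# total   -> get a dog        (27 vs 20)
# average -> don't get a dog  (9 vs 10)
# sqrt    -> get a dog        (about 15.6 vs 14.1)
```

The 'total' and 'sqrt' rules say get the dog; the 'average' rule says don't. Same table, three defensible answers.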

More realistic examples

Maybe you can make puppy decisions without a formal ethical system. Maybe you don't have rat kingship in your future. Fine, but there are many real situations today where exactly this issue emerges. Here are two:

- Eating chickens: farmed chickens only exist because people want to eat them. Does that count in favor of farming them (otherwise those chickens would never exist at all) or against it (their lives are miserable)?
- Having children: when deciding whether to become a parent, should the potential child's happiness enter the calculation, and if so, how?

If these questions have answers, “basic utilitarianism” alone doesn't give them.

Conclusion

I don't mean to claim that these problems are irresolvable or even necessarily hard! My point is just that it's difficult to resolve them using utilitarianism. Personally, this gives me pause. I tend to fall back on utilitarianism as a first line of defense for most ethical questions. But if it's so easy to create situations where it gives ambiguous answers, should I trust it in regular situations?


14 comments

Comments sorted by top scores.

comment by simon · 2020-10-20T03:45:46.792Z · LW(p) · GW(p)

People want different things, and the different possible disagreement resolving mechanisms include the different varieties of utilitarianism.

In this view, the fundamental issue is whether you want the new entity to be directly counted in the disagreement resolving mechanism. If the new entity is ignored (except for impact on the utility functions of pre-existing entities, including moral viewpoints if preference utility is used*), then there's no need to be concerned with average v. total utilitarianism.

A general policy of always including the new entity in the disagreement resolving mechanism would be extremely dangerous (utility monsters). Maybe it can be considered safe to include them under limited circumstances, but the Repugnant Conclusion indicates to me that the entities being similar to existing entities is NOT sufficient to make it safe to always include them.

(*) hedonic utility is extremely questionable imo - if you were the only entity in the universe and immortal, would it be evil not to wirehead?

comment by Charlie Steiner · 2020-10-20T06:58:31.198Z · LW(p) · GW(p)

Right - it's a little misleading to cast the decision procedure as if it was some person-independent thing. If you make a decision based on how happy you think the puppy will be, it's not because some universal law forced you to against your will, it's because you care how happy the puppy will be.

If there's some game-theory thing going on where you cooperate with puppies in exchange for them cooperating back (much like how Bertrand and Cedric are cooperating with each other), that's another reason to care about the puppy's preferences, but I don't think actual puppies are that sophisticated.

comment by cousin_it · 2020-10-20T11:12:58.565Z · LW(p) · GW(p)

Sure, but there's still a meaningful question whether you'd prefer many moderately happy puppies or few very happy puppies to exist. Maybe tomorrow you'll think of a compelling intuition one way or the other.

comment by Charlie Steiner · 2020-10-21T00:50:42.400Z · LW(p) · GW(p)

Sure. But it will be my intuition, and not some impersonal law. This means it's okay for me to want things like "there should be some puppies, but not too many," which makes perfect sense as a preference about the universe, but practically no sense in terms of population ethics.

comment by Adele Lopez (adele-lopez-1) · 2020-10-20T03:15:30.381Z · LW(p) · GW(p)

I don't know why the trade-off between population size and average utility feels like it needs to be mathematically justified; that function seems to be as much determined by arbitrary evolutionary selection as the rest of our utility functions.

comment by dynomight · 2020-10-20T15:04:36.947Z · LW(p) · GW(p)

Well, it would be nice if we happened to live in a universe where we could all agree on an agent-neutral definition of what the best actions to take in each situation are. It seems that we don't live in such a universe, and that our ethical intuitions are indeed sort of arbitrarily created by evolution. So I agree we don't need to mathematically justify these things (and maybe it's impossible), but I wish we could!

comment by cousin_it · 2020-10-19T23:16:19.194Z · LW(p) · GW(p)

I just thought of an argument that pulls toward average utilitarianism. Imagine I'm about to read a newspaper which will tell me the average happiness of people on Earth: is it 8000 or 9000 "chocolate equivalent units" per person? I'd much rather read the number 9000 rather than 8000. In contrast, if the newspaper is about to tell me whether the Earth's population is 8 or 9 billion people, I don't feel any strong hopes either way.

Of course there's selfish value in living in a more populous world, more people = more ideas. But I suspect the difficulty of finding good ideas rises exponentially with their usefulness, so the benefit you derive from larger population could be merely logarithmic.

comment by dynomight · 2020-10-20T00:33:27.400Z · LW(p) · GW(p)

If I understand your second point, you're suggesting that part of the reason our intuition says larger populations are better is that larger populations tend to make the average utility higher. I like that! It would be interesting to try to estimate at what human population level average utility would be highest. (In hunter-gatherer or agricultural times, probably very low levels. Today, probably a lot higher?)

comment by FactorialCode · 2020-10-20T00:51:35.518Z · LW(p) · GW(p)

I personally think that something more akin to minimum utilitarianism is more inline with my intuitions. That is, to a first order approximation, define utility as (soft)min(U(a),U(b),U(c),U(d)...) where a,b,c,d... are the sentients in the universe. This utility function mostly captures my intuitions as long as we have reasonable control over everyone's outcomes, utilities are comparable, and the number of people involved isn't too crazy.

comment by Conflux · 2020-10-20T01:48:30.425Z · LW(p) · GW(p)

I think I have a pretty simple solution to this: treat 0 as the point when each being is neither happy nor unhappy. Negative numbers are fine. You can still take the sum. In the example, this seems like just subtracting 10 from everyone, which is 0 in the dogless state and -3 with the dog. Thus: no puppy.

comment by Dagon · 2020-10-19T21:30:07.243Z · LW(p) · GW(p)

The puppy example seems pretty simple.  Nonexistent things don't have preferences, so that cell in the table is "n/a".  This is the same result as if you just ignored the heisenpuppy's value, but it wouldn't have to be (for instance, 14+2+5 > 10+10, so even if cedric only liked dogs a tiny amount, it'd be a net benefit to bring the dog into existence, but wouldn't be if the dog's happiness were unconsidered).

The rat king is similar to https://plato.stanford.edu/entries/repugnant-conclusion/ , but would be much stronger if you show the marginal decision, without the implication that if rats have a litter, that decision will continue to apply to all rats, regardless of situation.  Say, there's 100 rats that are marginally happy because there's just enough food.  Should they add 1 to their population?

I think the simulator is too far removed from decisions we actually make.  It's not a very good intuition pump because we don't have instinctive answers to the question.  Alternately, it's just an empirical question - try both and see which one is better.

Your realistic examples are not extrapolate-able from the simple examples.  Chickens hinges on weighting, not on existential ethics.  There are very few people who claim it's correct because it's best for the chickens (though some  will argue that it is better for the chickens, that's not their motivation).   There are lots who argue that human pleasure is far more important than chicken suffering.  

The parenthood question is murky because of massive externalities, and a fallacy in your premises - the impact on others (even excluding the potential child) is greater than on the parent.  Also, nobody's all that utilitarian - in real decisions, people prioritize themselves.

 

comment by dynomight · 2020-10-19T22:05:46.895Z · LW(p) · GW(p)

Can you clarify which answer you believe is the correct one in the puppy example? Or, even better: the current utility for the dog in the "yes puppy" example is 5. For what values do you believe it is correct to have or not have the puppy?

comment by Dagon · 2020-10-20T04:05:08.204Z · LW(p) · GW(p)

Given the setup (which I don't think applies to real-world situations, but that's the scenario given) that they aggregate preferences, they should get a dog whether or not they value the dog's preferences.  10 + 10 < 14 + 8 if they think of the dog as an object, and 10 + 10 < 14 + 8 + 5 if they think the dog has intrinsic moral relevance.  

It would be a more interesting example if the "get a dog" utilities were 11 and 8 for C and B.  In that case, they should NOT get a dog if the dog doesn't count in itself.  And they SHOULD get a dog if it counts.

But, of course, they're ignoring a whole lot of options (rows in the decision matrix).  Perhaps they should rescue an existing dog rather than bringing another into the world.  

comment by dynomight · 2020-10-20T22:55:22.032Z · LW(p) · GW(p)

I like your concept that the only "safe" way to use utilitarianism is if you don't include new entities (otherwise you run into trouble). But I feel like they have to be included in some cases. E.g. If I knew that getting a puppy would make me slightly happier, but the puppy would be completely miserable, surely that's the wrong thing to do?

(PS thank you for being willing to play along with the unrealistic setup!)