Thoughts on the Repugnant Conclusion

post by Tomás B. (Bjartur Tómas) · 2021-03-07T19:00:37.056Z · 4 comments

When we accept the repugnant conclusion, we trade lower average utility for greater total utility. This is unavoidable. But the image the repugnant conclusion evokes in many people's minds (of trading higher average toil and squalor for an economic surplus that is spent creating more agents with lives barely worth living) is not unavoidable, and seems to me quite unlikely: to the extent one can create economic surplus without experience-moments, or with non-negative experience-moments, one will.

That is, the discourse on this topic commonly assumes that the agents creating the instrumental wealth of society must also be the agents that embody the value of society. This is true now, but seems unlikely to hold in the future: about as unlikely as a paperclip maximizer whose resource-acquisition robots are themselves made of paperclips. It is possible that one cannot create surplus without negative experience-moments, but this is, at least, not obviously true.

Also, the phrase "barely worth living" is evocative in ways that are not useful. It is much clearer to reword this statement into its more general form: "barely meeting the criteria for me valuing them."

For me this rewording takes the teeth out of the statement, as anything that optimizes will optimize for the simplest structure that qualifies as valuable to its utility function. This is just trivially true and not disheartening. If you are not happy with the minimal structure a utility function optimizes for, it is the wrong utility function.

Whatever eudaimonia is, it is just a pattern of matter and energy evolving through time. An agent optimizing for eudaimonia has a light-cone that they need to transform into a eudaimonic shape. And how the light-cone is sculpted depends on your definition of eudaimonia.

So what is the real trade-off, then? For want of a better word, the trade-off seems to be the size of the "soul" of each agent you create. And this will ultimately depend on how big the "minimal viable soul" is. In the limit of large souls, you get a utility monster that uses most of the surplus of the light-cone. Should small souls qualify, then a universe full of joyously ecstatic shrimp may be on the table.
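To make this trade-off concrete, here is a minimal toy sketch (my own simplifying assumptions, not anything argued in the post): a fixed resource budget is divided among identical agents, any agent below some minimum "soul size" embodies no value at all, and per-agent value grows with soul size but with diminishing returns. Under those assumptions, a total-value maximizer pushes soul size down to the smallest value that still qualifies.

```python
import math

# Hypothetical parameters, for illustration only.
MIN_SOUL = 1.0    # smallest soul that still embodies any value
BUDGET = 1_000.0  # total resources available for value-embodying agents


def value_per_agent(soul_size: float) -> float:
    """Per-agent value: zero below the threshold, diminishing returns above it."""
    if soul_size < MIN_SOUL:
        return 0.0
    return math.sqrt(soul_size)


def total_value(soul_size: float) -> float:
    """Total value when the whole budget is split into agents of one soul size."""
    population = math.floor(BUDGET / soul_size)
    return population * value_per_agent(soul_size)


for size in (100.0, 10.0, 1.0):
    print(f"soul size {size:>5}: population {math.floor(BUDGET / size):>4}, "
          f"total value {total_value(size):.1f}")
# soul size 100.0: population   10, total value 100.0
# soul size  10.0: population  100, total value 316.2
# soul size   1.0: population 1000, total value 1000.0
```

Where the real minimum viable soul sits (closer to a human, a shrimp, or something else) is exactly the open question; and if per-agent value were instead superlinear in soul size, the same arithmetic would favor a single utility monster.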

However, the world of people toiling at the Malthusian limit barely eking out an existence does not seem, at least to me, to be the obvious prescription of the repugnant conclusion. 

4 comments


comment by Dagon · 2021-03-07T20:44:05.879Z

Also, the phrase "barely worth living" is evocative in ways that are not useful. It is much clearer to reword this statement into its more general form: "barely meeting the criteria for me valuing them."

This is likely an important crux. I have yet to see a believable theory of relative value of lives or moments across agents. "Barely meeting my value criteria" is probably not the same as "barely worth living". But it's the only proxy I have that makes any sense.

"minimal-sized qualifying soul" is exactly as repugnant as "large soul, minimally satisfied" to me.  And I even acknowledge that the model of size/desire of agents independent of their value isn't necessarily coherent, let alone true.

comment by Bunthut · 2021-03-07T21:47:15.784Z

For me this rewording takes the teeth out of the statement

And that's why you should be careful about it. With "barely worth living", you have a clear image of what you're thinking about. With "barely meeting the criteria for me valuing them" you probably don't; you just have an inferential assurance that you are happy applying the repugnant conclusion to them. The argument for total value is not actually any stronger or weaker than it was before - you've just decided that the intuition against it ought to be rationally overridden, because the opposite is "trivially true".

comment by chaosmage · 2021-03-08T13:45:41.336Z

I will reluctantly concede this is logical. If you want to optimize for maximal happiness, find out what the minimal physical correlate of happiness is, and build tiny replicators that do nothing but have a great time. Drown the planet in them. You can probably justify the expense of building ships and shipbuilders with the promise of more maximized happiness on other planets.

But this is basically a Grey Goo scenario. Happy Goo.

Yes, it's a logical conclusion; yes, it is repugnant; and I think it's a reductio ad absurdum of the whole idea of optimizing for conscious states. An even more dramatic one than wild animal suffering.

comment by Archer Sterling (archer-sterling) · 2021-03-09T04:58:15.271Z

Would everyone have the same standard of living? Or would there be a distribution of living standards with a hard minimum limit?

If the latter, then it's a non-issue.