In favour of total utilitarianism over average
post by casebash · 2015-12-22T05:07:02.767Z · LW · GW · Legacy · 15 comments
While it may be intuitive to reduce population ethics to a single lottery, this is incorrect; instead, it can only be reduced to n repeated lotteries, where n is the number of people...
This post will argue that, within the framework of hedonic utilitarianism, total utilitarianism should be preferred over average utilitarianism. Preference utilitarianism will be left to future work. We will imagine collections of single-experience people (SEPs), each of whom has only a single experience that gains or loses them a certain amount of utility.
Both average and total utilitarianism begin with an axiom that seems obviously true. For total utilitarianism this axiom is: "It is good for a SEP with positive utility to occur if it doesn't affect anything else". This seems to be one of the most basic assumptions that one could choose to start with - it's practically equivalent to "It is good when good things occur". However, if it is true, then average utilitarianism is false, as a positive, but low utility SEP may bring the average utility down. Average utilitarianism also leads to the sadistic conclusion: if the existing SEPs all have negative utility, it tells us to prefer adding a SEP who suffers less than the current average over adding no-one at all, since doing so raises the average. Total utilitarianism does lead to the repugnant conclusion, but contrary to common perception, near-zero but still positive utility is not a state of terrible suffering like most people imagine. Instead it is a state where life is good and worth living on the whole.
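To make both problems concrete, here is a toy illustration (the utility numbers are invented for the example):

```python
# Toy illustration of the two problems above; all utility numbers are made up.
happy_world = [10, 10, 10]        # three SEPs: average 10, total 30
added_happy = happy_world + [1]   # average drops to 7.75, total rises to 31
# Average utilitarianism calls this addition bad; total utilitarianism calls it good.

miserable_world = [-10, -10, -10]        # average -10, total -30
added_sufferer = miserable_world + [-5]  # average rises to -8.75, total falls to -35
# The sadistic conclusion: average utilitarianism prefers adding this sufferer
# over adding no-one, because the average goes up.

for name, world in [("happy", happy_world), ("happy + 1", added_happy),
                    ("miserable", miserable_world), ("miserable + sufferer", added_sufferer)]:
    print(name, "| total:", sum(world), "| average:", sum(world) / len(world))
```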
On the other hand, average utilitarianism starts from its own "obviously true" axiom: that we should maximise the average expected utility for each person, independent of the total utility. We note that average utilitarianism depends on a statement about aggregations (expected utility), while total utilitarianism depends on a statement about an individual occurrence that doesn't interact with any other SEPs. Given the complexities of aggregating utility, we should be more inclined to trust the statement about individual occurrences than the one about a complex aggregate. This is far from conclusive, but I still believe that this is a useful exercise.
So why is average utilitarianism flawed? The strongest argument for average utilitarianism is the aforementioned "obviously true" assumption that we should maximise expected utility. Accepting this assumption would reduce the situation as follows:
Original situation -> expected utility
Given that we already exist, it is natural for us to want the average expected utility to be high, and to prioritise this over increasing the population, seeing as not existing is not inherently negative. However, while not existing is not negative in the absolute sense, it is still negative in the relative sense due to opportunity cost. It is plausibly good for more happy people to exist, so reducing the situation as we did above discards important information without justification. Another way of stating the situation is as follows: while it may be intuitive to reduce population ethics to a single lottery, this is incorrect; instead, it can only be reduced to n repeated lotteries, where n is the number of people. This situation can be represented as follows:
Original situation -> (expected utility, number of SEPs)
Since this is a tuple, it doesn't provide an automatic ranking of situations; it needs to be subjected to a further transformation before a ranking can occur. It is now clear that the first model assumed away the possible importance of the number of SEPs without justification, and therefore assumed its conclusion. Since the strongest argument for average utilitarianism is invalid, the question becomes: what other reasons are there for believing in average utilitarianism? As we have already noted, the repugnant conclusion is much less repugnant than it is generally perceived to be. This leaves us with very little in the way of logical reasons to believe in average utilitarianism. On the other hand, as already discussed, there are very good reasons for believing in total utilitarianism, or at least something much closer to total utilitarianism than to average utilitarianism.
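To make the contrast concrete, here is a minimal sketch of the two reductions (the function names, and the particular final transformation shown, are illustrative choices rather than anything the argument depends on):

```python
# A "situation" is just the list of utilities of the SEPs who would exist.

def average_reduction(situation):
    # The first reduction: discards the number of SEPs entirely.
    return sum(situation) / len(situation)

def tuple_reduction(situation):
    # The second reduction: keeps both pieces of information.
    return (sum(situation) / len(situation), len(situation))

def total_ranking(situation):
    # One possible further transformation of the tuple into a ranking:
    # expected utility times the number of SEPs, i.e. total utility.
    avg, n = tuple_reduction(situation)
    return avg * n

print(average_reduction([5, 5]))  # 5.0 -- indistinguishable from one SEP at 5
print(tuple_reduction([5, 5]))    # (5.0, 2) -- the population size survives
print(total_ranking([5, 5]))      # 10.0
```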
I made this argument using SEPs for simplicity, but there's no reason why the same result shouldn't also apply to complete people. I also believe that this line of argumentation has implications for the anthropic principle.
The following was originally towards the start of the article. I think this is still an interesting approach, but I'm not convinced that it has any benefits over noting that it seems absurd that creating SEPs with positive utility could be bad, or over criticising the sadistic conclusion. My attempted formalisation also needs a bit more work to make it correct. That said, imagining combining universes provides a very interesting method for thinking about this problem.
Let's begin by considering a relatively simple argument for total utilitarianism. If we have a group of SEPs who experience different amounts of utility, we can quantify how good or bad the group is in utilitarian terms by imagining how much negative or positive utility a single SEP would need to have in order to balance out the existence of the group, so that the existence of the combined group is neither good nor bad. If we accept that groups can cancel out like this, then this pushes us towards an aggregate model of utilitarianism because, for example, doubling the number of SEPs in a group where all the members have positive utility seems very helpful when we want this group to cancel out a SEP with negative utility. Once we accept both that sheer weight of numbers can allow a group with small but positive utility to cancel out SEPs with arbitrarily large amounts of negative utility, and that the SEP required to cancel out a group is a valid measure of how good that group is in utilitarian terms, we have pretty much proven the repugnant conclusion. Even if the actual aggregate function doesn't end up being a total function, proving the repugnant conclusion would provide us with a total-like function and would also disarm the most severe objection to total utilitarianism. (There is also a very convincing argument where you argue that it is better to have a larger population with 99 utility than a million people with 100 utility, and then you keep repeating this step until you end up with a ridiculously large number of people with small utility. I'll add a link if I can find it.)
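The chain argument in the parenthetical can be simulated directly. The specific numbers below are invented, but any population multiplier and per-person discount whose product exceeds 1 gives the same result:

```python
# Sketch of the chain argument: repeatedly trade a little per-person utility
# for a much larger population. The specific numbers are illustrative only.
population, utility = 1_000_000, 100.0

for step in range(10):
    new_population, new_utility = population * 2, utility * 0.99
    # Total utility strictly increases at each step (factor 2 * 0.99 = 1.98)...
    assert new_population * new_utility > population * utility
    population, utility = new_population, new_utility

# ...so iterating long enough yields an enormous population whose members each
# have small but still positive utility: the repugnant conclusion.
print(population, round(utility, 2))  # 1024000000 90.44
```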
Let's try to state our assumptions more clearly and see if they are justified. The first assumption: for a SEP with any amount of negative utility, is it the case that there is some number of SEPs with small positive utility whose existence would lead to a neutral universe if no other SEPs existed? I'll note that this kind of cancellation happens under both average and aggregate utilitarianism. It also seems pretty much equivalent to the torture vs. dust specks problem: we can convert the large negative utility into dust specks, then let each SEP with a small positive utility cancel out a dust speck.
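One way to formalise this first assumption (a sketch; the symbols are introduced here for convenience):

```latex
% Neutrality condition for the first assumption.
% One SEP with utility -U (U > 0) plus n SEPs each with utility \epsilon > 0
% form a neutral universe under a total-style aggregation exactly when:
\[
    -U + n\epsilon = 0
    \quad\Longleftrightarrow\quad
    n = \frac{U}{\epsilon},
\]
% so any finite U can be cancelled by finitely many small-utility SEPs. Note
% that the same n also makes the average (-U + n\epsilon)/(n + 1) equal to
% zero, matching the claim that this happens under average utilitarianism too.
```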
The second assumption: is the ability to cancel out a person with negative utility a good metric for how good a situation is in utilitarian terms? Let's suppose we were able to show something a bit more general: that if universe1 is better than universe2, and universe3 is better than universe4, then universe1&3 is better than universe2&4, where universeA&B contains all the SEPs in universeA and universeB. Furthermore, if universe1 is just as good as universe2, then universe1&C is just as good as universe2&C. If these axioms are true, then it is perfectly valid to cancel out groups equivalent to an empty universe before comparing the remaining SEPs in order to determine whether one universe is better than another. Why would we believe this? Well, it seems rather strange to think that whether a SEP should occur or not depends on what is happening elsewhere in the universe, given that one SEP experiencing utility does not affect how another SEP experiences utility. It seems bizarrely inconsistent that we might want one half of the universe to be in a state that would be worse than an empty universe if it existed on its own.
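These two axioms can be stated more formally (a sketch; the combination notation is introduced here for convenience):

```latex
% Write A \oplus B for the universe containing all the SEPs of A and of B,
% A \succ B for "A is better than B", and A \sim B for "A is exactly as
% good as B".
\[
    A \succ B \;\wedge\; C \succ D
    \;\Longrightarrow\;
    A \oplus C \succ B \oplus D
\]
\[
    A \sim B
    \;\Longrightarrow\;
    A \oplus C \sim B \oplus C
\]
% Together these let us cancel any sub-universe that is exactly as good as an
% empty universe before comparing whatever SEPs remain.
```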
15 comments
Comments sorted by top scores.
comment by kilobug · 2015-12-22T10:12:32.559Z · LW(p) · GW(p)
The same way that human values are complicated and can't be summarized as "seek happiness!", the way we should aggregate utility is complicated and can't be summarized with just a sum or an average. Trying to use too simple a metric will lead to ridiculous cases (utility monster, ...). The formula we should use to aggregate individual utilities is likely to involve the total, median, average, Gini coefficient, and probably other statistical tools, and finding it is a significant part of finding our CEV.
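For concreteness, here is a small sketch of the statistics mentioned above, computed over a made-up list of utilities:

```python
# Sketch of the aggregation statistics mentioned above, over invented utilities.
from statistics import mean, median

def gini(utilities):
    # Mean absolute difference over all ordered pairs, divided by twice the mean.
    # (Assumes non-negative utilities for simplicity.)
    n = len(utilities)
    diff_sum = sum(abs(a - b) for a in utilities for b in utilities)
    return diff_sum / (2 * n * n * mean(utilities))

population = [1, 5, 5, 9, 30]
print("total:  ", sum(population))     # 50
print("average:", mean(population))    # 10
print("median: ", median(population))  # 5
print("gini:   ", round(gini(population), 3))
```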
Replies from: casebash, UmamiSalami
↑ comment by UmamiSalami · 2015-12-23T00:45:48.283Z · LW(p) · GW(p)
The problem is that by doing that you are making your position that much more arbitrary and contrived. It would be better if we could find a moral theory that has a solid, parsimonious basis, and it would be surprising if the fabric of morality involved complicated formulas.
Replies from: kilobug
↑ comment by kilobug · 2015-12-23T08:39:13.565Z · LW(p) · GW(p)
There is no objective absolute morality that exists in a vacuum. Our morality is a byproduct of evolution and culture. Of course we should use rationality to streamline and improve it, not limit ourselves to the intuitive version that our genes and education gave us. But that doesn't mean we can streamline it to the point of simple average or sum, and yet have it remain even roughly compatible with our intuitive morality.
Utility theory, prisoner's dilemma, Occam's razor, and many other mathematical structures put constraints on what a self-consistent, formalized morality has to be like. But they can't and won't pinpoint a single formula in the huge hypothesis space of morality, so we'll always have to rely heavily on our intuitive morality in the end. And this one isn't simple, and can't be made that simple.
That's the whole point of the CEV, finding a "better morality", that we would follow if we knew more, were more what we wished we were, but that remains rooted in intuitive morality.
Replies from: UmamiSalami
↑ comment by UmamiSalami · 2015-12-28T05:38:44.374Z · LW(p) · GW(p)
There is no objective absolute morality that exists in a vacuum.
No, that's highly contentious, and even if it's true, it doesn't grant a license to promote any odd utility rule as ideal. The anti-realist also may have reason to prefer a simpler version of morality.
Utility theory, prisoner's dilemma, Occam's razor, and many other mathematical structures put constraints on what a self-consistent, formalized morality has to be like. But they can't and won't pinpoint a single formula in the huge hypothesis space of morality, so we'll always have to rely heavily on our intuitive morality in the end. And this one isn't simple, and can't be made that simple.
There are much more relevant factors in building and choosing moral systems than those mathematical structures, whose relevance to moral epistemology is dubious in the first place.
That's the whole point of the CEV, finding a "better morality", that we would follow if we knew more, were more what we wished we were, but that remains rooted in intuitive morality.
It's not obvious that we would be more likely to believe anything in particular if we knew more and were more what we wished we were. CEV is a nice way of making different people's values and goals fit together, but it makes no sense to propose it as a method of actual moral epistemology.
comment by CookieUtilityMonster · 2015-12-22T16:50:52.847Z · LW(p) · GW(p)
Me agree. But remember, me gets more utility from a single chocolate chip than the entire range of possible utility variation across hundreds of trillions of human people.
Best way to increase total utility is to create and enslave as many people as possible, making cookies for me. The most negative-utility possible human life is still net positive utility for the universe if it manages to generate one delicious crunchy crumb in its otherwise-miserable existence.
Replies from: casebash, UmamiSalami
↑ comment by UmamiSalami · 2015-12-23T00:48:57.827Z · LW(p) · GW(p)
Would you accept a lottery where there was 1 ticket to maintain your life as a satisfied cookie utility monster and hundreds of trillions of tickets to become a miserable enslaved cookie maker?
Or, after rational reflection and experiencing the alternate possibilities, would you rather prefer a guaranteed life of threshold satisfaction?
comment by Dagon · 2015-12-22T15:04:03.261Z · LW(p) · GW(p)
"It is good for a SEP with positive utility to occur if it doesn't affect anything else"
This may be true, but it doesn't actually occur. There are zero things that don't affect anything else. Treating SEPs as distinct from actual interactive entities doesn't make anything clearer.
Replies from: casebash
↑ comment by casebash · 2015-12-22T15:16:06.115Z · LW(p) · GW(p)
http://lesswrong.com/lw/bwp/please_dont_fight_the_hypothetical/
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2015-12-24T11:55:10.484Z · LW(p) · GW(p)
I think that "this case will never occur in real life, thus we should ignore it when evaluating the validity of a moral theory" is a valid objection to the premises of a thought experiment, for the same reason why "this classification algorithm systematically misclassifies images of dogs as cats, but we were only going to use it for spam filtering not for image recognition so that's irrelevant" is a valid argument.
Replies from: casebash
↑ comment by casebash · 2015-12-24T12:33:30.902Z · LW(p) · GW(p)
I've already written a few articles on this on Less Wrong (http://lesswrong.com/lw/mt5/hypothetical_situations_are_not_meant_to_exist/), plus a few others.
comment by Gunslinger (LessWrong1) · 2015-12-22T06:57:13.265Z · LW(p) · GW(p)
Any prerequisites to reading this? It feels too abstract, so I think I'm missing something.
I also haven't found anything about SEPs; are they a similar name or idea to P-zombies?
Replies from: casebash
↑ comment by casebash · 2015-12-22T07:01:56.082Z · LW(p) · GW(p)
You can replace every instance of SEP (single experience person) with "person". I came up with the terminology myself. The idea is that it is simpler to theorise about multi-person utility when we ignore the fact that people have multiple experiences over their lifetime.
You may want to read about https://en.wikipedia.org/wiki/Average_and_total_utilitarianism and the repugnant conclusion: http://utilitarianism.wikia.com/wiki/Repugnant_conclusion
Replies from: LessWrong1
↑ comment by Gunslinger (LessWrong1) · 2015-12-22T07:31:09.749Z · LW(p) · GW(p)
I'm not sure what to say about this, mainly because of the happiness stuff. What do we know about happiness that we can confidently go forward with min-maxing it?