comment by Lukas_Gloor ·
2014-12-09T14:34:29.515Z
I thought about this question a while ago and have been meaning to write about it sometime. This is a good opportunity.
Terminology: Other commenters have pointed out that there are differing definitions of the word "utilitarianism". I think it is clear that the article in question is talking about utilitarianism as an ethical theory (or rather, a family of ethical theories). As such, utilitarianism is a form of consequentialism, the view that the right thing to do is whatever produces the best state of affairs. Utilitarianism differs from other forms of consequentialism in that what it considers good/valuable/worth achieving is tied directly to conscious beings. An example of a non-utilitarian consequentialist theory would be the view that knowledge is the most important thing and that we should all strive to advance science (at all costs).
Regarding the question itself, two points are immediately worth noting:
1) Utilitarianism (and any sort of consequentialism), if it is indeed demanding, is only demanding in certain empirical situations. If the world is already perfect, you don't have to do anything!
2) For every consequentialist view, there are empirical situations where achieving the best consequences is extremely demanding. Just imagine that the desired state of affairs is really hard to attain.
So my first reply to people who criticise utilitarianism for being too demanding is the following:
Yes, it's very unfortunate that the world is so messed up, but it's not the fault of the utilitarians!
Further, the quoted statement in bold speaks of certain actions being "not just admirable, but morally obligatory". I find this framing misleading. I believe that people should taboo words like "morally obligatory" in ethical discussions. It makes it seem like there is some external moral standard that humans are supposed to obey -- but what would that standard be, and more importantly, why should we care? In my disclaimer on terminology, I wrote that I'm referring to utilitarianism as an ethical theory. I don't intend this to mean that utilitarians are committed to the claim that there are universally valid "ethical truths". I would define "utilitarian" as: "someone who would voluntarily take a pill that turns them into a robot that goes on to perfectly maximize expected utility". Here "utility" means "world-states that are good for sentient individuals", with "good" spelled out in non-moral terms, depending on which branch of utilitarianism one subscribes to (it could be that, e.g., preference-fulfillment is what matters to you, or contentment, or the sum of happiness minus suffering). On this interpretation, a utilitarian is not committed to the view that non-utilitarian people are "making a mistake" -- perhaps they just care about different things!
According to the metaethical view I just sketched, metaethical anti-realism, the demandingness of utilitarianism loses its scariness. If something is demanded of you against your will, you're going to object all the more the more demanding the request is. However, if you have a particular goal in life and find out that circumstances are unfortunately quite dire, so that achieving your goal will be very hard, your objection will be directed at the state of the world, not at your own goal (hopefully, anyway; sometimes people irrationally do the opposite).
Yes, utilitarianism ranks actions according to how much expected utility they produce, and only one action will be "best". However, it would be very misleading to apply moral terms like "only the best action is right, all the others are wrong". Unlike deontology, where all you need to do is not violate a set of rules, utilitarianism should be thought of as an open-ended game in which you can score points, and you simply try to score as many points as possible. Yes, there is just one best course of action, but it can still make a huge difference whether you take, say, the fifteenth-best action or the nineteenth-best. For utilitarians, moral praise is merely instrumental: they want to blame and praise people in whatever way produces the best outcome. This can include, for instance, praising people for actions that are less than perfect.
So in part, the demandingness objection against utilitarianism relies on an uncharitable interpretation/definition of "utilitarianism", one that commits utilitarians to belief in moral realism. (I consider this interpretation uncharitable because I think the entire concept of "moral realism" is, like libertarian free will, a confused idea that cannot be defined in clear terms without losing at least part of the connotations we intuitively consider important.)
Another reason why I think the demandingness objection is a bad objection is that people usually apply it in a naive, short-sighted way. The author of the quote in question did so, for instance: "It also appears to imply that donating all your money to charity beyond what you need to survive (…)"
This is wrong. It only implies donating all your money to charity beyond what you need to remain maximally productive in the long run. Empirical studies show that being poor decreases the quality of your decision-making. Further, putting too much pressure on yourself often leads to burnout, which causes a significant loss of productivity in the long run. I find that people tend to overestimate how demanding a typical utilitarian life is. They are right insofar as there could be situations where pursuing the utilitarian goal requires significant self-sacrifice; such situations are definitely logically possible, but I think they are much rarer than people assume.
The reason is that people tend to conflate "trying to act like a perfectly rational, super-productive utilitarian robot would act" with "trying to maximise expected utility given all your personal constraints". Utilitarianism implies the latter, not the former. Utilitarianism refers to desiring a specific overall outcome, not to a specific decision procedure for every action you take. It is perfectly in line with utilitarianism to come to a conclusion such as: "My personality happens to be such that thinking about all the suffering in the world every day is just too much for me; I literally couldn't keep it up for more than two months. I will set a charity budget once a year, donate what's in it, and for the rest of the time try not to worry much about it." If it is indeed the case that doing things differently would lead this person to give up the entire endeavour of donating money, then this is literally the best thing for this person to do. Humans need some degree of happiness and luxury if they want to remain productive and clear-headed in the long run.
The whole thing is also extremely person-dependent. For some people, "trying to maximise expected utility given all your personal constraints" will look more like "trying to act like a perfectly rational, super-productive utilitarian robot would act" than it will for others. Some people are just naturally better at achieving a given goal than others; this depends both on the goal and on the personality traits and assets of the person in question.
Finally, let's ask whether "trying to maximise expected utility given all your personal constraints" will, on average, under real-world circumstances, prove demanding or not. I suggest defining "demanding" as follows: goal A is more demanding than goal B if people who try to rationally achieve A have a lower average happiness over a given time period than people who try to rationally achieve B. To measure this empirically, I would suggest contacting people at random times during the day or night and asking them to report how they are feeling at that very moment. When it comes to momentary happiness, it is trivially true that trying to maximise your momentary happiness will make you happier than trying to be a utilitarian. Utilitarians might object by citing the paradox of hedonism: when people focus only on their own personal happiness, their life soon feels sad. However, this objection makes exactly the mistake I discussed earlier. If it is truly the case that explicitly focusing on your personal happiness makes you miserable, then of course the rational thing for a person with this goal to do would be to self-modify and convince themselves to follow a different goal.
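To put the proposed definition semi-formally (the notation here is mine and purely illustrative): let $h_X(p, t)$ be the momentary happiness that person $p$, rationally pursuing goal $X$, reports when sampled at a random time $t$. Then goal $A$ counts as more demanding than goal $B$ iff

$$\frac{1}{|P_A|\,|T|}\sum_{p \in P_A}\sum_{t \in T} h_A(p, t) \;<\; \frac{1}{|P_B|\,|T|}\sum_{p \in P_B}\sum_{t \in T} h_B(p, t),$$

where $P_A$ and $P_B$ are the groups of people rationally pursuing $A$ and $B$ respectively, and $T$ is the set of random sampling times over the period in question.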
There is a distinction between the experiencing self and the remembering self, which is why it would be a completely different question to ask people "how happy are you with your life as a whole?". For instance, I read somewhere that mothers (compared to women without children) tend to be less happy in the average moment, but happier with their life as a whole. Which of the two do you care about more? I would assume that people are happy with their life as a whole if they know what they want in life, if they think they have made good choices regarding their goals, and if they keep getting closer to those goals. At least on the first part, knowing what you want in life, utilitarianism does very well.