A (small) critique of total utilitarianism

post by Stuart_Armstrong · 2012-06-26T12:36:38.339Z · LW · GW · Legacy · 237 comments

Contents

  A utility function does not compel total (or average) utilitarianism
  Total utilitarianism is neither simple nor elegant, but arbitrary
  The repugnant conclusion is at the end of a flimsy chain
  Hypothetical beings have hypothetical (and complicated) things to say to you
  Moral uncertainty: total utilitarianism doesn't win by default
  (Population) ethics is still hard

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and create/give birth to another being of comparable happiness (or preference satisfaction or welfare). In fact, if one can kill a billion people to create a billion and one, one is morally compelled to do so. And this is true for real people, not just thought-experiment people - living people with dreams, aspirations, grudges and annoying or endearing quirks. To avoid causing extra pain to those left behind, it is better that you kill off whole families and communities, so that no one is left to mourn the dead. In fact the most morally compelling act would be to kill off the whole of the human species, and replace it with a slightly larger population.

We have many real-world analogues to this thought experiment. For instance, it seems that there is only a small difference between the happiness of richer nations and poorer nations, while the first consume many more resources than the second. Hence to increase utility we should simply kill off all the rich, and let the poor multiply to take their place (continually bumping off any of the poor that gets too rich). Of course, the rich world also produces most of the farming surplus and the technological innovation that allow us to support a larger population. So we should aim to kill everyone in the rich world apart from farmers and scientists - and enough support staff to keep these professions running (Carl Shulman correctly points out that we may require most of the rest of the economy as "support staff". Still, it's very likely that we could kill off a significant segment of the population - those with the highest consumption relative to their impact on farming and science - and still "improve" the situation).

Even if it turns out to be problematic to implement in practice, a true total utilitarian should be thinking: "I really, really wish there was a way to do targeted killing of many people in the USA, Europe and Japan, large parts of Asia and Latin America and some parts of Africa - it makes me sick to the stomach to think that I can't do that!" Or maybe: "I really, really wish I could make everyone much poorer without affecting the size of the economy - I wake up at night with nightmares because these people remain above the poverty line!"

I won't belabour the point. I find those actions personally repellent, and I believe that nearly everyone finds them somewhat repellent or at least did so at some point in their past. This doesn't mean that it's the wrong thing to do - after all, the accepted answer to the torture vs dust speck dilemma feels intuitively wrong, at least the first time. It does mean, however, that there must be very strong countervailing arguments to balance out this initial repulsion (maybe even a mathematical theorem). For without that... how to justify all this killing?

Hence for the rest of this post, I'll be arguing that total utilitarianism is built on a foundation of dust, and thus provides no reason to go against your initial intuitive judgement in these problems. The points will be:

  1. Bayesianism and the fact that you should follow a utility function in no way compel you towards total utilitarianism. The similarity in names does not mean the concepts are on similarly rigorous foundations.
  2. Total utilitarianism is neither a simple nor an elegant theory. In fact, it is under-defined and arbitrary.
  3. The most compelling argument for total utilitarianism (basically the one that establishes the repugnant conclusion) is a very long chain of imperfect reasoning, so there is no reason for the conclusion to be solid.
  4. Considering the preferences of non-existent beings does not establish total utilitarianism.
  5. When considering competing moral theories, total utilitarianism does not "win by default" thanks to its large values as the population increases.
  6. Population ethics is hard, just as normal ethics is.

 

A utility function does not compel total (or average) utilitarianism

There are strong reasons to suspect that the best decision process is one that maximises expected utility for a particular utility function. Any process that does not do so leaves itself open to being money-pumped or taken advantage of. This point has been reiterated again and again on Less Wrong, and rightly so.
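To make the money pump concrete, here is a minimal sketch (in Python, with invented goods and fees): an agent whose preferences cycle A > B > C > A accepts each individually attractive trade, gets walked around the cycle, and ends up holding what it started with while steadily losing money.

```python
# Illustrative sketch (invented numbers): an agent with cyclic preferences
# A > B > C > A accepts any "upgrade" for a small fee, and so can be walked
# around the cycle, ending up with its original good but less money.

beats = {"A": "B", "B": "C", "C": "A"}   # each good is preferred to the one it 'beats'

def accepts_trade(offered, held):
    """The agent pays the fee whenever the offered good beats the held one."""
    return beats[offered] == held

holding, money, fee = "A", 100.0, 1.0

for _ in range(30):
    offer = next(g for g in beats if beats[g] == holding)   # the good that beats the current holding
    if accepts_trade(offer, holding):
        holding, money = offer, money - fee

print(holding, money)   # back to "A", 30 units poorer - the price of non-transitive preferences
```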

Your utility function must be over states of the universe - but that's the only restriction. The theorem says nothing further about the content of your utility function. If you prefer a world with a trillion ecstatic super-humans to one with a septillion subsistence farmers - or vice versa - then as long as you maximise your expected utility, the money pumps can't touch you, and the standard Bayesian arguments don't influence you to change your mind. Your values are fully rigorous.

For instance, in the torture vs dust speck scenario, average utilitarianism also compels you to take torture, as does a host of other possible utility functions. A lot of arguments around this subject that may implicitly feel like arguments in favour of total utilitarianism turn out to be nothing of the sort. For instance, avoiding scope insensitivity does not compel you to total utilitarianism, and you can perfectly well allow birth-death asymmetries or similar intuitions, while remaining an expected utility maximiser.
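As an illustration of how wide the space of coherent utility functions is, here is a toy sketch (the functional forms and numbers are mine, purely illustrative). Each of the three functions below is a perfectly good target for expected utility maximisation; they simply disagree about which world is better.

```python
# Illustrative only: three coherent utility functions over (population, average welfare)
# states. None of them is money-pumpable; they just encode different values.
import math

def total_util(n, avg):            # total utilitarianism
    return n * avg

def average_util(n, avg):          # average utilitarianism
    return avg

def capped_total_util(n, avg):     # values extra people, but with diminishing returns
    return math.log1p(n) * avg

worlds = {
    "a trillion ecstatic super-humans": (1e12, 100.0),
    "a septillion subsistence farmers": (1e24, 0.01),
}

for name, f in [("total", total_util), ("average", average_util), ("capped-total", capped_total_util)]:
    best = max(worlds, key=lambda w: f(*worlds[w]))
    print(f"{name:>12} utilitarianism prefers: {best}")
```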

 

Total utilitarianism is neither simple nor elegant, but arbitrary

Total utilitarianism is defined as maximising the sum of everyone's individual utility function. That's a simple definition. But what are these individual utility functions? Do people act like expected utility maximisers? In a word... no. In another word... NO. In yet another word... NO!

So what are these utilities? Are they the utility that the individuals "should have"? According to what and whose criteria? Is it "welfare"? How is that defined? Is it happiness? Again, how is that defined? Is it preferences? On what scale? And what if the individual disagrees with the utility they are supposed to have? What if their revealed preferences are different again?

There are (various different) ways to start resolving these problems, and philosophers have spent a lot of ink and time doing so. The point remains that total utilitarianism cannot claim to be a simple theory, if the objects that it sums over are so poorly and controversially defined.

And the sum itself is a huge problem. There is no natural scale on which to compare utility functions. Divide one utility function by a billion, multiply the other by e^π, and they are still perfectly valid utility functions. In a study group at the FHI, we've been looking at various ways of combining utility functions - equivalently, of doing interpersonal utility comparisons (IUC). It turns out to be very hard: there seems to be no natural way of doing this, and a lot has been written about it, concluding little. Unless your theory comes with a particular IUC method, the only way of summing these utilities is to make an essentially arbitrary choice for each individual before summing. Thus standard total utilitarianism is an arbitrary sum of ill-defined, non-natural objects.
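Here is a minimal sketch of the scale problem, again with invented numbers: rescaling one person's utility function leaves every one of their own choices unchanged, but it can flip which outcome the "sum of utilities" favours. The sum therefore depends on a normalisation choice that nothing in the individual utility functions determines.

```python
# Illustrative only: rescaling one person's utilities doesn't change their own
# choices, but it can flip which outcome maximises the "sum of utilities".

# Two people, two possible worlds, with (person1_utility, person2_utility):
worlds = {"X": (3.0, 1.0), "Y": (1.0, 2.0)}

def best_by_sum(worlds, scale1=1.0, scale2=1.0):
    return max(worlds, key=lambda w: scale1 * worlds[w][0] + scale2 * worlds[w][1])

print(best_by_sum(worlds))               # 'X': 3 + 1 = 4 beats 1 + 2 = 3
print(best_by_sum(worlds, scale2=1e9))   # 'Y': person 2's rescaled utility now dominates

# Person 2's own ranking (Y over X) is identical under both scales;
# only the interpersonal sum changed.
```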

Why then is it so popular? Well, one reason is that there are models that make use of something like total utilitarianism to great effect. Classical economic theory, for instance, models everyone as perfectly rational expected utility maximisers. It gives good predictions - but it remains a model, with a domain of validity. You wouldn't conclude from that economic model that, say, mental illnesses don't exist. Similarly, modelling each life as having the same value and maximising expected lives saved is sensible and intuitive in many scenarios - but not necessarily all.

Maybe if we had a bit more information about the affected populations, we could use a more sophisticated model, such as one incorporating quality adjusted life years (QALYs). Or maybe we could let other factors affect our thinking - what if we had to choose between saving a population of 1000 versus a population of 1001, with the same average QALYs, but where the first set contained the entire Awá tribe/culture of 300 people, and the second was made up of representatives from much larger ethnic groups, much more culturally replaceable? Should we let that influence our decision? Well maybe we should, maybe we shouldn't, but it would be wrong to say "well, I would really like to save the Awá, but the model I settled on earlier won't allow me to, so I'd best follow the model". The models are there precisely to model our moral intuitions (the clue is in the name), not to freeze them.

 

The repugnant conclusion is at the end of a flimsy chain

There is a seemingly sound argument for the repugnant conclusion, which goes some way towards making total utilitarianism plausible. It goes like this:

  1. Start with a population of very happy/utilitied/welfared/preference satisfied people.
  2. Add other people whose lives are worth living, but whose average "utility" is less than that of the initial population.
  3. Redistribute "utility" in an egalitarian way across the whole population, increasing the average a little as you do so (but making sure the top rank have their utility lowered).
  4. Repeat as often as required.
  5. End up with a huge population whose lives are barely worth living.
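To make the structure of the chain concrete, here is a toy numeric version (the numbers are invented, and already presuppose exactly the kind of cardinal "utility" scale questioned below): the total keeps rising at every step, while each individual life converges towards barely worth living.

```python
# A toy numeric version of the chain (all numbers invented): each iteration adds
# lower-welfare people whose lives are still worth living, then "redistributes"
# so that the combined population's average rises a little while the former
# best-off are pulled down. The total keeps climbing; individual welfare shrinks.

population = [10.0] * 1000                       # step 1: a small, very happy population

for _ in range(20):                              # step 4: repeat
    avg = sum(population) / len(population)
    population += [0.3 * avg] * len(population)  # step 2: add people below the current average
    new_level = 1.05 * sum(population) / len(population)  # step 3: level everyone slightly above the new average
    population = [new_level] * len(population)

print(len(population), population[0], sum(population))
# a vastly larger population, each life barely above zero, but a much bigger total
```

The sketch also shows where the rigour leaks out: the code has to pick a cardinal scale and a redistribution rule, which is exactly what the informal argument leaves undefined.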

If all these steps increase the quality of the outcome (and it seems intuitively that they do), then the end state must be better than the starting state, agreeing with total utilitarianism. So, what could go wrong with this reasoning? Well, as seen before, the term "utility" is very much undefined, as is its scale - hence "egalitarian" is extremely underdefined too. So this argument is not mathematically precise; its rigour is illusory. And when you recast the argument in qualitative terms, as you must, it becomes much weaker.

Going through the iteration, there will come a point when the human world is going to lose its last anime, its last opera, its last copy of the Lord of the Rings, its last mathematics, its last online discussion board, its last football game - anything that might cause more-than-appropriate enjoyment. At that stage, would you be entirely sure that the loss was worthwhile, in exchange for a weakly defined "more equal" society? More to the point, would you be sure that when iterating this process billions of times, every redistribution will be an improvement? This is a conjunctive statement, so you have to be nearly entirely certain of every link in the chain if you want to believe the outcome. And, to reiterate, these links cannot be reduced to simple mathematical statements - you have to be certain that each step is qualitatively better than the previous one.

And you also have to be certain that your theory does not allow path dependency. One can take the perfectly valid position that "If there were an existing poorer population, then the right thing to do would be to redistribute wealth, and thus lose the last copy of Akira. However, currently there is no existing poor population, hence I would oppose it coming into being, precisely because it would result in the loss of Akira." You can reject this type of reasoning, and a variety of others that block the repugnant conclusion at some stage of the chain (the Stanford Encyclopaedia of Philosophy has a good entry on the Repugnant Conclusion and the arguments surrounding it). But most reasons for doing so already presuppose total utilitarianism. In that case, you cannot use the above as an argument for your theory.

 

Hypothetical beings have hypothetical (and complicated) things to say to you

There is another major strand of argument for total utilitarianism, which claims that we owe it to non-existent beings to satisfy their preferences, that they would prefer to exist rather than remain non-existent, and hence we should bring them into existence. How does this argument fare?

First of all, it should be emphasised that one is free to accept or reject that argument without any fear of inconsistency. If one maintains that never-existent beings have no relevant preferences, then one will never stumble over a problem. They don't exist, they can't make decisions, they can't contradict anything. In order to raise them to the point where their decisions are relevant, one has to raise them to existence, in reality or in simulation. By the time they can answer "would you like to exist?", they already do, so you are talking about whether or not to kill them, not whether or not to let them exist.

But secondly, it seems that the "non-existent beings" argument is often advanced for the sole purpose of arguing for total utilitarianism, rather than as a defensible position in its own right. Rarely are its implications analysed. What would a proper theory of non-existent beings look like?

Well, for a start the whole happiness/utility/preference problem comes back with extra sting. It's hard enough to make a utility function out of real world people, but how to do so with hypothetical people? Is it an essentially arbitrary process (dependent entirely on "which types of people we think of first"), or is it done properly, teasing out the "choices" and "life experiences" of the hypotheticals? In that last case, if we do it in too much detail, we could argue that we've already created the being in simulation, so it comes back to the death issue.

But imagine that we've somehow extracted a utility function from the preferences of non-existent beings. Apparently, they would prefer to exist rather than not exist. But is this true? There are many people in the world who would prefer not to commit suicide, but would not mind much if external events ended their lives - they cling to life as a habit. Presumably non-existent versions of them "would not mind" remaining non-existent.

Even for those that would prefer to exist, we can ask questions about the intensity of that desire, and how it compares with their other desires. For instance, among these hypothetical beings, some would be mothers of hypothetical infants, leaders of hypothetical religions, inmates of hypothetical prisons, and would only prefer to exist if they could bring/couldn't bring the rest of their hypothetical world with them. But this is ridiculous - we can't bring the hypothetical world with them, they would grow up in ours - so are we only really talking about the preferences of hypothetical babies, or hypothetical (and non-conscious) foetuses?

If we do look at adults, bracketing the issue above, then we get some that would prefer that they not exist, as long as certain others do - or conversely that they not exist, as long as others also do not exist. How should we take that into account? Assuming the universe is infinite, any hypothetical being would exist somewhere. Is mere existence enough, or do we have to have a large measure or density of existence? Do we need them to exist close to us? Are their own preferences relevant - i.e. do we only have a duty to bring into the world those beings that would desire to exist in multiple copies everywhere? Or do we feel these have already "enough existence" and select the under-counted beings? What if very few hypothetical beings are total utilitarians - is that relevant?

On a more personal note, every time we make a decision, we eliminate a particular being. We can no longer be the person who took the other job offer, or read the other book at that time and place. As these differences accumulate, we diverge quite a bit from what we could have been. When we do so, do we feel that we're killing off these extra hypothetical beings? Why not? Should we be compelled to lead double lives, assuming two (or more) completely separate identities, to increase the number of beings in the world? If not, why not?

These are some of the questions that a theory of non-existent beings would have to grapple with, before it can become an "obvious" argument for total utilitarianism.

 

Moral uncertainty: total utilitarianism doesn't win by default

An argument that I have met occasionally is that while other ethical theories such as average utilitarianism, birth-death asymmetry, path dependence, preferences of non-loss of culture, etc... may have some validity, total utilitarianism wins as the population increases because the others don't scale in the same way. By the time we reach the trillion trillion trillion mark, total utilitarianism will completely dominate, even if we gave it little weight at the beginning.

But this is the wrong way to compare competing moral theories. Just as different people's utilities don't have a common scale, different moral utilities don't have a common scale. For instance, would you say that square-total utilitarianism is certainly wrong? This theory is simply total utilitarianism further multiplied by the population; it would correspond roughly to the number of connections between people. Or what about exponential-square-total utilitarianism? This would correspond roughly to the set of possible connections between people. As long as we think that exponential-square-total utilitarianism is not certainly completely wrong, then the same argument as above would show it quickly dominating as population increases.

Or what about 3^^^3 average utilitarianism - which is simply average utilitarianism, multiplied by 3^^^3? Obviously that example is silly - we know that rescaling shouldn't change anything about the theory. But similarly, dividing total utilitarianism by 3^^^3 shouldn't change anything, so total utilitarianism's scaling advantage is illusory.

As mentioned before, comparing different utility functions is a hard and subtle process. One method that seems to have surprisingly nice properties (to such an extent that I recommend always using it as a first try) is to normalise the lowest attainable utility to zero and the highest attainable utility to one, multiply by the weight you give to the theory, and then add the normalised utilities together.

For instance, assume you equally valued average utilitarianism and total utilitarianism, giving them both weights of one (and you had solved all the definitional problems above). Among the choices you were facing, the worst outcome for both theories is an empty world. The best outcome for average utilitarianism would be ten people with an average "utility" of 100. The best outcome for total utilitarianism would be a quadrillion people with an average "utility" of 1. Then how would either of those compare to ten trillion people with an average utility of 60? Well, the normalised utility of this for the average utilitarian is 60/100 = 0.6, while for the total utilitarian it's (ten trillion × 60)/(one quadrillion × 1) = 0.6 as well, giving 0.6 + 0.6 = 1.2. This is better than the score of the small world (1 + 10^-12) or the large world (0.01 + 1), so it beats either of the extremal choices.
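Here is a short sketch of that calculation with the toy numbers above (weights of one for each theory, the empty world as the worst outcome for both, and the best outcomes as given in the example):

```python
# Sketch of the normalisation method described above, using the toy numbers
# from the example (weights of 1 for each theory, worst outcome = empty world = 0).

def normalised_score(value, worst, best, weight=1.0):
    """Rescale a theory's utility so that worst -> 0 and best -> 1, then weight it."""
    return weight * (value - worst) / (best - worst)

# Candidate worlds as (population, average utility):
worlds = {
    "small":  (10, 100.0),    # best outcome for average utilitarianism
    "large":  (1e15, 1.0),    # best outcome for total utilitarianism (a quadrillion people)
    "middle": (1e13, 60.0),   # ten trillion people at average utility 60
}

avg_best   = max(avg for _, avg in worlds.values())        # 100
total_best = max(n * avg for n, avg in worlds.values())    # 1e15

for name, (n, avg) in worlds.items():
    score = (normalised_score(avg, 0.0, avg_best)          # average utilitarianism, weight 1
             + normalised_score(n * avg, 0.0, total_best)) # total utilitarianism, weight 1
    print(f"{name:>6}: {score:.4f}")
# middle scores 0.6 + 0.6 = 1.2, beating small (1 + 10^-12) and large (0.01 + 1)
```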

Extending this method, we can bring in such theories as exponential-square-total utilitarianism (probably with small weights!), without needing to fear that it will swamp all other moral theories. And with this normalisation (or similar ones), even small weights to moral theories such as "culture has some intrinsic value" will often prevent total utilitarianism from walking away with all of the marbles.

 

(Population) ethics is still hard

What is the conclusion? At Less Wrong, we're used to realising that ethics is hard, that value is fragile, that there is no single easy moral theory to safely program the AI with. But it seemed for a while that population ethics might be different - that there might be natural and easy ways to determine what to do with large groups, even though we couldn't decide what to do with individuals. I've argued strongly here that this is not the case - that population ethics remains hard, and that we have to figure out what theory we want to have without access to easy shortcuts.

But in another way it's liberating. To those who are mainly total utilitarians but internally doubt that a world with infinitely many barely happy people surrounded by nothing but "muzak and potatoes" is really among the best of the best - well, you don't have to convince yourself of that. You may choose to believe it, or you may choose not to. No voice in the sky or in the math will force you either way. You can start putting together a moral theory that incorporates all your moral intuitions - those that drove you to total utilitarianism, and those that don't quite fit in that framework.

237 comments

Comments sorted by top scores.

comment by CarlShulman · 2012-06-25T21:29:38.032Z · LW(p) · GW(p)

For instance, it seems that there is only a small difference between the happiness of richer nations and poorer nations, while the first consume many more resources than the second. Hence to increase utility we should simply kill off all the rich, and let the poor multiply to take their place (continually bumping off any of the poor that gets too rich).

This empirical claim seems ludicrously wrong, which I find distracting from the ethical claims. Most people in rich countries (except for those unable or unwilling to work or produce kids who will) are increasing the rate of technological advance by creating demand for improved versions of products, paying taxes, contributing to the above-average local political cultures, and similar. Such advance dominates resource consumption in affecting the welfare of the global poor (and long-term welfare of future people). They make charitable donations or buy products that enrich people like Bill Gates and Warren Buffett who make highly effective donations, and pay taxes for international aid.

The scientists and farmers use thousands of products and infrastructure provided by the rest of society, and this neglects industry, resource extraction, and the many supporting sectors that make productivity in primary and secondary production so high (accountants, financial markets, policing, public health, firefighting...). Even "frivolous" sectors like Hollywood generate a lot of consumer surplus around the world (they watch Hollywood movies in sub-Saharan Africa), and sometimes create net rewards for working harder to afford more luxuries (sometimes they may encourage leisure too much by a utilitarian standard).

Regarding other points:

fact that you should follow a utility function in no way compel you towards total utilitarianism

Yes, this is silly equivocation exacerbated by the use of similar-sounding words for several concepts, and one does occasionally see people making this error.

interpersonal utility comparisons (IUC)

The whole piece assumes preference utilitarianism, but much of it also applies to hedonistic utilitarianism: you need to make seemingly-arbitrary choices in interpersonal happiness/pleasure comparison as well.

When considering competing moral theories, total utilitarianism does not "win by default" thanks to its large values as the population increases.

I agree.

The most compelling argument for total utilitarianism (basically the one that establishes the repugnant conclusion) is a very long chain of imperfect reasoning, so there is no reason for the conclusion to be solid. Considering the preferences of non-existent beings does not establish total utilitarianism.

Maybe just point to the Stanford Encyclopedia of Philosophy entry and a few standard sources on this? This has been covered very heavily by philosophers, if not ad nauseam.

Replies from: Lukas_Gloor, Stuart_Armstrong
comment by Lukas_Gloor · 2012-06-25T22:24:22.610Z · LW(p) · GW(p)

Whatever the piece assumes, I don't think it's preference utilitarianism because then the first sentence doesn't make sense:

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and create/give birth to another being of comparable happiness.

Assuming most people have a preference to go on living, as well as various other preferences for the future, then killing them would violate all these preferences, and simply creating a new, equally happy being would still leave you with less overall utility, because all the unsatisfied preferences count negatively. (Or is there a version of preference utilitarianism where unsatisfied preferences don't count negatively?) The being would have to be substantially happier, or you'd need a lot more beings to make up for the unsatisfied preferences caused by the killing. Unless we're talking about beings that live "in the moment", where their preferences correspond to momentary hedonism.

Peter Singer wrote a chapter on killing and replaceability in Practical Ethics. His view is prior-existence, not total preference utilitarianism, but the points on replaceability apply to both.

comment by Stuart_Armstrong · 2012-06-26T12:56:59.294Z · LW(p) · GW(p)

Maybe just point to the Stanford Encyclopedia of Philosophy entry and a few standard sources on this? This has been covered very heavily by philosophers, if not ad nauseam.

Will add a link. But I haven't yet seen my particular angle of attack on the repugnant conclusion, and it isn't in the Stanford Encyclopaedia. The existence/non-existence seems to have more study, though.

comment by Vladimir_M · 2012-06-26T01:30:36.290Z · LW(p) · GW(p)

There is no natural scale on which to compare utility functions. [...] Unless your theory comes with a particular [interpersonal utility comparison] method, the only way of summing these utilities is to make an essentially arbitrary choice for each individual before summing. Thus standard total utilitarianism is an arbitrary sum of ill-defined, non-natural objects.

This, in my opinion, is by itself a decisive argument against utilitarianism. Without these ghostly "utilities" that are supposed to be measurable and comparable interpersonally, the whole concept doesn't even begin to make sense. And yet the problem is commonly ignored routinely and nonchalantly, even here, where people pride themselves on fearless and consistent reductionism.

Note that the problem is much more fundamental than just the mathematical difficulties and counter-intuitive implications of formal utilitarian theories. Even if there were no such problems, it would still be the case that the whole theory rests on an entirely imaginary foundation. Ultimately, it's a system that postulates some metaphysical entities and a categorical moral imperative stated in terms of the supposed state of these entities. Why would we privilege that over systems that postulate metaphysical entities and associated categorical imperatives of different kinds, like e.g. traditional religions?

(If someone believes that there is a way how these interpersonally comparable utilities could actually be grounded in physical reality, I'd be extremely curious to hear it.)

Replies from: Jayson_Virissimo, private_messaging, Ghatanathoah, Lukas_Gloor
comment by Jayson_Virissimo · 2012-06-26T05:21:09.099Z · LW(p) · GW(p)

(If someone believes that there is a way how these interpersonally comparable utilities could actually be grounded in physical reality, I'd be extremely curious to hear it.)

I asked about this before in the context of one of Julia Galef's posts about utilitarian puzzles and received several responses. What is your evaluation of the responses (personally, I was very underwhelmed)?

Replies from: Vladimir_M, Viliam_Bur
comment by Vladimir_M · 2012-06-26T06:36:50.629Z · LW(p) · GW(p)

The only reasonable attempt at a response in that sub-thread is this comment. I don't think the argument works, though. The problem is not just disagreement between different people's intuitions, but also the fact that humans don't do anything like utility comparisons when it comes to decisions that affect other people. What people do in reality is intuitive folk ethics, which is basically virtue ethics, and has very little concern with utility comparisons.

That said, there are indeed some intuitions about utility comparison, but they are far too weak, underspecified, and inconsistent to serve as a basis for extracting an interpersonal utility function, even if we ignore disagreements between people.

Replies from: Will_Sawin
comment by Will_Sawin · 2012-06-26T22:08:58.543Z · LW(p) · GW(p)

Intuitive utilitarian ethics are very helpful in everyday life.

Replies from: Salemicus, Vladimir_M
comment by Salemicus · 2012-06-29T00:52:35.938Z · LW(p) · GW(p)

There is the oft-repeated anecdote of the utilitarian moral philosopher weighing up whether to accept a job at Columbia. He would get more money, but it would uproot his family, but it might help his career... a familiar kind of moral dilemma. Asking his colleague for advice, he got told "Just maximise total utility." "Come on," he is supposed to have replied, "this is serious!"

I struggle to think of any moral dilemma I have faced where utilitarian ethics even provide a practical framework for addressing the problem, let alone a potential answer.

Replies from: gwern, Will_Newsome, Will_Sawin
comment by Will_Newsome · 2012-07-01T23:56:49.529Z · LW(p) · GW(p)

That anecdote is about a decision theorist, not a moral philosopher. The dilemma you describe is a decision theoretic one, not a moral utilitarian one.

comment by Will_Sawin · 2012-06-29T02:52:36.175Z · LW(p) · GW(p)

Writing out costs and benefits is a technique that is sometimes helpful.

Replies from: Salemicus
comment by Salemicus · 2012-06-30T14:32:16.732Z · LW(p) · GW(p)

Sure, but "costs" and "benefits" are themselves value-laden terms, which depend on the ethical framework you are using. And then comparing the costs and the benefits is itself value-laden.

In other words, people using non-utilitarian ethics can get plenty of value out of writing down costs and benefits. And people using utilitarian ethics don't necessarily get much value (doesn't really help the philosopher in the anecdote). This is therefore not an example of how utilitarian ethics are useful.

Replies from: Will_Sawin
comment by Will_Sawin · 2012-06-30T16:45:34.618Z · LW(p) · GW(p)

Writing down costs and benefits is clearly an application of consequentialist ethics, unless things are so muddied that any action might be an example of any ethic. Consequentialist ethics need not be utilitarian, true, but they are usually pretty close to utilitarian. Certainly closer to utilitarianism than to virtue ethics.

Replies from: Salemicus
comment by Salemicus · 2012-06-30T20:21:26.144Z · LW(p) · GW(p)

Writing down costs and benefits is clearly an application of consequentialist ethics.

No, because "costs" and "benefits" are value-laden terms.

Suppose I am facing a standard moral dilemma; should I give my brother proper funerary rites, even though the city's ruler has forbidden it. So I take your advice and write down costs and benefits. Costs - breaching my duty to obey the law, punishment for me, possible reigniting of the city's civil war. Benefits - upholding my duty to my family, proper funeral rites for my brother, restored honour. By writing this down I haven't committed to any ethical system, all I've done is clarify what's at stake. For example, if I'm a deontologist, perhaps this helps clarify that it comes down to duty to the law versus duty to my family. If I'm a virtue ethicist, perhaps this shows it's about whether I want to be the kind of person who is loyal to their family above tawdry concerns of politics, or the kind of person who is willing to put their city above petty personal concerns. This even works if I'm just an egoist with no ethics; is the suffering of being imprisoned in a cave greater or less than the suffering I'll experience knowing my brother's corpse is being eaten by crows?

Ironically, the only person this doesn't help is the utilitarian, because he has absolutely no way of comparing the costs and the benefits - "maximise utility" is a slogan, not a procedure.

Replies from: Will_Sawin
comment by Will_Sawin · 2012-06-30T22:40:05.020Z · LW(p) · GW(p)

What are you arguing here? First you argue that "just maximize utility" is not enough to make a decision. This is of course true, since utilitarianism is not a fully specified theory. There are many different utilitarian systems of ethics, just as there are many different deontological ethics and many different egoist ethics.

Second you are arguing that working out the costs and benefits is not an indicator of consequentialism. Perhaps this is not perfectly true, but if you follow these arguments to their conclusion then basically nothing is an indicator of any ethical system. Writing a list of costs and benefits, as these terms are usually understood, focuses one's attention on the consequences of the action rather than the reasons for the action (as the virtue ethicists care about) or the rules mandating or forbidding an action (as the deontologists care about). Yes, the users of different ethical theories can use pretty much any tool to help them decide, but some tools are more useful for some theories because they push your thinking into the directions that theory considers relevant.

Are you arguing anything else?

comment by Vladimir_M · 2012-06-26T23:09:46.534Z · LW(p) · GW(p)

Could you provide some concrete examples?

Replies from: Will_Sawin, army1987
comment by Will_Sawin · 2012-06-29T02:48:49.913Z · LW(p) · GW(p)

I am thinking about petty personal disputes, say if one person finds something that another person does annoying. A common gut reaction is to immediately start staking territory about what is just and what is virtuous and so on, while the correct thing to do is focus on concrete benefits and costs of actions. The main reason this is better is not because it maximizes utility but because it minimizes argumentativeness.

Another good example is competition for a resource. Sometimes one feels like one deserves a fair share and this is very important, but if you have no special need for it, nor are there significant diminishing marginal returns, then it's really not that big of a deal.

In general, intuitive deontological tendencies can be jerks sometimes, and utilitarianism fights that.

comment by Viliam_Bur · 2012-06-26T08:43:42.812Z · LW(p) · GW(p)

Thanks for the link, I am very underwhelmed too.

If I understand it correctly, one suggestion is equivalent to choosing some X, and re-scaling everyone's utility function so that X has value 1. The obvious problem is the arbitrary choice of X, and the fact that in some people's original scale X may have positive, negative, or zero value.

The other suggestion is equivalent to choosing a hypothetical person P with infinite empathy towards all people, and using the utility function of P as the absolute utility. I am not sure about this, but it seems to me that the result depends on P's own preferences, and this cannot be fixed, because without preferences there could be no empathy.

comment by private_messaging · 2012-06-26T08:00:28.677Z · LW(p) · GW(p)

And yet the problem is commonly ignored routinely and nonchalantly, even here, where people pride themselves on fearless and consistent reductionism.

Yes. To be honest, it looks like the local version of reductionism takes 'everything is reducible' in a declarative sense, declaring that the concepts it uses are reducible regardless of their actual reducibility.

Replies from: David_Gerard
comment by David_Gerard · 2012-06-26T13:35:04.089Z · LW(p) · GW(p)

Greedy reductionism.

Replies from: private_messaging
comment by private_messaging · 2012-06-26T15:05:37.367Z · LW(p) · GW(p)

Thanks! That's spot on. It's what I think many of those 'utility functions' here are. Number of paperclips in the universe, too. I haven't seen anything like that reduced to a formal definition of any kind.

The way humans actually decide on actions is by evaluating, in their world-model, the difference that the action would make - everything being very partial, depending on the available time. Probabilities are rarely usable in the world-model because the combinatorial space explodes very quickly (also, Bayesian propagation on arbitrary graphs is NP-complete, in the very practical sense of being computationally expensive). Hence there isn't some utility function deep inside governing the choices. Doing one's best is mostly about putting limited computing time to the best use.

Then there's some odd use of abstractions - like, every agent can be represented with a utility function, therefore whatever we say about utilities is relevant. Never mind that this utility function is trivial - 1 for doing what the agent chooses, 0 otherwise - and everything just gets tautological.

comment by Ghatanathoah · 2012-09-29T06:56:40.691Z · LW(p) · GW(p)

(If someone believes that there is a way how these interpersonally comparable utilities could actually be grounded in physical reality, I'd be extremely curious to hear it.)

I wonder if I am misunderstanding what you are asking, because interpersonal utility comparison seems like an easy thing that people do every day, using our inborn systems for sympathy and empathy.

When I am trying to make a decision that involves the conflicting desires of myself and another person; I generally use empathy to put myself in their shoes and try to think about desires that I have that are probably similar to theirs. Then I compare how strong those two desires of mine are and base my decision on that. Now, obviously I don't make all ethical decisions like that, there are many where I just follow common rules of thumb. But I do make some decisions in this fashion, and it seems quite workable, the more fair-minded of my acquaintances don't really complain about it unless they think I've made a mistake. Obviously it has scaling problems when attempting to base any type of utilitarian ethics on it, but I don't think they are insurmountable.

Now, of course you could object that this method is unreliable, and ask whether I really know for sure if other people's desires are that similar to mine. But this seems to me to just be a variant of the age-old problem of skepticism and doesn't really deserve any more attention than the possibility that all the people I meet are illusions created by an evil demon. It's infinitesimally possible that everyone I know doesn't really have mental states similar to mine at all, that in fact they are all really robot drones controlled by a non-conscious AI that is basing their behavior on a giant lookup table. But it seems much more likely that other people are conscious human beings with mental states similar to mine that can be modeled and compared via empathy, and that this allows me to compare their utilities.

In fact, it's hard to understand how empathy and sympathy could have evolved if they weren't reasonably good at interpersonal utility comparison. If interpersonal utility comparison was truly impossible then anyone who tried to use empathy to inform their behavior towards others would end up being disastrously wrong at figuring out how to properly treat others, find themselves grievously offending the rest of their tribe, and would hence likely have their genes for empathy selected against. It seems like if interpersonal utility comparison was impossible humans would have never evolved the ability or desire to make decisions based on empathy.

I am also curious as to why you refer to utility as "ghostly." It seems to me that utility is commonly defined as the sum of the various desires and feelings that people have. Desires and feelings are computations and other processes in our brains, which are very solid real physical objects. So it seems like utility is at least as real as software. Of course, it's entirely possible that you are using the word "utility" to refer to a slightly different concept than I am and that is where my confusion is coming from.

comment by Lukas_Gloor · 2012-06-26T13:40:59.045Z · LW(p) · GW(p)

This, in my opinion, is by itself a decisive argument against utilitarianism.

You mean against preference-utilitarianism.

The vast majority of utilitarians I know are hedonistic utilitarians, where this criticism doesn't apply at all. (For some reason LW seems to be totally focused on preference-utilitarianism, as I've noticed by now.) As for the criticism itself: I agree! Preference-utilitarians can come up with sensible estimates and intuitive judgements, but when you actually try to show that in theory there is one right answer, you just find a huge mess.

Replies from: Jayson_Virissimo, Vladimir_M, shminux
comment by Jayson_Virissimo · 2012-06-27T03:10:56.550Z · LW(p) · GW(p)

I agree. I'm fairly confident that, within the next several decades, we will have the technology to accurately measure and sum hedons and that hedonic utilitarianism can escape the conceptual problems inherent in preference utilitarianism. On the other hand, I do not want to maximize (my) hedons (for these kinds of reasons, among others).

Replies from: CarlShulman, David_Gerard
comment by CarlShulman · 2012-06-29T05:07:59.136Z · LW(p) · GW(p)

we will have the technology to accurately measure and sum hedons

Err...what? Technology will tell you things about how brains (and computer programs) vary, but not which differences to count as "more pleasure" or "less pleasure." If evaluations of pleasure happen over 10x as many neurons, is there 10x as much pleasure? Or is it the causal-functional role pleasure plays in determining the behavior of a body? What if we connect many brains or programs to different sorts of virtual bodies? Probabilistically?

A rule to get a cardinal measure of pleasure across brains is going to require almost as much specification as a broader preference measure. Dualists can think of this as guesstimating "psychophysical laws" and physicalists can think of it as seeking reflective equilibrium in our stances towards different physical systems, but it's not going to be "read out" of neuroscience without deciding a bunch of evaluative (or philosophy of mind) questions.

Replies from: torekp
comment by torekp · 2012-06-29T23:55:07.950Z · LW(p) · GW(p)

but it's not going to be "read out" of neuroscience without deciding a bunch of evaluative (or philosophy of mind) questions.

Sure, but I don't think we can predict that there will be a lot of room for deciding those philosophy of mind questions whichever way one wants to. One simply has to wait for the research results to come in. With more data to constrain the interpretations, the number and spread of plausible stable reflective equilibria might be very small.

I agree with Jayson that it is not mandatory or wise to maximize hedons. And that is because hedons are not the only valuable things. But they do constitute one valuable category. And in seeking them, the total utilitarians are closer to the right approach than the average utilitarians (I will argue in a separate reply).

comment by David_Gerard · 2012-06-27T10:45:45.412Z · LW(p) · GW(p)

I'm fairly confident that, within the next several decades, we will have the technology to accurately measure and sum hedons

OK, I've got to ask: what's your confidence based in, in detail? It's not clear to me that "sum hedons" even means anything.

comment by Vladimir_M · 2012-06-27T01:09:16.812Z · LW(p) · GW(p)

Why do you believe that interpersonal comparison of pleasure is straightforward? To me this doesn't seem to be the case.

Replies from: Lukas_Gloor, Mark_Lu
comment by Lukas_Gloor · 2012-06-27T02:50:06.984Z · LW(p) · GW(p)

Is intrapersonal comparison possible? Personal boundaries don't matter for hedonistic utilitarianism; they only matter insofar as you may have spatio-temporally connected clusters of hedons (lives). The difficulties in comparison seem to be of an empirical nature, not a fundamental one (unlike the problems with preference-utilitarianism). If we had a good enough theory of consciousness, we could quantitatively describe the possible states of consciousness and their hedonic tones. Or not?

One common argument against hedonistic utilitarianism is that there are "different kinds of pleasures", and that they are "incommensurable". But if that were the case, it would be irrational to accept a trade-off of the lowest pleasure of one sort for the highest pleasure of another sort, and no one would actually claim that. So even if pleasures "differ in kind", there'd be an empirical trade-off value based on how pleasant the hedonic states actually are.

comment by Mark_Lu · 2012-06-27T09:00:53.703Z · LW(p) · GW(p)

Because people are running on similar neural architectures? So all people would likely experience similar pleasure from e.g. some types of food (though not necessarily identical). The more we understand about how different types of pleasure are implemented by the brain, the more precisely we'd be able to tell whether two people are experiencing similar levels/types of pleasure. When we get to brain simulations these might get arbitrarily precise.

Replies from: Vladimir_M
comment by Vladimir_M · 2012-06-27T14:59:03.296Z · LW(p) · GW(p)

You make it sound as if there is some signal or register in the brain whose value represents "pleasure" in a straightforward way. To me it seems much more plausible that "pleasure" reduces to a multitude of variables that can't be aggregated into a single-number index except through some arbitrary convention. This seems to me likely even within a single human mind, let alone when different minds (especially of different species) are compared.

That said, I do agree that the foundation of pure hedonic utilitarianism is not as obviously flawed as that of preference utilitarianism. The main problem I see with it is that it implies wireheading as the optimal outcome.

Replies from: Lukas_Gloor, TheOtherDave
comment by Lukas_Gloor · 2012-06-27T17:50:35.590Z · LW(p) · GW(p)

The main problem I see with it is that it implies wireheading as the optimal outcome.

Or the utilitronium shockwave, rather. Which doesn't even require minds to wirehead anymore, but simply converts matter into maximally efficient bliss simulations. I used to find this highly counterintuitive, but after thinking about all the absurd implications of valuing preferences instead of actual states of the world, I've come to think of it as a perfectly reasonable thing.

comment by TheOtherDave · 2012-06-27T15:30:53.665Z · LW(p) · GW(p)

The main problem I see with it is that it implies wireheading as the optimal outcome.

AFAICT, it only does so if we assume that the environment can somehow be relied upon to maintain the wireheading environment optimally even though everyone is wireheading.

Failing that assumption, it seems preferable (even under pure hedonic utilitarianism) for some fraction of total experience to be non-wireheading, but instead devoted to maintaining and improving the wireheading environment. (Indeed, it might even be preferable for that fraction to approach 100%, depending on the specifics of the environment.)

I suspect that, if that assumption were somehow true, and we somehow knew it was true (I have trouble imagining either scenario, but OK), most humans would willingly wirehead.

comment by Shmi (shminux) · 2012-06-26T20:36:20.435Z · LW(p) · GW(p)

Hedonistic utilitarianism ("what matters is the aggregate happiness") runs into the same repugnant conclusion.

Replies from: Lightwave, Lukas_Gloor
comment by Lightwave · 2012-06-26T20:48:26.857Z · LW(p) · GW(p)

But this happens exactly because interpersonal (hedonistic) utility comparison is possible.

Replies from: shminux
comment by Shmi (shminux) · 2012-06-26T21:25:11.994Z · LW(p) · GW(p)

Right, if you cannot compare utilities, you are safe from the repugnant conclusion.

On the other hand, this is not very useful instrumentally, as a functioning society necessarily requires arbitration of individual wants. Thus some utilities must be comparable, even if others might not be. Finding a boundary between the two runs into the standard problem of two nearly identical preferences being qualitatively different.

comment by Lukas_Gloor · 2012-06-26T20:53:30.886Z · LW(p) · GW(p)

Yes but it doesn't have the problem Vladimir_M described above, and it can bite the bullet in the repugnant conclusion by appealing to personal identity being an illusion. Total hedonistic utilitarianism is quite hard to argue against, actually.

Replies from: shminux, CarlShulman
comment by Shmi (shminux) · 2012-06-26T21:43:30.936Z · LW(p) · GW(p)

As I mentioned in the other reply, I'm not sure how a society of total hedonistic utilitarians would function without running into the issue of nearly identical but incommensurate preferences.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2012-06-26T22:28:23.833Z · LW(p) · GW(p)

Hedonistic utilitarianism is not about preferences at all. It's about maximizing happiness, whatever the reason or substrate for it. The utilitronium shockwave would be the best scenario for total hedonistic utilitarianism.

Replies from: shminux
comment by Shmi (shminux) · 2012-06-27T01:18:12.787Z · LW(p) · GW(p)

Maybe I misunderstand how total hedonistic utilitarianism works. Don't you ever construct an aggregate utility function?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2012-06-27T01:56:01.476Z · LW(p) · GW(p)

No, nothing of that sort. You just take the surplus of positive hedonic states over negative ones and try to maximize that. Interpersonal boundaries become irrelevant, in fact many hedonistic utilitarians think that the concept of personal identity is an illusion anyway. If you consider utility functions, then that's preference utilitarianism or something else entirely.

Replies from: shminux
comment by Shmi (shminux) · 2012-06-27T17:06:46.319Z · LW(p) · GW(p)

You just take the surplus of positive hedonic states over negative ones and try to maximize that.

How is that not an aggregate utility function?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2012-06-27T17:23:37.634Z · LW(p) · GW(p)

Utilons aren't hedons. You have one simple utility function that states you should maximize happiness minus suffering. That's similar to maximizing paperclips, and it avoids the problems discussed above that preference utilitarianism has, namely how interpersonally differing utility functions should be compared to each other.

Replies from: David_Gerard, shminux
comment by David_Gerard · 2012-06-29T07:29:03.207Z · LW(p) · GW(p)

You still seem to be claiming that (a) you can calculate a number for hedons (b) you can do arithmetic on this number. This seems problematic to me for the same reason as doing these things for utilons. How do you actually do (a) or (b)? What is the evidence that this works in practice?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2012-06-29T14:53:25.339Z · LW(p) · GW(p)

I don't claim that I, or anyone else, can do that right now. I'm saying there doesn't seem to be a fundamental reason why that would have to remain impossible forever. Why do you think it will remain impossible forever?

As for (b), I don't even see the problem. If (a) works, then you just do simple math after that. In case you're worried about torture and dust specks not working out, check out section VI of this paper.

And regarding (a), here's an example that approximates the kind of solutions we seek: In anti-depression drug tests, the groups with the actual drug and the control group have to fill out self-assessments of their subjective experiences, and at the same time their brain activity and behavior is observed. The self-reports correlate with the physical data.

Replies from: TheOtherDave, David_Gerard
comment by TheOtherDave · 2012-06-29T15:27:05.817Z · LW(p) · GW(p)

I can't speak for David (or, well, I can't speak for that David), but for my own part, I'm willing to accept for the sake of argument that the happiness/suffering/whatever of individual minds is intersubjectively commensurable, just like I'm willing to accept for the sake of argument that people have "terminal values" which express what they really value, or that there exist "utilons" that are consistently evaluated across all situations, or a variety of other claims, despite having no evidence that any such things actually exist. I'm also willing to assume spherical cows, frictionless pulleys, and perfect vacuums for the sake of argument.

But the thing about accepting a claim for the sake of argument is that the argument I'm accepting it for the sake of has to have some payoff that makes accepting it worthwhile. As far as I can tell, the only payoff here is that it lets us conclude "hedonic utilitarianism is better than all other moral philosophies." To me, that payoff doesn't seem worth the bullet you're biting by assuming the existence of intersubjectively commensurable hedons.

The self-reports correlate with the physical data.

If someone were to demonstrate a scanning device whose output could be used to calculate a "hedonic score" for a given brain across a wide range of real-world brains and brainstates without first being calibrated against that brain's reference class, and that hedonic score could be used to reliably predict the self-reports of that brain's happiness in a given moment, I would be surprised and would change my mind about both the degree of variation of cognitive experience and the viability of intersubjectively commensurable hedons.

If you're claiming this has actually been demonstrated, I'd love to see the study; everything I've ever read about has been significantly narrower than that.

If you're merely claiming that it's in principle possible that we live in a world where this could be demonstrated, I agree that it's in principle possible, but see no particular evidence to support the claim that we do.

Replies from: David_Gerard
comment by David_Gerard · 2012-06-29T16:51:12.137Z · LW(p) · GW(p)

If you're merely claiming that it's in principle possible that we live in a world where this could be demonstrated, I agree that it's in principle possible, but see no particular evidence to support the claim that we do.

Well, yes. The main attraction of utilitarianism appears to be that it makes the calculation of what to do easier. But its assumptions appear ungrounded.

comment by David_Gerard · 2012-06-29T16:50:12.347Z · LW(p) · GW(p)

But what makes you think you can just do simple math on the results? And which simple math - addition, adding the logarithms, taking the average or what? What adds up to normality?

comment by Shmi (shminux) · 2012-06-27T22:10:40.238Z · LW(p) · GW(p)

Utilons aren't hedons.

Thanks for the link. I still cannot figure out why utilons are not convertible to hedons, and even if they aren't, why a mixed utilon/hedon maximizer isn't susceptible to Dutch booking. Maybe I'll look through the logic again.

comment by CarlShulman · 2012-06-29T05:12:17.777Z · LW(p) · GW(p)

Hedonism doesn't specify what sorts of brain states and physical objects have how much pleasure. There are a bewildering variety of choices to be made in cashing out a rule to classify which systems are how "happy." Just to get started, how much pleasure is there when a computer running simulations of happy human brains is sliced in the ways discussed in this paper?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2012-06-29T14:38:41.916Z · LW(p) · GW(p)

But aren't those empirical difficulties, not fundamental ones? Don't you think there's a fact of the matter that will be discovered if we keep gaining more and more knowledge? Empirical problems can't bring down an ethical theory, but if you can show that there exists a fundamental weighting problem, then that would be valid criticism.

Replies from: CarlShulman
comment by CarlShulman · 2012-06-29T18:06:27.107Z · LW(p) · GW(p)

But aren't those empirical difficulties, not fundamental ones?

What sort of empirical fact would you discover that would resolve that? A detector for happiness radiation? The scenario in that paper is pretty well specified.

comment by steven0461 · 2012-06-25T19:29:37.586Z · LW(p) · GW(p)

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and create/give birth to another being of comparable happiness.

I stopped reading here. To me, "total utilitarianism" means maximizing the sum of the values of individual lives. There's nothing forcing a total utilitarian to value a life by adding the happiness experienced in each moment of the life, without further regard to how the moments fit together (e.g. whether they fulfill someone's age-old hopes).

In general, people seem to mean different things by "utilitarianism", so any criticism needs to spell out what version of utilitarianism it's attacking, and acknowledge that the particular version of utilitarianism may not include everyone who self-identifies as a utilitarian.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2012-06-25T20:28:43.761Z · LW(p) · GW(p)

But isn't the "values of individual lives" preference-utilitarianism (which often comes as prior-existence instead of "total")? I'm confused; it seems like there are several definitions circulating. I haven't encountered this kind of total utilitarianism on the felicifia utilitarianism forum. The quoted conclusion about killing people and replacing people is accurate, according to the definition that is familiar to me.

Replies from: steven0461
comment by steven0461 · 2012-06-25T20:36:57.566Z · LW(p) · GW(p)

But isn't the "values of individual lives" preference-utilitarianism

Not unless the value of a life is proportional to the extent to which the person's preferences are satisfied.

The quoted conclusion about killing people and replacing people is accurate, according to the definition that is familiar to me.

What would you call the view I mentioned, if not total utilitarianism?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2012-06-25T20:47:10.251Z · LW(p) · GW(p)

Sounds like total preference-utilitarianism, instead of total hedonistic utilitarianism. Would this view imply that it is good to create beings whose preferences are satisfied? If yes, then it's total PU. If no, then it might be prior-existence PU. The original article doesn't specify explicitly whether it means hedonistic or preference utilitarianism, but the example given about killing only works for hedonistic utilitarianism, which is why I assumed that this is what's meant. However, somewhere else in the article, it says

Total utilitarianism is defined as maximising the sum of everyone's individual utility function.

And that seems more like preference-utilitarianism again. So something doesn't work out here.

As a side note, I've actually never encountered a total preference-utilitarian, only prior-existence ones (like Peter Singer). But it's a consistent position.

Replies from: steven0461
comment by steven0461 · 2012-06-25T21:51:52.933Z · LW(p) · GW(p)

But it's not preference utilitarianism. In evaluating whether someone leads a good life, I care about whether they're happy, and I care about whether their preferences are satisfied, but those aren't the only things I care about. For example, I might think it's a bad thing if a person lives the same day over and over again, even if it's what the person wants and it makes the person happy. (Of course, it's a small step from there to concluding it's a bad idea when different people have the same experiences, and that sort of value is hard to incorporate into any total utilitarian framework.)

Replies from: Will_Newsome, Lukas_Gloor
comment by Will_Newsome · 2012-06-27T01:53:29.273Z · LW(p) · GW(p)

I think you might want to not call your ethical theory utilitarianism. Aquinas' ethics also emphasize the importance of the common welfare and loving thy neighbor as thyself, yet AFAIK no one calls his ethics utilitarian.

Replies from: steven0461, steven0461
comment by steven0461 · 2012-06-27T04:15:33.836Z · LW(p) · GW(p)

I think maybe the purest statement of utilitarianism is that it pursues "the greatest good for the greatest number". The word "for" is important here. Something that improves your quality of life is good for you. Clippy might think (issues of rigid designators in metaethics aside) that paperclips are good without having a concept of whether they're good for anyone, so he's a consequentialist but not a utilitarian. An egoist has a concept of things being good for people, but chooses only those things that are good for himself, not for the greatest number; so an egoist is also a consequentialist but not a utilitarian. But there's a pretty wide range of possible concepts of what's good for an individual, and I think that entire range should be compatible with the term "utilitarian".

comment by steven0461 · 2012-06-27T02:29:23.326Z · LW(p) · GW(p)

It doesn't make sense to me to count maximization of total X as "utilitarianism" if X is pleasure or if X is preference satisfaction but not if X is some other measure of quality of life. It doesn't seem like that would cut reality at the joints. I don't necessarily hold the position I described, but I think most criticisms of it are misguided, and it's natural enough to deserve a short name.

comment by Lukas_Gloor · 2012-06-25T22:35:44.853Z · LW(p) · GW(p)

I see, interesting. That means you bring in a notion independent of both the person's experiences and preferences. You bring in a particular view on value (e.g. that life shouldn't be repetitious). I'd just call this a consequentialist theory where the exact values would have to be specified in the description, instead of utilitarianism. But that's just semantics; as you said initially, it's important that we specify what exactly we're talking about.

comment by bryjnar · 2012-06-25T17:54:11.757Z · LW(p) · GW(p)

A utility function does not compel total (or average) utilitarianism

Does anyone actually think this? Thinking that utility functions are the right way to talk about rationality !=> utilitarianism. Or any moral theory, as far as I can tell. I don't think I've seen anyone on LW actually arguing that implication, although I think most would affirm the antecedent.

There is a seemingly sound argument for the repugnant conclusion, which goes some way towards making total utilitarianism plausible. It goes like this... If all these steps increase the quality of the outcome (and it seems intuitively that they do), then the end state must be better than the starting state, agreeing with total utilitarianism

This is the complete opposite of what I'd understood the point of that argument to be: as I understand it, it's claimed that the final state is clearly not of high utility, and so there is something wrong with total utilitarianism. Which is fine for what you're arguing, but you seem to have taken it a bit the wrong way around.

As for the mathematical rigour, there are some very nice impossibility theorems proved by Arrhenius (example) that make the kind of worries exemplified by the repugnant conclusion a lot more precise. They don't even require the problematic assumptions about utility functions that you point out: they're just about axiology (ranking possible outcomes). So they're actually independent problems for utilitarians.

I think a lot of the reason that utilitarians don't tend to feel terribly worried about the difficulty of interpersonal utility calculations is that we already do them. Every time you decide to let someone else have the last cookie because they'll enjoy it more, you just did a little IUC. Obviously, it's pretty unclear how to scale that up, but it gives a strong feeling that it ought to be possible, somehow.

comment by Lukas_Gloor · 2012-06-25T20:18:10.937Z · LW(p) · GW(p)

What seems to be overlooked in most discussions about total hedonistic utilitarianism is that the proponents often have a specific (Parfitean) view about personal identity, which leads to either empty or open individualism. Based on that, they may hold that it is no more rational to care about one's own future self than it is to care about any other future self. "Killing" a being would then just be failing to let a new moment of consciousness come into existence. And any notions of "preferences" would not really make sense anymore, except instrumentally.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-06-27T09:34:28.186Z · LW(p) · GW(p)

I'm increasingly coming to hold this view, where the amount and quality of experience-moments is all that matters, and I'm glad to see someone else spell it out.

comment by wedrifid · 2012-06-27T13:55:58.599Z · LW(p) · GW(p)

A smaller critique of total utilitarianism:

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness (or preference satisfaction or welfare).

You can just finish there.

(In case the "sufficient cause to reject total utilitarianism" isn't clear: I don't like murder. Total utilitarianism advocates it in all sorts of scenarios that I would not. Therefore, total utilitarianism is Evil.)

Replies from: Stuart_Armstrong, private_messaging
comment by Stuart_Armstrong · 2012-06-28T11:42:49.421Z · LW(p) · GW(p)

You can just finish there.

:-) I kinda did. The rest was just "there are no strong countervailing reasons to reject that intuition".

Replies from: wedrifid
comment by wedrifid · 2012-06-28T12:59:52.764Z · LW(p) · GW(p)

:-) I kinda did. The rest was just "there are no strong countervailing reasons to reject that intuition".

Excellent post then. I kind of stopped after the first line so I'll take your word for the rest!

comment by private_messaging · 2012-06-27T20:44:06.010Z · LW(p) · GW(p)

Agreed completely. This goes for any utilitarianism where the worth of changing from state A to state B is f(B)-f(A) . Morality is about transitions; even hedonism is, as happiness is nothing if it is frozen solid.

Replies from: army1987
comment by A1987dM (army1987) · 2012-06-27T21:02:23.079Z · LW(p) · GW(p)

happiness is nothing if it is frozen solid

I'd take A and B in the equation above to include momentums as well as positions? :-)

Replies from: private_messaging
comment by private_messaging · 2012-06-27T21:10:44.565Z · LW(p) · GW(p)

That's a good escape, but only for specific laws of physics... what do you do about a brain sim on a computer? It has multiple CPUs calculating the next state from the current state in parallel; it doesn't care how the CPU is physically implemented, but it does care how many experience-steps it has. edit: i.e., I mean that a transition from one happy state to another, equally happy state is what a moment of being happy is about. Total utilitarianism boils down to assigning zero utility to an update pass on a happy brain sim. It's completely broken. edit: and with simple workarounds, it boils down to zero utility for switching the current/next state arrays, so that you sit in a loop recalculating the same next state from a static current state.
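
A minimal sketch of that point (my own toy model; the hedonic-score function f and the state representation are assumptions): if the worth of a step is f(next) - f(current), an update pass that leaves the sim equally happy is credited with exactly zero value, however many happy moments it computes.

```python
# Toy brain sim: each tick computes the next state. The hedonic score f()
# stays constant because the sim remains equally happy throughout.
def f(state):
    return state["happiness"]          # assumed hedonic-score function

def step(state):
    return {"happiness": state["happiness"], "tick": state["tick"] + 1}

state = {"happiness": 10.0, "tick": 0}
credited_value = 0.0
for _ in range(1000):                  # 1000 happy experience-steps
    nxt = step(state)
    credited_value += f(nxt) - f(state)    # the f(B) - f(A) rule
    state = nxt

print(credited_value)   # 0.0 -- a thousand happy moments, zero credited value
```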

comment by Larks · 2012-06-25T20:08:50.437Z · LW(p) · GW(p)

Well, one reason is that there are models that make use of something like total utilitarianism to great effect. Classical economic theory, for instance, models everyone as perfectly rational expected utility maximisers.

I think you actually slightly understate the case against Utilitarianism. Yes, Classical Economics uses expected utility maximisers - but it prefers to deal with Pareto Improvements (or Kaldor-Hicks improvements) rather than try to do inter-personal utility comparisons.

comment by Lukas_Gloor · 2012-06-25T19:55:54.611Z · LW(p) · GW(p)

Total utilitarianism is defined as maximising the sum of everyone's individual utility function.

That seems misleading. Most of the time "total utilitarianism" refers to what should actually be called "hedonistic total utilitarianism". And what is maximized there is the surplus of happiness over suffering (positive hedonic states over negative ones), which isn't necessarily synonymous with individual utility functions.

There are three different parameters for the various kinds of utilitarianism: It can either be total or average or prior-existence. Then it can be negative or classical (and in theory also "positive", even though that would be insane, forcing people to accept eternal torture if there's even the slightest chance of a moment of happiness). And then utilitarianism can also be hedonistic or preference. Most common, and the subject of this article, is (classical) total hedonistic utilitarianism. While some combinations make very little sense, a lot of them actually have advocates. (For instance, recently someone published a paper advocating "negative average preference-utilitarianism".)

Replies from: endoself
comment by endoself · 2012-06-25T20:47:07.349Z · LW(p) · GW(p)

and in theory also "positive", even though that would be insane, forcing people to accept eternal torture if there's even the slightest chance of a moment of happiness

There exist people who profess that they would choose to be tortured for the rest of their lives with no chance of happiness rather than be killed instantly, so this intuition could be more than a theoretical possibility. People tend to be surprised by the extent to which intuitions differ.

comment by Scott Alexander (Yvain) · 2012-06-28T04:19:09.686Z · LW(p) · GW(p)

Upvoted, but as someone who, without quite being a total utilitarian, at least hopes someone might be able to rescue total utilitarianism, I don't find much to disagree with here. Points 1, 4, 5, and 6 are arguments against certain claims that total utilitarianism should be obviously true, but not arguments that it doesn't happen to be true.

Point 2 states that total utilitarianism won't magically implement itself and requires "technology" rather than philosophy; that is, people have to come up with specific contingent techniques of estimating utility, rather than just reading it off via a simple method which can be proven mathematically perfect. But we have some Stone Age utility-comparing technologies like money and the popular vote, and QALYs might be metaphorically a Bronze Age technology. I suppose I take it on faith that there's a lot of room for more advanced technology before we hit mathematical limits.

That leaves the introductory paragraph and Point 3 as the only places I still disagree:

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness (or preference satisfaction or welfare).

In hedonic utilitarianism, yes. Are you making this claim for preference utilitarianism as well? If so, on what basis? If we don't give credit for creating potential people, isn't most people's preference not to be killed enough to stop preference utilitarians from killing them?

And you also have to be certain that your theory does not allow path dependency. One can take the perfectly valid position that "If there were an existing poorer population, then the right thing to do would be to redistribute wealth, and thus lose the last copy of Akira. However, currently there is no existing poor population, hence I would oppose it coming into being, precisely because it would result in the loss of Akira." You can reject this type of reasoning, and a variety of others that block the repugnant conclusion at some stage of the chain (the Stanford Encyclopaedia of Philosophy has a good entry on the Repugnant Conclusion and the arguments surrounding it). But most reasons for doing so already pre-suppose total utilitarianism. In that case, you cannot use the above as an argument for your theory.

Can you explain this further? If we don't allow potential people to carry weight, and if we are preference rather than hedonic utilitarians, then the only thing we are checking when deciding to create all these new people is whether or not existing people would prefer to do so.

The fact that the repugnant conclusion has "repugnant" right in the name suggests that most people don't want it. Therefore if total utilitarianism is about satisfying the preferences of as many people as possible as much as possible, and it results in a conclusion nobody prefers, that should be a red flag.

If existing people understand the repugnant conclusion, then they will understand that a likely consequence of creating all these people is that the world loses most of its culture and happiness, and when we aggregate their preferences they will vote against doing so.

So I don't see what you mean when you say this reasoning "pre-supposes total utilitarianism". It presupposes people's intuitive moral preference for a happy world full of culture over a just-barely-not-unhappy world without one, and it pretends we can solve the aggregation problem, but where's the vicious self-reference?

Replies from: Lukas_Gloor, Stuart_Armstrong, None
comment by Lukas_Gloor · 2012-06-28T15:03:25.755Z · LW(p) · GW(p)

If we don't allow potential people to carry weight, and if we are preference rather than hedonic utilitarians, then the only thing we are checking when deciding to create all these new people is whether or not existing people would prefer to do so.

That's Peter Singer's view, prior-existence instead of total. A problem here seems to be that creating a being in intense suffering would be ethically neutral, and if even the slightest preference for doing so exists, and if there were no resource trade-offs in regard to other preferences, then creating that miserable being would be the right thing to do. One can argue that in the first millisecond after creating the miserable being, one would be obliged to kill it, and that, foreseeing this, one ought not have created it in the first place. But that seems not very elegant. And one could further imagine creating the being somewhere unreachable, where it's impossible to kill it afterwards.

One can avoid this conclusion by axiomatically stating that it is bad to bring into existence a being with a "life not worth living". But that still leaves problems: for one thing, it seems ad hoc, and for another, it would then not matter whether one brings a happy child into existence or one with a neutral life, and that again seems highly counterintuitive.

The only way to solve this, as I see it, is to count all unsatisfied preferences negatively. You'd end up with negative total preference-utilitarianism, which usually has quite strong reasons against bringing beings into existence. Depending on how much pre-existing beings want to have children, it wouldn't necessarily entail complete anti-natalism, but the overall goal would at some point be a universe without unsatisfied preferences. Or is there another way out?

Replies from: Ghatanathoah, Mark_Lu, Ghatanathoah, Yvain
comment by Ghatanathoah · 2013-09-15T06:38:13.139Z · LW(p) · GW(p)

The only way to solve this, as I see it, is to count all unsatisfied preferences negatively. You'd end up with negative total preference-utilitarianism, which usually has quite strong reasons against bringing beings into existence.

A potential major problem with this approach has occurred to me, namely, the fact that people tend to have infinite or near infinite preferences. We always want more. I don't see anything wrong with that, but it does create headaches for the ethical system under discussion.

The human race's insatiable desires make negative total preference-utilitarianism vulnerable to an interesting variant of the various problems of infinity in ethics. Once you've created a person, who then dies, it is impossible to do any more harm. There's already an infinite amount of unsatisfied preferences in the world from their existence and death. Creating more people will result in the same total amount of unsatisfied preferences as before: infinity. This would render negative utilitarianism always indifferent to whether one should create more people, which is obviously not what we want.

Even if you posit that our preferences are not infinite, but merely very large, this still runs into problems. I think most people, even anti-natalists, would agree that it is sometimes acceptable to create a new person in order to prevent the suffering of existing people. For instance, I think even an anti-natalist would be willing to create one person who will live a life with what an upper-class 21st-century American would consider a "normal" amount of suffering, if doing so would prevent 7 billion people from being tortured for 50 years. But if you posit that the new person has a very large, but not infinite, number of preferences (say, a googol), then it's still possible for the badness of creating them to outweigh the torture of all those people. Again, not what we want.
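
To see the arithmetic behind that worry, a rough sketch (every number below is an assumption of mine, chosen only to show the shape of the problem, not a real estimate):

```python
# Assumed figures, for illustration only.
unsatisfied_prefs_new_person = 10**100        # "a googol" of preferences the new life leaves unsatisfied
people_tortured = 7 * 10**9                   # 7 billion existing people
prefs_frustrated_per_victim = 10**12          # generous assumed count per 50 years of torture

harm_of_creating = unsatisfied_prefs_new_person
harm_of_torture = people_tortured * prefs_frustrated_per_victim   # 7e21

# Counting all unsatisfied preferences negatively, creating the one person
# comes out astronomically worse than torturing everyone -- the verdict
# being objected to above.
print(harm_of_creating > harm_of_torture)     # True
print(harm_of_creating // harm_of_torture)    # ~1.4e78
```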

Hedonic negative utilitarianism doesn't have this problem, but it's even worse: it implies we should painlessly kill everyone ASAP! Since most anti-natalists I know believe death to be a negative thing, rather than a neutral thing, they must be at least partial preference utilitarians.

Now, I'm sure that negative utilitarians have some way around this problem. There wouldn't be so many passionate advocates for it if it could be killed by a logical conundrum like this. But I can't find any discussion of this problem after doing some searching on the topic. I'm really curious to know what the proposed solution is, and would appreciate it if someone told me.

comment by Mark_Lu · 2012-06-28T20:30:19.732Z · LW(p) · GW(p)

A problem here seems to be that creating a being in intense suffering would be ethically neutral

Well, don't existing people have a preference that there not be such creatures? You can have preferences that are about other people, right?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2012-06-28T20:47:12.705Z · LW(p) · GW(p)

Sure, existing people tend to have such preferences. But hypothetically it's possible that they wouldn't, and the mere possibility is enough to bring down an ethical theory if you can show that it would generate absurd results.

Replies from: Mark_Lu
comment by Mark_Lu · 2012-06-28T21:10:44.405Z · LW(p) · GW(p)

This might be one reason why Eliezer talks about morality as a fixed computation.

P.S. Also, doesn't the being itself have a preference for not-suffering?

comment by Ghatanathoah · 2012-09-17T06:39:29.714Z · LW(p) · GW(p)

Or is there another way out?

One possibility might be phrasing it as "Maximize preference satisfaction for everyone who exists and ever will exist, but not for everyone who could possibly exist."

This captures the intuition that it is bad to create people who have low levels of preference satisfaction, even if they don't exist yet and hence can't object to being created, while preserving the belief that existing people have a right to not create new people whose existence would seriously interfere with their desires. It does this without implying anti-natalism. I admit that the phrasing is a little clunky and needs refinement, and I'm sure a clever enough UFAI could find some way to screw it up, but I think it's a big step towards resolving the issues you point out.

EDIT: Another possibility that I thought of is setting "creating new worthwhile lives" and "improving already worthwhile lives" as two separate values that have diminishing returns relative to each other. This is still vulnerable to some forms of repugnant-conclusion-type arguments, but it totally eliminates what I think is the most repugnant aspect of the RC - the idea that a Malthusian society might be morally optimal.
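
One way to cash out "two separate values that have diminishing returns relative to each other" (a minimal sketch with an assumed logarithmic form and an assumed resource budget; not the actual proposal above): under concave returns, pouring all resources into creating new lives is never optimal, which is what blocks the Malthusian corner.

```python
import math

BUDGET = 100.0   # assumed total resources, arbitrary units

def total_value(x):
    # x        = resources spent creating new worthwhile lives
    # BUDGET-x = resources spent improving already worthwhile lives
    # log1p gives each value diminishing returns, so neither swallows the other.
    return math.log1p(x) + math.log1p(BUDGET - x)

best_x = max((i / 10 for i in range(0, 1001)), key=total_value)
print(best_x)               # 50.0  -- a mix of both values wins
print(total_value(best_x))  # ~7.86
print(total_value(BUDGET))  # ~4.62 -- "spend everything on more lives" loses
```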

comment by Scott Alexander (Yvain) · 2012-06-29T00:19:07.519Z · LW(p) · GW(p)

Thank you. Apparently total utilitarianism really is scary, and I had routed around it by replacing it with something more useable and assuming that was what everyone else meant when they said "total utilitarianism".

comment by Stuart_Armstrong · 2012-06-28T12:10:34.343Z · LW(p) · GW(p)

I suppose I take it on faith that there's a lot of room for more advanced technology before we hit mathematical limits.

Yes, yes, much progress can (and will) be made formalising our intuitions. But we don't need to assume ahead of time that the progress will take the form of "better individual utilities and definition of summation" rather than "other ways of doing population ethics".

In hedonic utilitarianism, yes. Are you making this claim for preference utilitarianism as well? If so, on what basis? If we don't give credit for creating potential people, isn't most people's preference not to be killed enough to stop preference utilitarians from killing them?

Yes, the act is not morally neutral in preference utilitarianism. In those cases, we'd have to talk about how many people we'd have to create with satisfiable preferences, to compensate for that one death. You might not give credit for creating potential people, but preference total utilitarianism gives credit for satisfying more preferences - and if creating more people is a way of doing this, then it's in favour.

If existing people understand the repugnant conclusion, then they will understand that a likely consequence of creating all these people is that the world loses most of its culture and happiness, and when we aggregate their preferences they will vote against doing so.

This is not preference total utilitarianism. It's something like "satisfying the maximal amount of preferences of currently existing people". In fact, it's closer to preference average utilitarianism (satisfy the current majority preference) than to total utilitarianism (probably not exactly that either; maybe a little more path dependency).

So I don't see what you mean when you say this reasoning "pre-supposes total utilitarianism".

Most reasons for rejecting the reasoning that blocks the repugnant conclusion pre-suppose total utilitarianism. Without the double negative: most justifications of the repugnant conclusion pre-suppose total utilitarianism.

Replies from: Mark_Lu
comment by Mark_Lu · 2012-06-28T12:58:27.712Z · LW(p) · GW(p)

preference total utilitarianism gives credit for satisfying more preferences - and if creating more people is a way of doing this, then it's in favour

Shouldn't we then just create people with simpler and easier to satisfy preferences so that there's more preference-satisfying in the world?

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2012-06-28T14:48:47.960Z · LW(p) · GW(p)

Indeed, that's a very counterintuitive conclusion. It's the reason why most preference-utilitarians I know hold a prior-existence view.

comment by [deleted] · 2012-07-09T21:57:56.400Z · LW(p) · GW(p)

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness (or preference satisfaction or welfare).

In hedonic utilitarianism, yes.

Even in hedonistic utilitarianism, it is an almost misleading simplification. There are crucial differences between killing a person and not birthing a new one: Most importantly, one is seen as breaking the social covenant of non-violence, while the other is not. One disrupts pre-existing social networks, the other does not. One destroys an experienced educated brain, the other does not. Endorsing one causes social distrust and strife in ways the other does not.

A better claim might be: It is morally neutral in hedonistic utilitarianism to create a perfect copy of a person and painlessly and unexpectedly destroy the original. It's a more accurate claim, and I personally would accept it.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2013-10-16T20:19:09.680Z · LW(p) · GW(p)

Even in hedonistic utilitarianism, it is an almost misleading simplification. There are crucial differences between killing a person and not birthing a new one: Most importantly, one is seen as breaking the social covenant of non-violence, while the other is not. One disrupts pre-existing social networks, the other does not. One destroys an experienced educated brain, the other does not. Endorsing one causes social distrust and strife in ways the other does not.

These are all practical considerations. Most people believe it is wrong in principle to kill someone and replace them with a being of comparable happiness. You don't see people going around saying:

"Look at that moderately happy person. It sure is too bad that it's impractical to kill them and replace them with a slightly happier person. The world would be a lot better if that were possible."

I also doubt that an aversion to violence is what prevents people from endorsing replacement either. You don't see people going around saying:

"Man, I sure wish that person would get killed in a tornado or a car accident. Then I could replace them without breaking any social covenants."

I believe that people reject replacement because they see it as a bad consequence, not because of any practical or deontological considerations. I wholeheartedly endorse such a rejection.

A better claim might be: It is morally neutral in hedonistic utilitarianism to create a perfect copy of a person and painlessly and unexpectedly destroy the original. It's a more accurate claim, and I personally would accept it.

The reason that claim seems acceptable is that, under many understandings of how personal identity works, if a copy of someone exists, they aren't really dead. You killed a piece of them, but there's still another piece left alive. As long as your memories, personality, and values continue to exist, you still live.

The OP makes it clear that what they mean is that total utilitarianism (hedonic and otherwise) maintains that it is morally neutral to kill someone and replace them with a completely different person who has totally different memories, personality, and values, providing the second person is of comparable happiness to the first. I believe any moral theory that produces this result ought to be rejected.

comment by Kaj_Sotala · 2012-06-25T22:56:54.751Z · LW(p) · GW(p)

This would deserve to be on the front page.

Replies from: army1987, Stuart_Armstrong
comment by A1987dM (army1987) · 2012-06-26T03:08:42.570Z · LW(p) · GW(p)

I agree.

ETA: Also, I expected a post with “(small)” in its title to be much shorter. :-)

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2012-06-26T13:36:25.047Z · LW(p) · GW(p)

Well, it did start shorter, then more details just added themselves. Nothing to do with me! :-)

comment by Stuart_Armstrong · 2012-06-26T12:30:39.635Z · LW(p) · GW(p)

Cheers, will move it.

comment by private_messaging · 2012-06-26T21:28:32.457Z · LW(p) · GW(p)

Here's how I see this issue (from a philosophical point of view):

Moral value is, in the most general form, a function of the state of a structure, for lack of a better word. The structure may be just 10 neurons in isolation, for which the moral worth may well be exactly zero, or it may be 7 billion blobs of about 10^11 neurons that communicate with each other, or it may be a lot of data on a hard drive representing a stored upload.

The moral value of two interconnected structures, in general, does not equal the sum of the moral values of each structure (example: a whole brain vs. a piece of a brain, a mind on redundant hardware). The moral value of the whole can (in general) be greater or less than the sum of the moral values of the parts. Note that I have not defined anything specific at all here; I have just stated very general considerations. We have developed somewhat ad hoc approximations to some sort of ideal moral worth.

edit: Note. The moral worth of an action is in general a function of the state without the action and the state with the action, not necessarily the difference between the moral worth of one state and the moral worth of the other.

The utilitarianism of the "N dust specks are worse than torture" variety takes as fundamental and ideal plenty of assumptions, such as that moral worth is additive, W(a, b) = W(a) + W(b). We have clear counter-examples to this when the parts are strongly interconnected (e.g. the two hemispheres of a brain) or correlated (doubly redundant hardware), though it may hold approximately for people, because they are not strongly interconnected. With very large N, a clearly broken premise is taken to its extreme and then proclaimed normative, while the approximations that aren't linear are proclaimed wrong.
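
A toy illustration of that additivity assumption and the redundant-hardware counter-example (the representation of minds as strings and the worth function are assumptions made purely for this sketch):

```python
# Two aggregation rules over a collection of mind-states.
def additive_worth(minds, worth_of):
    return sum(worth_of(m) for m in minds)            # W(a, b) = W(a) + W(b)

def dedup_worth(minds, worth_of):
    return sum(worth_of(m) for m in set(minds))       # exact duplicates counted once

def worth_of(_mind):
    return 1.0                    # assume each distinct mind-state is worth 1

two_people = ["alice", "bob"]     # weakly coupled, distinct minds
redundant_hw = ["alice", "alice"] # the same mind running on doubly redundant hardware

print(additive_worth(two_people, worth_of), dedup_worth(two_people, worth_of))      # 2.0 2.0
print(additive_worth(redundant_hw, worth_of), dedup_worth(redundant_hw, worth_of))  # 2.0 1.0
# The additive rule counts the redundant copy as a second person; whether that
# is right is exactly the contested assumption.
```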

comment by Shmi (shminux) · 2012-06-26T20:22:08.733Z · LW(p) · GW(p)

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness (or preference satisfaction or welfare).

Just wanted to note that this is too strong a statement. There is no requirement for the 1:1 ratio in "total utilitarianism". You end up with the "repugnant conclusion" to Parfit's "mere addition" argument as long as this ratio is finite (known as "birth-death asymmetry"). For example, one may argue that killing 1 person to save 5 equally happy people is wrong, because killing is wrong, but as long as there is a ratio they would agree with (or, more generally, an equivalent number of saved people for each number of killed people), the repugnant conclusion argument still goes through.

Replies from: Stuart_Armstrong, Lukas_Gloor
comment by Stuart_Armstrong · 2012-06-27T09:12:07.005Z · LW(p) · GW(p)

I was thinking more of a total asymmetry rather than a ratio. But yes, if you have a finite ratio, then you have the repugnant conclusion (even though it's not total utilitarianism unless the ratio is 1:1).

comment by Lukas_Gloor · 2012-06-27T17:56:22.586Z · LW(p) · GW(p)

Exactly! I've been pointing this out too. If you assume preference utilitarianism, then killing counts as wrong, at least if the beings you kill want to go on living (or even have detailed future plans). So the replacement only works if you increase the number of new beings, or make them have more satisfied preferences. The rest of the argument still works, but this is important to point out.

comment by A1987dM (army1987) · 2012-06-26T08:06:28.671Z · LW(p) · GW(p)

You know, I've felt that examining the dust speck vs torture dilemma or stuff like that, finding a way to derive an intuitively false conclusion from intuitively true premises, and thereby concluding that the conclusion must be true after all (rather than there's some kind of flaw in the proof you can't see yet) is analogous to seeing a proof that 0 equals 1 or that a hamburger is better than eternal happiness or that no feather is dark, not seeing the mistake in the proof straight away, and thereby concluding that the conclusion must be true. Does anyone else feel the same?

Replies from: TheOtherDave, MBlume, Richard_Kennaway, private_messaging, David_Gerard
comment by TheOtherDave · 2012-06-26T13:54:27.594Z · LW(p) · GW(p)

Sure.

But it's not like continuing to endorse my intuitions in the absence of any justification for them, on the assumption that all arguments that run counter to my intuitions, however solid they may seem, must be wrong because my intuitions say so, is noticeably more admirable.

When my intuitions point in one direction and my reason points in another, my preference is to endorse neither direction until I've thought through the problem more carefully. What I find often happens is that on careful thought, my whole understanding of the problem tends to alter, after which I may end up rejecting both of those directions.

Replies from: private_messaging
comment by private_messaging · 2012-06-27T08:35:25.456Z · LW(p) · GW(p)

Well, what you should do is recognize that such arguments are themselves built entirely out of intuitions, and that their validity rests on the conjunction of a significant number of often unstated intuitive assumptions. One should not fall for a cargo-cult imitation of logic.

There's no fundamental reason why value should be linear in the number of dust specks; it's nothing but an assumption which may be your personal intuition, but it is still an intuition that lacks any justification whatsoever, and insofar as it is an uncommon intuition, it even lacks the "if it were wrong it would have been debunked" sort of justification. There's always the Dunning-Kruger effect: the people least capable of moral (or any) reasoning should be expected to think themselves most capable.

Replies from: MarkusRamikin, TheOtherDave
comment by MarkusRamikin · 2012-06-27T08:56:59.504Z · LW(p) · GW(p)

There's no fundamental reason why value should be linear in the number of dust specks

Yeah, that has always been my main problem with that scenario.

There are different ways to sum multiple sources of something. Consider series vs. parallel electrical circuits; the total output depends greatly on how you combine the individual voltage sources (or resistors or whatever).

When it comes to suffering, well suffering only exists in consciousness, and each point of consciousness - each mind involved - experiences their own dust speck individually. There is no conscious mind in that scenario who is directly experiencing the totality of the dust specks and suffers accordingly. It is in no way obvious to me that the "right" way to consider the totality of that suffering is to just add it up. Perhaps it is. But unless I missed something, no one arguing for torture so far has actually shown it (as opposed to just assuming it).

Suppose we make this about (what starts as) a single person. Suppose that you, yourself, are going to be copied into all that humongous number of copies. And you are given a choice: before that happens, you will be tortured for 50 years. Or you will be unconscious for 50 years, but after copying each of your copies will get a dust speck in the eye. Either way you get copied, that's not part of the choice. After that, whatever your choice, you will be able to continue with your lives.

In that case, I don't care about doing the "right" math that will make people call me rational; I care about being the agent who is happily NOT writhing in pain with 50 more years of it ahead of him.

EDIT: come to think of it, assume the copying template is taken from you before the 50 years start, so we don't have to consider memories and lasting psychological effects of torture. My answer remains the same: even if in the future I won't remember the torture, I don't want to go through it.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-27T13:48:51.727Z · LW(p) · GW(p)

As far as I know, TvDS doesn't assume that value is linear in dust specks. As you say, there are different ways to sum multiple sources of something. In particular, there are many ways to sum the experiences of multiple individuals.

For example, the whole problem evaporates if I decide that people's suffering only matters to the extent that I personally know those people. In fact, much less ridiculous problems also evaporate... e.g., in that case I also prefer that thousands of people suffer so that I and my friends can live lives of ease, as long as the suffering hordes are sufficiently far away.

It is not obvious to me that I prefer that second way of thinking, though.

Replies from: David_Gerard, private_messaging
comment by David_Gerard · 2012-06-27T15:27:26.657Z · LW(p) · GW(p)

e.g., in that case I also prefer that thousands of people suffer so that I and my friends can live lives of ease, as long as the suffering hordes are sufficiently far away.

It is arguable (in terms of revealed preferences) that first-worlders typically do prefer that. This requires a slightly non-normative meaning of "prefer", but a very useful one.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-27T15:34:42.673Z · LW(p) · GW(p)

Oh, absolutely. I chose the example with that in mind.

I merely assert that "but that leads to thousands of people suffering!" is not a ridiculous moral problem for people (like me) who reveal such preferences to consider, and it's not obvious that a model that causes the problem to evaporate is one that I endorse.

comment by private_messaging · 2012-06-27T15:47:07.326Z · LW(p) · GW(p)

Well, it sure uses the linear intuition. 3^^^3 is bigger than the number of distinct states; it's far past the point where you are only adding exactly duplicated dust-speck experiences, so you could reasonably expect the value to flatten out.

One can go perverse and proclaim that one treats duplicates the same, but then if there's a button which, when you press it, replaces everyone's mind with the mind of the happiest person, you should press it.

I think the stupidity of utilitarianism is the belief that morality is about states, rather than about dynamic processes and state transitions. A simulation of a pinprick slowed down 1,000,000 times is not ultra-long torture. 'Murder' is a form of irreversible state transition. Morality as it exists is about state transitions, not about states.

Replies from: TheOtherDave, Mark_Lu
comment by TheOtherDave · 2012-06-27T16:05:21.530Z · LW(p) · GW(p)

It isn't clear to me what the phrase "exactly-duplicated" is doing there. Is there a reason to believe that each individual dust-speck-in-eye event is exactly like every other? And if so, what difference does that make? (Relatedly, is there a reason to believe that each individual moment of torture is different from all the others? If it turns out that it's not, does that imply something relevant?)

In any case, I certainly agree that one could reasonably expect the negative value of suffering to flatten out no matter how much of it there is. It seems unlikely to me that fifty years of torture is anywhere near the asymptote of that curve, though... for example, I would rather be tortured for fifty years than be tortured for seventy years.

But even if it somehow is at the asymptotic limit, we could recast the problem with ten years of torture instead, or five years, or five months, or some other value that is no longer at that limit, and the same questions would arise.

So, no, I don't think the TvDS problem depends on intuitions about the linear-additive nature of suffering. (Indeed, the more I think about it the less convinced I am that I have such intuitions, as opposed to approaches-a-limit intuitions. This is perhaps because thinking about it has changed my intuitions.)

Replies from: private_messaging
comment by private_messaging · 2012-06-27T16:19:58.371Z · LW(p) · GW(p)

I was referring to the linear-additive nature of so-called dust-speck suffering, in the number of people with dust specks.

3^^^3 is far, far larger than the number of distinct mind states of anything human-like. You can only be dust-speck-ing something like 10^(10^20) distinct human-like entities maximum. I recall I posted about that a while back. You shouldn't be multiplying anything by 3^^^3.
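
A rough sanity check of those magnitudes (my own arithmetic, not from the comment), tracking everything as base-10 logarithms because the actual numbers overflow immediately:

```python
import math

LOG10_3 = math.log10(3)

def log10_tower_of_threes(height):
    """log10 of a power tower 3^(3^(...^3)) of the given height."""
    if height == 1:
        return LOG10_3
    # tower(h) = 3 ** tower(h-1), so log10(tower(h)) = tower(h-1) * log10(3)
    return (10 ** log10_tower_of_threes(height - 1)) * LOG10_3   # overflows for height >= 5

print(log10_tower_of_threes(3))   # ~12.9    -> 3^^3 ~ 7.6e12
print(log10_tower_of_threes(4))   # ~3.6e12  -> 3^^4 ~ 10^(3.6e12), still below 10^(10^20)
# 3^^5 = 3^(3^^4) already has about 0.48 * 10^(3.6e12) digits, dwarfing 10^(10^20),
# and 3^^^3 = 3^^(3^^3) is a tower of ~7.6e12 threes, unimaginably larger again.
# So if there are "only" ~10^(10^20) distinct human-like mind states, almost all
# of the 3^^^3 dust-speck recipients are exact duplicates of one another.
```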

TBH, my 'common sense' explanation of why EY chooses to adopt the torture > dust specks stance (I say 'chooses' because it is entirely up for grabs here, plus his position is fairly incoherent) is that he seriously believes that his work has a non-negligible chance of influencing the lives of an enormous number of people, and consequently, if he can internalize torture > dust specks, he is free to rationalize any sort of thing he can plausibly do, even if AI extinction risk does not exist.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-27T16:40:29.979Z · LW(p) · GW(p)

[edit: this response was to an earlier version of the above comment, before it was edited. Some of it is no longer especially apposite to the comment as it exists now.]

I was referring to linear-additive nature of dust specks.

Well, I agree that 3^^^3 dust specks don't quite add linearly... long before you reach that ridiculous mass, I expect you get all manner of weird effects that I'm not physicist enough to predict. And I also agree that our intuitions are that dust specks add linearly.

But surely it's not the dust-specks that we care about here, but the suffering? That is, it seems clear to me that if we eliminated all the dust specks from the scenario and replaced them with something that caused an equally negligible amount of suffering, we would not be changing anything that mattered about the scenario.

And, as I said, it's not at all clear to me that I intuit linear addition of suffering (whether it's caused by dust-specks, torture, or something else), or that the scenario depends on assuming linear addition of suffering. It merely depends on assuming that addition of multiple negligible amounts of suffering can lead to an aggregate-suffering result that is commensurable with, and greater than, a single non-negligible amount of suffering.

It's not clear to me that this assumption holds, but the linear-addition objection seems like a red herring to me.

You can only be dust-speck-ing something like 10^(10^20) distinct human-like entities maximum.

Ah, I see.

Yeah, sure, there are only X possible ways for a human to be (whether 10^(10^20) or some other vast number doesn't really matter), there are only Y possible ways for a dust speck to be, and there are only Z possible ways for a given human to experience a given dust speck in their eye. So, sure, we only have (XYZ) distinct dust-speck-in-eye events, and if (XYZ) << 3^^^3 then there's some duplication. Indeed, there are vast amounts of duplication, given that (3^^^3/(XYZ)) is still a staggeringly huge number.

Agreed.

I'm still curious about what difference that makes.

Replies from: private_messaging
comment by private_messaging · 2012-06-27T16:55:12.386Z · LW(p) · GW(p)

Well, some difference that it should make:

Lead to severe discounting of the 'reasoning method' that arrived at 3^^^3 dust-specks>torture conclusion without ever coming across the exhaustion of states issue. In all fields where it was employed. And to severely discount anything that came from that process previously. If it failed even when it went against intuition, it's even more worthless when it goes along with intuition.

I get the feeling that attempts to 'logically' deliberate on morality from some simple principles like "utility" are similar to trying to recognize cats in pictures by reading an array of R,G,B values and doing some arithmetic. If someone hasn't got a visual cortex they can't see, even if they do an insane amount of deliberate reasoning.

Replies from: Mark_Lu, CarlShulman, TheOtherDave
comment by Mark_Lu · 2012-06-28T08:55:15.385Z · LW(p) · GW(p)

similar to trying to recognize cats in pictures by reading an array of R,G,B values and doing some arithmetic

But a computer can recognize cats by reading pixel values in pictures? Maybe not as efficiently and accurately as people, but that's because brains have a more efficient architecture/algorithms than today's generic computers.

Replies from: private_messaging
comment by private_messaging · 2012-06-28T09:08:10.480Z · LW(p) · GW(p)

Yes, it is of course possible in principle (in fact I am using cats as an example because Google just did that). The point is that a person can't do anything equivalent to what the human visual cortex does in a fraction of a second by using paper and pencil for multiple lifetimes. Morality and immorality, just like cat recognition, rely on some innate human ability to connect symbols with reality.

edit: To clarify. To tell which images are cats and which are dogs, you employ some method that is hopelessly impossible for you to write down. To tell which actions are moral or not, humans employ some method that is likewise hopelessly impossible for them to write down. All you can do is write down guidelines, and add some picture examples of cats and dogs. Various rules like utilitarianism are along the lines of "if the eyes have vertical slits, it's a cat", which mis-recognizes a lizard as a cat but does not recognize the cat that has closed its eyes. (There is also the practical matter of law-making, where you want to restrict the diversity of moral judgment to something sane, and thus you use principles like 'if it doesn't harm anyone else it's okay'.)

Replies from: Mark_Lu
comment by Mark_Lu · 2012-06-28T09:30:52.736Z · LW(p) · GW(p)

To tell which images are cats and which are dogs, you employ some method that is hopelessly impossible for you to write down.

Right, but if/when we get to (partial) brain emulations (in large quantities) we might be able to do the same thing for 'morality' that we do today to recognize cats using a computer.

Replies from: private_messaging
comment by private_messaging · 2012-06-28T10:03:12.267Z · LW(p) · GW(p)

Agreed. We may even see how it is that certain algorithms (very broadly speaking) can feel pain etc., and actually start defining something agreeable from first principles. Meanwhile, all that "3^^^3 people with dust specks is worse than 1 person tortured" stuff is to morality as scholasticism is to science. The only value it may have is in highlighting the problem with approximations and with handwavy reasoning (nobody said that the number of possible people is >3^^^3 (which is false), even though such a statement was part of the reasoning and should have been stated and then rejected, invalidating everything that followed. Or a statement that identical instances matter should have been made, which in itself leads to a multitude of really dumb decisions whereby the life of a conscious robot that has thicker wires in its computer (or uses otherwise redundant hardware) is worth more).

Replies from: CarlShulman
comment by CarlShulman · 2012-06-29T05:46:40.669Z · LW(p) · GW(p)

Or a statement that identical instances matter should have been made

Not many people hold the view that if eternal inflation is true then there is nothing wrong with hitting people with hot pokers, since the relevant brain states exist elsewhere anyway. In Bostrom's paper he could only find a single backer of the view. In talking to many people, I have seen it expressed more than once, but still only a very small minority of cases. Perhaps not including it in that post looms large for you because you have a strong intuition that it would be OK to torture and kill if the universe were very large, or think it very unlikely that the universe is large, but it's a niche objection to address.

After all, one could include such a discussion as a rider in every post talking about trying to achieve anything for oneself or others: "well, reading this calculus textbook seems like it could teach you interesting math, but physicists say we might be living in a big universe, in which case there's no point since brains in all states already exist, if you don't care about identical copies."

Replies from: private_messaging
comment by private_messaging · 2012-06-29T08:58:17.258Z · LW(p) · GW(p)

If there is any nonzero probability that the universe is NOT very large (or the copy counting is a bit subtle about the copies which are effectively encoding state onto coordinates), all you did is scale all the utilities down, which does not affect any decision.

That's an incredibly terrible thing to do to our friends who believe themselves to be utilitarians, as those people are going to selectively scale down just some of the utilities and then act, in self-interest or otherwise, on the resulting big differences, doing something stupid.

edit: also, the issue of multiply counting redundant hardware, and of the thick-wired utility monsters, in any utilitarianism that does count extra copies doesn't go away if the world is big. If you have a solid argument that utilitarianism without counting the extra copies the same does not work, that means utilitarianism does not work, which I believe is the case. Morals are an engineered / naturally selected solution to the problem of peer-to-peer intellectual and other cooperation, which requires nodes not to model each other in undue detail, and that rules out direct, straightforward utilitarianism. Utilitarianism is irreparably broken. It's fake reductionism, where you substitute one irreducible concept for another.

Replies from: cousin_it
comment by cousin_it · 2012-06-29T09:20:50.881Z · LW(p) · GW(p)

(or the copy counting is a bit subtle about the copies which are effectively encoding state onto coordinates)

That's an interesting idea, thanks. Maybe caring about anthropic probabilities or measures of conscious experiences directly would make more sense than caring about the number of copies as a proxy.

If you take that idea seriously and assume that all anthropic probabilities of conscious experiences must sum to 1, then torture vs dustspecks seems to lose some of its sting, because the total disutility of dustspecking remains bounded and not very high, no matter how many people you dustspeck. (That's a little similar to the "proximity argument", which says faraway people matter less.) And being able to point out the specific person to be tortured means that person doesn't have too low weight, so torturing that single person would be worse than dustspecking literally everyone else in the multiverse. I don't remember if anyone made this argument before... Of course there could be any number of holes in it.
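
A back-of-the-envelope version of that argument (the numbers are mine, and "measure" here is whatever anthropic weight the argument needs, which is itself doing a lot of work):

```python
# Assumed toy numbers, purely to show the shape of the argument.
N = 10**30                    # stand-in for "everyone else in the multiverse"
speck_disutility = 1e-6       # per-person harm of one dust speck
torture_disutility = 1e9      # harm of 50 years of torture
victim_measure = 1e-3         # anthropic weight of the one identifiable victim

# If anthropic measures must sum to 1, dust-specking everyone costs at most
# (sum of measures) * speck_disutility, no matter how large N gets.
specks_upper_bound = 1.0 * speck_disutility         # 1e-06
torture_cost = victim_measure * torture_disutility  # 1e+06

print(torture_cost > specks_upper_bound)            # True: the torture dominates
# Drop the measures-sum-to-1 assumption and the speck side becomes
# N * speck_disutility = 1e24, flipping the comparison back.
```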

Also note that the thicker wires argument is not obviously wrong, because for all we know, thicker wires could affect subjective probabilities. It sounds absurd, sure, but so does the fact that lightspeed is independent of observer speed.

ETA: the first version of this comment mixed up Pascal's mugging and torture vs dustspecks. Sorry. Though maybe a similar argument could be made for Pascal's mugging as well.

Replies from: private_messaging
comment by private_messaging · 2012-06-29T09:47:11.726Z · LW(p) · GW(p)

Thinking about it some more: maybe the key is that it is not enough for something to exist somewhere, just as it is not enough for the output tape in Solomonoff induction to contain the desired output string somewhere within it; the tape should begin with it. (Note that this is a critically important requirement.) If you are using Solomonoff induction (suppose you've got an oracle, suppose the universe is computable, and so on), then your model contains not only the laws of the universe but also a locator, and my intuition is that the model with the simplest locator is some very huge length shorter than the next simplest model, so all the models except the one with the simplest locator have to be ignored entirely.

If we require that the locator is somehow present in the whole, then the ultra-distant copies are very different while the nearby copies are virtually the same, and the Kolmogorov complexity of the concatenated strings can be used as the count, without counting nearby copies twice (the thick-wired monster only weighs a teeny tiny bit more).

TBH, I feel, though, that utilitarianism goes in the wrong direction entirely. Morals can be seen, essentially, as an evolved / engineered solution to peer-to-peer intellectual and other cooperation. It relies on trust, not on mutual detailed modeling (which wastes computing power), and the actions are not quite determined by the expected state (which you can't model), even though it is engineered with some state in mind.

edit: also, I think whatever stuff raises the problem with distant copies or MWI is subjectively disproved by its not saving you from brain damage of any kind (you can get drunk, pass out, wake up with slightly fewer neurons). So we basically know something's screwed up with naive counting for probabilities, or else the world is small.

comment by CarlShulman · 2012-06-29T05:39:07.045Z · LW(p) · GW(p)

Lead to severe discounting of the 'reasoning method' that arrived at 3^^^3 dust-specks>torture conclusion without ever coming across the exhaustion of states issue.

This is mistaken. E.g. see this post which discusses living in a Big World, as in eternal inflation theories where the universe extends infinitely and has random variation so that somewhere in the universe every possible galaxy or supercluster will be realized, and all the human brain states will be explored.

Or see Bostrom's paper on this issue, which is very widely read around here. Many people think that our actions can still matter in such a world, e.g. that it's better to try to give people chocolate than to torture them here on Earth, even if in some ludicrously distant region there are brains that have experienced all the variations of chocolate and torture.

comment by TheOtherDave · 2012-06-27T17:06:36.716Z · LW(p) · GW(p)

Lead to severe discounting of the 'reasoning method' that arrived at 3^^^3 dust-specks>torture conclusion without ever coming across the exhaustion of states issue.

Even better, to my mind, is to think about the scenario from the ground up and form my own conclusions, rather than start with some intuitive judgment about someone else's writeup about it and then update that judgment based on things they didn't mention in that writeup.

If someone hasn't got a visual cortex they can't see, even if they do an insane amount of deliberate reasoning

It's not clear to me that I correctly understand what you mean here, but given my current understanding, I disagree. All my visual cortex is doing is performing computations on the output of my eyes; if that's seeing, then anything else that performs the same computations can see just as well.

Replies from: private_messaging
comment by private_messaging · 2012-06-27T19:44:05.135Z · LW(p) · GW(p)

Even better, to my mind, is to think about the scenario from the ground up and form my own conclusions, rather than start with some intuitive judgment about someone else's writeup about it and then update that judgment based on things they didn't mention in that writeup.

The point is that the approach is flawed; one should always learn from mistakes. The issue here is building an argument which is superficially logical - it conforms to the structure of something a logical, rational person might say, something you might have a logical character in a movie say - but is fundamentally a string of very shaky intuitions, which are only correct if nothing outside the argument interferes, rather than solid steps.

It's not clear to me that I correctly understand what you mean here, but given my current understanding, I disagree. All my visual cortex is doing is performing computations on the output of my eyes; if that's seeing, then anything else that performs the same computations can see just as well.

In theory. In practice it takes a ridiculous number of operations, and you can't Chinese-room vision without a slowdown by a factor of billions. Decades for a single cat recognition versus a fraction of a second, and that's if you have an algorithm for it.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-27T20:02:08.866Z · LW(p) · GW(p)

I disagree with pretty much all of this, as well as with most of what seem to be the ideas underlying it, and don't see any straightforward way to achieve convergence rather than infinitely ramified divergence, so I suppose it's best for me to drop the thread here.

comment by Mark_Lu · 2012-06-27T16:29:11.759Z · LW(p) · GW(p)

I think the stupidity of utilitarianism is the belief that the morality is about the state, rather than about dynamic process and state transition.

"State" doesn't have to mean "frozen state" or something similar, it could mean "state of the world/universe". E.g. "a state of the universe" in which many people are being tortured includes the torture process in it's description. I think this is how it's normally used.

Replies from: private_messaging
comment by private_messaging · 2012-06-27T16:38:20.041Z · LW(p) · GW(p)

Well, if you are to coherently take it that the transitions have value, rather than the states, then you arrive at a morality that regulates the transitions the agent should try to make happen, ending up with a morality that is more about means than about ends.

I think it's simply that pain feels like a state rather than a dynamic process, and so utilitarianism treats it as a state, while doing something feels like a dynamic process, so utilitarianism doesn't treat it as a state and is only concerned with the difference in utilities.

comment by TheOtherDave · 2012-06-27T13:25:59.465Z · LW(p) · GW(p)

Agreed that all of these sorts of arguments ultimately rest on different intuitions about morality, which sometimes conflict, or seem to conflict.

Agreed that value needn't add linearly, and indeed my intuition is that it probably doesn't.

It seems clear to me that if I negatively value something happening, I also negatively value it happening more. That is, for any X I don't want to have happen, it seems I would rather have X happen once than have X happen twice. I can't imagine an X where I don't want X to happen and would prefer to have X happen twice rather than once. (Barring silly examples like "the power switch for the torture device gets flipped".)

Replies from: APMason
comment by APMason · 2012-06-27T13:41:58.506Z · LW(p) · GW(p)

Can anyone explain what goes wrong if you say something like, "The marginal utility of my terminal values increases asymptotically, and u(Torture) approaches a much higher asymptote than u(Dust speck)" (or indeed whether it goes wrong at all)?
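
(A minimal numerical sketch of what such bounded, asymptotic disutilities could look like; the functional form and every number here are illustrative assumptions, and 3^27 merely stands in for the far larger 3^^^3:)

```python
import math

# Sketch: each class of harm has a bounded total disutility that saturates
# at its own asymptote as the count of instances grows.
def total_disutility(count, per_unit, asymptote):
    # roughly linear for small counts, approaches `asymptote` as count -> infinity
    return asymptote * (1 - math.exp(-per_unit * count / asymptote))

SPECK_ASYMPTOTE = 1.0        # no number of dust specks can exceed this
TORTURE_ASYMPTOTE = 1e9      # torture saturates far higher

specks = total_disutility(3 ** 27, per_unit=1e-9, asymptote=SPECK_ASYMPTOTE)
one_torture = total_disutility(1, per_unit=1e6, asymptote=TORTURE_ASYMPTOTE)

print(specks, one_torture)   # specks cap out near 1.0; a single torture dwarfs them
```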

Replies from: Lukas_Gloor, wedrifid, TheOtherDave
comment by Lukas_Gloor · 2012-06-27T14:18:18.018Z · LW(p) · GW(p)

That's been done in this paper, section VI, "The Asymptotic Gambit".

Replies from: APMason
comment by APMason · 2012-06-27T14:29:13.027Z · LW(p) · GW(p)

Thank you. I had expected the bottom to drop out of it somehow.

EDIT: Although come to think of it I'm not sure the objections presented in that paper are so deadly after all if you take TDT-like considerations into account (i.e. there would not be a difference between "kill 1 person, prevent 1000 mutilations" + "kill 1 person, prevent 1000 mutilations" and "kill 2 people, prevent 2000 mutilations"). Will have to think on it some more.

comment by wedrifid · 2012-06-27T13:53:52.111Z · LW(p) · GW(p)

Can anyone explain what goes wrong if you say something like, "The marginal utility of my terminal values increases asymptotically, and u(Torture) approaches a much higher asymptote than u(Dust speck)" (or indeed whether it goes wrong at all)?

Nothing, iff that happens to be what your actual preferences are. If your preferences do not happen to be as you describe, but instead you are confused by an inconsistency in your intuitions, then you will make incorrect decisions.

The challenge is not to construct a utility function such that you can justify it to others in the face of opposition. The challenge is to work out what your actual preferences are and implement them.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-27T14:47:11.242Z · LW(p) · GW(p)

The challenge is to work out what your actual preferences are and implement them.

Ayup. Also, it may be worth saying explicitly that a lot of the difficulty comes in working out a model of my actual preferences that is internally consistent and can be extended to apply to novel situations. If I give up those constraints, it's easier to come up with propositions that seem to model my preferences, because they approximate particular aspects of my preferences well enough that in certain situations I can't tell the difference. And if I don't ever try to make decisions outside of that narrow band of situations, that can be enough to satisfy me.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2012-06-27T17:44:45.193Z · LW(p) · GW(p)

The challenge is to work out what your actual preferences are and implement them.

[Edited to separate from quote] But doesn't that beg the question? Don't you have to ask the meta question "what kinds of preferences are reasonable to have?" Why should we shape ethics the way evolution happened to set up our values? That's why I favor hedonistic utilitarianism, which is about actual states of the world that can in themselves be bad (--> suffering).

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-27T18:02:31.963Z · LW(p) · GW(p)

Note that markup requires a blank line between your quote and the rest of the topic.

It does beg a question: specifically, the question of whether I ought to implement my preferences (or some approximation of them) in the first place. If, for example, my preferences are instead irrelevant to what I ought to do, then time spent working out my preferences is time that could better have been spent doing something else.

All of that said, it sounds like you're suggesting that suffering is somehow unrelated to the way evolution set up our values. If that is what you're suggesting, then I'm completely at a loss to understand either your model of what suffering is, or how evolution works.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2012-06-27T18:10:58.755Z · LW(p) · GW(p)

The fact that suffering feels awful is about the very thing itself, and nothing else. There's no valuing required, no being asks itself "should I dislike this experience?" when it is suffering. It wouldn't be suffering otherwise.

My position implies that in a world without suffering (or happiness, if I were not a negative utilitarian), nothing would matter.

comment by TheOtherDave · 2012-06-27T13:53:14.278Z · LW(p) · GW(p)

Depends on what I'm trying to do.

If I make that assumption, then it follows that given enough Torture to approach its limit, I choose any number of Dust Specks rather than that amount of Torture.

If my goal is to come up with an algorithm that leads to that choice, then I've succeeded.

(I think talking about Torture and Dust Specks as terminal values is silly, but it isn't necessary for what I think you're trying to get at.)

comment by MBlume · 2012-06-27T18:29:27.641Z · LW(p) · GW(p)

Does anyone else feel the same?

Nope! Some proofs are better-supported than others.

comment by Richard_Kennaway · 2012-06-26T08:19:35.922Z · LW(p) · GW(p)

Yes. The known unreliability of my own thought processes tempers my confidence in any prima facie absurd conclusion I come to. All the more so when it's a conclusion I didn't come to, but merely followed along with someone else's argument to.

comment by private_messaging · 2012-06-26T08:19:05.476Z · LW(p) · GW(p)

I feel this way. The linear theories are usually nothing but first order approximations.

Also, the very idea of summing individual agents' utilities... that is, frankly, nothing but pseudomathematics. Each agent's utility function can be modified without changing the agent's behaviour in any way. The utility function is a phantom. It isn't defined in a way that lets you add two of them together. You can map the same agent's preferences (whenever they are well-ordered) to an infinite variety of real-valued 'utility functions'.
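
(A minimal sketch of this point with made-up numbers: a positive affine rescaling of one agent's utility function represents exactly the same preferences, yet flips which outcome the "sum across agents" favours:)

```python
# Sketch with made-up numbers: an agent's choices are invariant under positive
# affine rescaling of its utility function (u -> a*u + b, a > 0), but a
# "sum of utilities" across agents is not.
alice = {"A": 0.3, "B": 0.9}                 # Alice prefers B
bob   = {"A": 0.8, "B": 0.4}                 # Bob prefers A

def rescale(u, a, b):
    """Positive affine transform: represents exactly the same preferences."""
    return {k: a * v + b for k, v in u.items()}

alice2 = rescale(alice, 0.1, 0.0)            # Alice still prefers B
assert max(alice, key=alice.get) == max(alice2, key=alice2.get)

total1 = {k: alice[k] + bob[k] for k in alice}    # {'A': 1.1, 'B': 1.3}: B "wins"
total2 = {k: alice2[k] + bob[k] for k in alice}   # {'A': 0.83, 'B': 0.49}: A "wins"
print(total1, total2)
```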

Replies from: David_Gerard, None
comment by David_Gerard · 2012-06-26T12:54:36.097Z · LW(p) · GW(p)

Yes. The trouble with "shut up and multiply" - beyond assuming that humans have a utility function at all - is assuming that utility works like conventional arithmetic and that you can in fact multiply.

There's also measuring and shut-up-and-multiplying the wrong thing: e.g., seeing people willing to pay about the same in total to save 2000 birds or 20,000 birds and claiming this constitutes "scope insensitivity." The error is assuming this means that people are scope-insensitive, rather than to realise that people aren't buying saved birds at all, but are paying what they're willing to pay for warm fuzzies in general - a constant amount.

The attraction of utilitarianism is that calculating actions would be so much simpler if utility functions existed and their output could be added with the same sort of rules as conventional arithmetic. This does not, however, constitute non-negligible evidence that any of the required assumptions hold.

Replies from: Richard_Kennaway, None, David_Gerard, private_messaging
comment by Richard_Kennaway · 2012-06-26T22:42:03.308Z · LW(p) · GW(p)

This does not, however, constitute non-negligible evidence that any of the required assumptions hold.

It even tends to count against it, by the A+B rule. If items are selected by a high enough combined score on two criteria A and B, then among the selected items, there will tend to be a negative correlation between A and B.
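
(A quick simulation sketch of that selection effect; the independent Gaussian scores and the threshold are arbitrary choices for illustration:)

```python
# Sketch: Berkson-style selection effect. A and B are independent overall,
# but among items selected for a high combined score A + B they correlate negatively.
import random

random.seed(0)
items = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]

def corr(pairs):
    n = len(pairs)
    ma = sum(a for a, _ in pairs) / n
    mb = sum(b for _, b in pairs) / n
    cov = sum((a - ma) * (b - mb) for a, b in pairs) / n
    va = sum((a - ma) ** 2 for a, _ in pairs) / n
    vb = sum((b - mb) ** 2 for _, b in pairs) / n
    return cov / (va * vb) ** 0.5

selected = [(a, b) for a, b in items if a + b > 2.0]
print(corr(items))     # ~0: independent in the full population
print(corr(selected))  # clearly negative among the selected items
```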

comment by [deleted] · 2012-06-26T17:21:12.427Z · LW(p) · GW(p)

There's also measuring and shut-up-and-multiplying the wrong thing: e.g., seeing people willing to pay about the same in total to save 2000 birds or 20,000 birds and claiming this constitutes "scope insensitivity." The error is assuming this means that people are scope-insensitive, rather than to realise that people aren't buying saved birds at all, but are paying what they're willing to pay for warm fuzzies in general - a constant amount.

I don't know who's making that error. Seems like scope insensitivity and purchasing of warm fuzzies are usually discussed together around here.

Anyway, if there's an error here then it isn't about utilitarianism vs something else, but about declared vs revealed preference. The people believe that they care about the birds. They don't act as if they cared about the birds. For those who accept deliberative reasoning as an expression of human values it's a failure of decision-making intuitions and it's called scope insensitivity. For those who believe that true preference is revealed through behavior it's a failure of reflection. None of those positions seems inconsistent with utilitarianism. In fact it might be easier to be a total utilitarian if you go all the way and conclude that humans really care only about power and sex. Just give everybody nymphomania and megalomania, prohibit birth control and watch that utility counter go. ;)

comment by David_Gerard · 2012-06-26T16:57:39.569Z · LW(p) · GW(p)

An explanatory reply from the downvoter would be useful. I'd like to think I could learn.

comment by private_messaging · 2012-06-26T14:46:43.020Z · LW(p) · GW(p)

I don't think it's even linearly combinable. Suppose there were 4 copies of me in total: one pair doing some identical thing, the other pair doing 2 different things. The second pair is worth more. When I see someone go linear on morals, that strikes me as evidence of a poverty of moral values and/or a poverty of the mathematical language they have available.

Then there's the consequentialism. The consequences are hard to track - you've got to model the worlds resulting from an uncertain initial state. Really, really computationally expensive. Everything is going to use heuristics, even Jupiter brains.

There's also measuring and shut-up-and-multiplying the wrong thing: e.g., seeing people willing to pay about the same in total to save 2000 birds or 20,000 birds and claiming this constitutes "scope insensitivity." The error is assuming this means that people are scope-insensitive, rather than to realise that people aren't buying saved birds at all, but are paying what they're willing to pay for warm fuzzies in general - a constant amount.

Well, "willing to pay for warm fuzzies" is a bad way to put it IMO. There's limited amount of money available in the first place, if you care about birds rather than warm fuzzies that doesn't make you a billionaire.

Replies from: army1987
comment by A1987dM (army1987) · 2012-06-28T12:46:55.496Z · LW(p) · GW(p)

Well, "willing to pay for warm fuzzies" is a bad way to put it IMO. There's limited amount of money available in the first place, if you care about birds rather than warm fuzzies that doesn't make you a billionaire.

The figures people would pay to save 2000, 20,000, or 200,000 birds were $80, $78 and $88 respectively, which oughtn't be so much that the utility of money for most WEIRD people would be significantly non-linear. (A much stronger effect IMO could be people taking --possibly subconsciously-- the “2000” or the “20,000” as evidence about the total population of that bird species.)

comment by [deleted] · 2012-06-26T18:22:04.150Z · LW(p) · GW(p)

Utilitarians don't have to sum different utility functions. A utilitarian has a utility function that happens to be defined as a sum of intermediate values assigned to each individual. Those intermediate values are also (confusingly) referred to as utility, but they don't come from evaluating any of the infinite variety of 'true' utility functions of every individual. They come from evaluating the total utilitarian's model of individual preference satisfaction (or happiness or whatever).

Or at least it seems to me that it should be that way. If I see a simple technical problem that doesn't really affect the spirit of the argument then the best thing to do is to fix the problem and move on. If total utilitarianism really is commonly defined as summing every individual's utility function then that is silly but it's a problem of confused terminology and not really a strong argument against utilitarianism.

Replies from: David_Gerard, private_messaging
comment by David_Gerard · 2012-06-26T21:39:50.268Z · LW(p) · GW(p)

But the spirit of the argument is ungrounded in anything. What evidence is there that you can do this stuff at all using actual numbers without repeatedly bumping into "don't do non-normative things even if you got that answer from a shut-up-and-multiply"?

comment by private_messaging · 2012-06-26T20:49:41.924Z · LW(p) · GW(p)

Well, then you can have a model where the model of the individual is sad when the real individual is happy, and vice versa, and there would be no problem with that.

You've got to ground the symbols somewhere. The model has to be defined to approximate reality for it to make sense, and for the model to approximate reality it has to somehow process the individual's internal state.

comment by David_Gerard · 2012-06-26T11:22:05.712Z · LW(p) · GW(p)

Yes. The error is that humans aren't good at utilitarianism.

private_messaging has given an example elsewhere: the trouble with utilitarians is that they think they are utilitarians. They then use numbers to convince themselves to do something they would otherwise consider evil.

The Soviet Union was an attempt to build a Friendly government based on utilitarianism. They quickly reached "shoot someone versus dust specks" and went for shooting people.

They weren't that good at lesser utilitarian decisions either, tending to ignore how humans actually behaved in favour of taking their theories and shutting-up-and-multiplying. Then when that didn't work, they did it harder.

I'm sure someone objecting to the Soviet Union example as non-negligible evidence can come up with examples that worked out much better, of course.

Replies from: CarlShulman, Lukas_Gloor, private_messaging
comment by CarlShulman · 2012-06-29T05:52:33.827Z · LW(p) · GW(p)

See Eliezer's Ethical Injunctions post.

Also Bryan Caplan:

The key difference between a normal utilitarian and a Leninist: When a normal utilitarian concludes that mass murder would maximize social utility, he checks his work! He goes over his calculations with a fine-tooth comb, hoping to discover a way to implement beneficial policy changes without horrific atrocities. The Leninist, in contrast, reasons backwards from the atrocities that emotionally inspire him to the utilitarian argument that morally justifies his atrocities.

If this seems woefully uncharitable, compare the amount of time a proto-Leninist like Raskolnikov spends lovingly reviewing the mere conceivability of morally justified bloodbaths to the amount of time he spends (a) empirically evaluating the effects of policies or (b) searching for less brutal ways to implement whatever policies he wants. These ratios are typical for the entire Russian radical tradition; it's what they imagined to be "profound." When men like this gained power in Russia, they did precisely what you'd expect: treat mass murder like a panacea. This is the banality of Leninism.

Replies from: David_Gerard, Dolores1984
comment by David_Gerard · 2012-06-29T07:14:47.088Z · LW(p) · GW(p)

As I have noted, when you've repeatedly emphasised "shut up and multiply", tacking "btw don't do anything weird" on the end strikes me as susceptible to your readers not heeding it, particularly when they really need to. If arithmetical utilitarianism works so well, it would work in weird territory.

Caplan does have a cultural point on the Soviet Union example. OTOH, it does seem a bit "no true utilitarian".

Replies from: CarlShulman, wedrifid
comment by CarlShulman · 2012-06-29T09:10:23.043Z · LW(p) · GW(p)

If arithmetical utilitarianism works so well, it would work in weird territory.

Note the bank robbery thread below. Someone claims that "the utilitarian math" shows that robbing banks and donating to charity would have the best consequences. But they don't do any math or look up basic statistics to do a Fermi calculation. A few minutes of effort shows that bank robbery actually pays much worse than working as a bank teller over the course of a career (including jail time, etc).

In Giving What We Can there are several people who donate half their income (or all income above a Western middle class standard of living) to highly efficient charities helping people in the developing world. They expect to donate millions of dollars over their careers, and to have large effects on others through their examples and reputations, both as individuals and via their impact on organizations like Giving What We Can. They do try to actually work things out, and basic calculations easily show that running around stealing organs or robbing banks would have terrible consequences, thanks to strong empirical regularities:

  1. Crime mostly doesn't pay. Bank robbers, drug dealers, and the like make less than legitimate careers. They also spend a big chunk of time imprisoned, and ruin their employability for the future. Very talented people who might do better than the average criminal can instead go to Wall Street or Silicon Valley and make far more.

  2. Enormous amounts of good can be done through a normal legitimate career. Committing violent crime or other hated acts closes off such opportunities very rapidly.

  3. Really dedicated do-gooders hope to have most of their influence through example, encouraging others to do good. Becoming a hated criminal, and associating their ethical views with such, should be expected to have huge negative effects by staunching the flow of do-gooders to exploit the vast legitimate opportunities to help people.

  4. If some criminal scheme looks easy and low-risk, consider that law enforcement uses many techniques which are not made public, and are very hard for a lone individual to learn. There are honey-pots, confederates, and so forth. In the market for nuclear materials, most of the buyers and sellers are law enforcement agents trying to capture any real criminal participants. In North America terrorist cells are now regularly infiltrated long before they act, with government informants insinuated into the cell, phone and internet activities monitored, etc.

  5. It is hard to keep a crime secret over time. People feel terrible guilt, and often are caught after they confess to others. In the medium term there is some chance of more effective neuroscience-based lie detectors, which goes still higher long-term.

  6. The broader society, over time, could punish utilitarian villainy by reducing its support for the things utilitarians seek as they become associated with villains, or even by producing utilitarian evils. If animal rights terrorists tried to kill off humanity, it might lead to angry people eating more meat or creating anti-utilitronium (by the terrorists' standards - focused on animals, say - not so much the broader society's) in anger. The 9/11 attacks were not good for Osama bin Laden's ambitions of ruling Saudi Arabia.

There are other considerations, but these are enough to dispense with the vast bestiary of supposedly utility-boosting sorts of wrongdoing. Arithmetical utilitarianism does say you should not try to become a crook. But unstable or vicious people (see the Caplan Leninist link) sometimes do like to take the idea of "the end justifies the means" as an excuse to go commit crimes without even trying to work out how the means are related to the end, and to alternatives.

Disclaimer: I do not value total welfare to the exclusion of other ethical and personal concerns. My moral feelings oppose deontological nastiness aside from aggregate welfare. But I am tired of "estimating consequences" and "utilitarian math" being straw-manned with examples where these aren't actually used, and where using them would have prevented the evil conclusion attributed to them.

Replies from: cousin_it, Decius
comment by cousin_it · 2012-07-18T10:39:11.213Z · LW(p) · GW(p)

I'm confused. Your comment paints a picture of a super-efficient police force that infiltrates criminal groups long before they act. But the Internet seems to say that many gangs in the US operate openly for years, control whole neighborhoods, and have their own Wikipedia pages...

Replies from: ciphergoth, CarlShulman
comment by Paul Crowley (ciphergoth) · 2012-07-18T12:55:30.094Z · LW(p) · GW(p)

The gangs do well, and the rare criminals who become successful gang leaders may sometimes do well, but does the average gangster do well?

comment by CarlShulman · 2012-07-18T16:54:52.884Z · LW(p) · GW(p)
  • Gang membership still doesn't pay relative to regular jobs
  • The police largely know who is in the gangs, and can crack down if this becomes a higher priority
  • Terrorism is such a priority, to a degree way out of line with the average historical damage, because of 9/11; many have critiqued the diversion of law enforcement resources to terrorism
  • Such levels of gang control are concentrated in poor areas with less police funding, and areas where the police are estranged from the populace, limiting police activity.
  • Gang violence is heavily directed at other criminal gangs, reducing the enthusiasm of law enforcement, relative to more photogenic victims
comment by Decius · 2012-07-18T18:36:44.498Z · LW(p) · GW(p)

The other side is that robbing banks at gunpoint isn't the most effective way to redistribute wealth from those who have it to those to whom it should go.

I suspect that the most efficient way to do that is government seizure - declare that the privately held assets of the bank now belong to the charities. That doesn't work, because the money isn't value, it's a signifier of value, and rewriting the map does not change the territory - if money is forcibly redistributed too much, it loses too much value, and the only way to enforce the tax collection is by using the threat of prison and execution - but the jailors and executioners can only be paid by the taxes. Effectively, robbing banks to give the money to charity harms everyone significantly, and fails to be better than doing nothing.

comment by wedrifid · 2012-06-29T07:40:10.774Z · LW(p) · GW(p)

It may have been better if CarlShulman used a different word - perhaps 'Evil' - to represent the 'ethical injunctions' idea. That seems to better represent the whole "deliberately subvert consequentialist reasoning in certain areas due to acknowledgement of corrupted and bounded hardware". 'Weird' seems to be exactly the sort of thing Eliezer might advocate. For example "make yourself into a corpsicle" and "donate to SingInst".

Replies from: David_Gerard
comment by David_Gerard · 2012-06-29T08:02:27.591Z · LW(p) · GW(p)

But, of course, "weird" versus "evil" is not even broadly agreed upon.

And "weird" includes many things Eliezer advocates, but I would be very surprised if it did not include things that Eliezer most certainly would not advocate.

Replies from: wedrifid
comment by wedrifid · 2012-06-29T14:10:30.979Z · LW(p) · GW(p)

And "weird" includes many things Eliezer advocates, but I would be very surprised if it did not include things that Eliezer most certainly would not advocate.

Of course it does. For example: dressing up as a penguin and beating people to death with a live fish. But that's largely irrelevant. Rejecting 'weird' as the class of things that must never be done is not the same thing as saying that all things in that class must be done. Instead, weirdness is just ignored.

comment by Dolores1984 · 2012-06-29T07:41:50.912Z · LW(p) · GW(p)

I've always felt that post was very suspect. Because, if you do the utilitarian math, robbing banks and giving the proceeds to charity is still a good deal, even if there's a very low chance of it working. Your own welfare simply doesn't play a factor, given the size of the variables you're playing with. It seems to me that there is a deeper moral reason not to murder organ donors or steal food for the hungry than 'it might end poorly for you.'

Replies from: CarlShulman, MarkusRamikin
comment by CarlShulman · 2012-06-29T07:54:57.126Z · LW(p) · GW(p)

Because, if you do the utilitarian math, robbing banks and giving the proceeds to charity is still a good deal

Bank robbery is actually unprofitable. Even setting aside reputation (personal and for one's ethos), "what if others reasoned similarly," the negative consequences of the robbery, and so forth, you'd generate more expected income working an honest job. This isn't a coincidence. Bank robbery hurts banks, insurers, and ultimately bank customers, and so they are willing to pay to make it unprofitable.

According to a study by British researchers Barry Reilly, Neil Rickman and Robert Witt written up in this month's issue of the journal Significance, the average take from a U.S. bank robbery is $4,330. To put that in perspective, PayScale.com says bank tellers can earn as much as $28,205 annually. So, a bank robber would have to knock over more than six banks, facing increasing risk with each robbery, in a year to match the salary of the tellers he's holding up.
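
(A rough version of that Fermi calculation, using the figures quoted above; the per-robbery arrest probability is a purely illustrative assumption:)

```python
# Rough Fermi sketch using the figures quoted above; the arrest probability
# is an illustrative assumption, not data.
avg_take_per_robbery = 4330        # USD, from the Reilly/Rickman/Witt study
teller_salary = 28205              # USD/year, from PayScale.com

robberies_to_match = teller_salary / avg_take_per_robbery
print(robberies_to_match)          # ~6.5 robberies per year just to match a teller

p_caught_per_robbery = 0.20        # assumption for illustration
p_free_after_year = (1 - p_caught_per_robbery) ** round(robberies_to_match)
print(p_free_after_year)           # ~0.21 chance of still being free, with jail years ahead otherwise
```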

Replies from: Dolores1984
comment by Dolores1984 · 2012-06-29T08:33:17.223Z · LW(p) · GW(p)

That was a somewhat lazy example, I admit, but consider the most inconvenient possible world. Let's say you could expect to take a great deal more from a bank robbery. Would it then be valid utilitarian ethics to rob (indirectly) from the rich (us) to give to the poor?

Replies from: CarlShulman
comment by CarlShulman · 2012-06-29T09:25:37.319Z · LW(p) · GW(p)

My whole point in the comments on this post has been that it's a pernicious practice to use such false examples. They leave erroneous impressions and associations. A world where bank-robbery is super-profitable, so profitable as to outweigh the effects of reputation and the like, is not very coherent.

A better example would be something like: "would utilitarians support raising taxes to fund malaria eradication," or "would a utilitarian who somehow inherited swoopo.com (a dollar auction site) shut down the site or use the revenue to save kids from malaria" or "if a utilitarian inherited the throne in a monarchy like Oman (without the consent of the people) would he spend tax revenues on international good causes or return them to the taxpayers?"

comment by MarkusRamikin · 2012-06-29T07:45:23.523Z · LW(p) · GW(p)

if you do the utilitarian math, robbing banks and giving the proceeds to charity is still a good deal

Only if you're bad at math. Banks aren't just piggy banks to smash; they perform a useful function in the economy, and disrupting it has consequences.

Of course I prefer to defeat bad utilitarian math with better utilitarian math rather than with ethical injunctions. But hey, that's the woe of bounded reason, even without going into the whole corrupted hardware problem: your model is only so good, and heuristics that serve as warning signals have their place.

comment by Lukas_Gloor · 2012-06-26T13:33:34.973Z · LW(p) · GW(p)

Yes. The error is that humans aren't good at utilitarianism.

Why would that be an error? It's not a requirement for an ethical theory that Homo sapiens must be good at it. If we notice that humans are bad at it, maybe we should make AI or posthumans that are better at it, if we truly view this as the best ethical theory. Besides, if the outcome of people following utilitarianism is really that bad, then utilitarianism would demand (it gets meta now) that people should follow some other theory that overall has better outcomes (see also Parfit's Reasons and Persons). Another solution is Hare's proposed "Two-Level Utilitarianism". From Wikipedia:

Hare proposed that on a day to day basis, one should think and act like a rule utilitarian and follow a set of intuitive prima facie rules, in order to avoid human error and bias influencing one's decision-making, and thus avoiding the problems that affected act utilitarianism.

Replies from: David_Gerard
comment by David_Gerard · 2012-06-26T13:38:05.944Z · LW(p) · GW(p)

The error is that it's humans who are attempting to implement the utilitarianism. I'm not talking about hypothetical non-human intelligences, and I don't think they were implied in the context.

Replies from: fubarobfusco, private_messaging
comment by fubarobfusco · 2012-06-26T19:25:38.087Z · LW(p) · GW(p)

See also Ends Don't Justify Means (Among Humans): having non-consequentialist rules (e.g. "Thou shalt not murder, even if it seems like a good idea") can be consequentially desirable since we're not capable of being ideal consequentialists.

Replies from: David_Gerard
comment by David_Gerard · 2012-06-26T21:37:54.430Z · LW(p) · GW(p)

Oh, indeed. But when you've repeatedly emphasised "shut up and multiply", tacking "btw don't do anything weird" on the end strikes me as susceptible to your readers not heeding it, particularly when they really need to.

comment by private_messaging · 2012-06-27T08:21:59.598Z · LW(p) · GW(p)

I don't think hypothetical superhumans would be dramatically different in their ability to employ predictive models under uncertainty. If you increase power so that it is to mankind as mankind is to one amoeba, you only double anything that is fundamentally logarithmic. While in many important cases there are faster approximations, it's magical thinking to expect them everywhere; and there are problems where the errors inherently grow exponentially with time even if the model is magically perfect (the butterfly effect). Plus, of course, models of other intelligences rapidly get unethical as you try to improve fidelity (if it means emulating people and putting them through the torture and dust-speck experiences to compare values).

comment by private_messaging · 2012-06-26T11:48:44.715Z · LW(p) · GW(p)

Well, those examples would have a lot of "okay we can't calculate utility here, so we'll use a principle" and far less faith in direct utilitarianism.

With the torture and dust specks, see, it arrives at a counter-intuitive conclusion, but it is not proof-grade reasoning by any means. Who knows, maybe the correct algorithm for evaluating torture vs dust specks must assign BusyBeaver(10) to the torture and BusyBeaver(9) to the dust specks, or something equally outrageously huge (after all, thought, which is what torture screws with, is Turing-complete). 3^^^3 is not a very big number. There are numbers which are big like you wouldn't believe.

edit: also, I think even vastly superhuman entities wouldn't be very good at consequence evaluation, especially from an uncertain starting state. In any case, some sort of morality oracle would have to be able to, at the very least, take in the full specs of a human brain and then spit out an understanding of how to trade off the extreme pain of that individual against a dust speck for that individual (a task which may well end up requiring ultra-long computations, BusyBeaver(1E10) style - forget the puny up-arrow). That's an enormously huge problem which the torture-choosers obviously a) haven't solved and b) didn't even comprehend would be needed. Which brings us to the final point: the utilitarians are people who haven't the slightest clue what it might take to make a utilitarian decision, but are unaware of that deficiency. edit: and also, I would likely take a 1/3^^^3 chance of torture over a dust speck. Why? Because a dust speck may result in an accident leading to decades of torturous existence. The dust speck's own value is still not comparable; it only bothers me because it creates the risk.

edit: note, the Busy Beaver reference is just an example. Before you can operate additively on dust specks and pain, and start doing some utilitarian math there, you have to at least understand how the hell it is that an algorithm can feel pain - what pain is, exactly, in reductionist terms.
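
(For concreteness, a minimal sketch of the up-arrow notation mentioned above; the recursion is the standard Knuth definition, the printed example is mine:)

```python
# Minimal sketch of Knuth's up-arrow notation: up(a, n, b) = a ↑^n b.
# 3^^^3 would be up(3, 3, 3); only tiny arguments actually terminate in practice.
def up(a, n, b):
    if n == 1:
        return a ** b          # one arrow is ordinary exponentiation
    if b == 1:
        return a
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 2, 3))             # 3^^3 = 3^(3^3) = 7,625,597,484,987
# up(3, 3, 3) is a power tower of 3s of that height - far beyond computing here.
```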

Replies from: army1987
comment by A1987dM (army1987) · 2012-06-28T12:39:33.646Z · LW(p) · GW(p)

and also, I would likely take a 1/3^^^3 chance of torture over a dust speck. Why? Because a dust speck may result in an accident leading to decades of torturous existence

IIRC, in the original torture vs specks post EY specified that none of the dust specks would have any long-term consequence.

Replies from: private_messaging
comment by private_messaging · 2012-06-28T12:58:24.566Z · LW(p) · GW(p)

I know. Just wanted to point out where the personal preference (easily demonstrable when people e.g. neglect to take inconvenient safety measures) for a small chance of torture over a definite dust speck comes from.

comment by Strange7 · 2012-09-17T10:32:48.950Z · LW(p) · GW(p)

Another problem with the repugnant conclusion is economic: it assumes that the cost of creating and maintaining additional barely-worth-living people is negligibly small.

comment by orbenn · 2012-06-27T17:19:02.772Z · LW(p) · GW(p)

The problem here seems to be about the theories not taking all things we value into account. It's therefore less certain whether their functions actually match our morals. If you calculate utility using only some of your utility values, you're not going to get the correct result. If you're trying to sum the set {1,2,3,4} but you only use 1, 2 and 4 in the calculation, you're going to get the wrong answer. Outside of special cases like "multiply each item by zero" it doesn't matter whether you add, subtract or divide, the answer will still be wrong. For example the calculations given for total utilitarianism fail to include values for continuity of experience.

This isn't to say that ethics are easy, but we're going to have a devil of a time testing them with impoverished input.

comment by Mass_Driver · 2012-06-26T23:53:31.560Z · LW(p) · GW(p)

It's a good and thoughtful post.

Going through the iteration, there will come a point when the human world is going to lose its last anime, its last opera, its last copy of the Lord of the Rings, its last mathematics, its last online discussion board, its last football game - anything that might cause more-than-appropriate enjoyment. At that stage, would you be entirely sure that the loss was worthwhile, in exchange of a weakly defined "more equal" society?

I wonder if it makes sense to model a separate variable in the global utility function for "culture." In other words, I think the value I place on a hypothetical society runs something like log(sigma[U(x)]) + log(U_c), where x is each individual person's individual utility, and c is the overall cultural level.

A society where a million people each enjoy reading the Lord of the Rings but there are no other books would have high sigma[U(x)] and low U(c); a society where a hundred people each enjoy reading a unique book would have low total U(x) but high U(c).

That would help model the intuition that culture, even in the abstract, is worth trading off against individual happiness. I think I would prefer a Universe in which the Lord of the Rings were encoded into a durable piece of stone but otherwise had nothing else in it to a Universe in which there was a thriving colony of a few hundred cells of plankton but otherwise nothing else in it, even if there were nobody around to read the stone. Many economists would call that irrational -- but like the OP, I reject the premise that my individual utility function for the state of the world has to break down into other people's individual welfare.
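
(A toy sketch of such a valuation; the log-of-sum form mirrors my formula above, and all the numbers are purely illustrative:)

```python
import math

# Toy sketch of the valuation above; the functional form and numbers are
# illustrative assumptions, not a worked-out theory.
def society_value(individual_utilities, cultural_level):
    return math.log(sum(individual_utilities)) + math.log(cultural_level)

# A million people who all enjoy the same single book: high sum U(x), low U(c).
uniform = society_value([1.0] * 1_000_000, cultural_level=1.0)
# A hundred people each enjoying a unique book: low sum U(x), high U(c).
diverse = society_value([1.0] * 100, cultural_level=100_000.0)

print(uniform, diverse)   # the log terms let culture trade off against headcount
```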

Replies from: Nornagest, Lukas_Gloor
comment by Nornagest · 2012-06-27T00:25:09.306Z · LW(p) · GW(p)

I'll accept the intuition, but culture seems even harder to quantify than individual welfare -- and the latter isn't exactly easy. I'm not sure what we should be summing over even in principle to arrive at a function for cultural utility, and I'm definitely not sure if it's separable from individual welfare.

One approach might be to treat cultural artifacts as fractions of identity, an encoding of their creators' thoughts waiting to be run on new hardware. Individually they'd probably have to be considered subsapient (it's hard to imagine any transformation that could produce a thinking being when applied to Lord of the Rings), but they do have the unique quality of being transmissible. That seems to imply a complicated value function based partly on population: a populous world containing Lord of the Rings without its author is probably enriched more than one containing a counterfactual J.R.R. Tolkien that never published a word. I'm not convinced that this added value need be positive, either: consider a world containing one of H.P. Lovecraft's imagined pieces of sanity-destroying literature. Or your own least favorite piece of real-life media, if you're feeling cheeky.

comment by Lukas_Gloor · 2012-06-27T17:35:09.835Z · LW(p) · GW(p)

How about a universe with one planet full of inanimate cultural artifacts of "great artistic value", and, on another planet that's forever unreachable, a few creatures in extreme suffering? If you make the cultural value on the artifact planet high enough, it would seem to justify the suffering on the other planet, and you'd then have to prefer this to an empty universe, or one with insentient plankton. But isn't that absurd? Why should creatures suffer lives not worth living just because somewhere far away are rocks with fancy symbols on them?

Replies from: Mass_Driver
comment by Mass_Driver · 2012-06-28T01:00:49.281Z · LW(p) · GW(p)

Because I like rocks with fancy symbols on them?

I'm uncertain about this; maybe sentient experiences are so sacred that they should be lexically privileged over other things that are desirable or undesirable about a Universe.

But, basically, I don't have any good reason to prefer that you be happy vs. unhappy -- I just note that I reliably get happy when I see happy humans and/or lizards and/or begonias and/or androids, and I reliably get unhappy when I see unhappy things, so I prefer to fill Universes with happy things, all else being equal.

Similarly, I feel happy when I see intricate and beautiful works of culture, and unhappy when I read Twilight. It feels like the same kind of happy as the kind of happy I get from seeing happy people. In both cases, all else being equal, I want to add more of it to the Universe.

Am I missing something? What's the weakest part of this argument?

Replies from: TheOtherDave, Lukas_Gloor
comment by TheOtherDave · 2012-06-28T01:21:43.031Z · LW(p) · GW(p)

So, now I'm curious... if tomorrow you discovered some new thing X you'd never previously experienced, and it turned out that seeing X made you feel happier than anything else (including seeing happy things and intricate works of culture), would you immediately prefer to fill Universes with X?

Replies from: Mass_Driver
comment by Mass_Driver · 2012-06-28T04:07:04.318Z · LW(p) · GW(p)

I should clarify that by "fill" I don't mean "tile." I'm operating from the point of view where my species' preferences, let alone my preferences, fill less than 1 part in 100,000 of the resource-rich volume of known space, let alone theoretically available space. If that ever changed, I'd have to think carefully about what things were worth doing on a galactic scale. It's like the difference between decorating your bedroom and laying out the city streets for downtown -- if you like puce, that's a good enough reason to paint your bedroom puce, but you should probably think carefully before you go influencing large or public areas.

If some new thing made me incredibly happy, I would also wonder whether it was designed to do that by someone or something that isn't very friendly toward me. I would suspect a trap. I'd want to take appropriate precautions to rule out that possibility.

With those two disclaimers, though, yes. If I discovered fnord tomorrow and fnord made me indescribably happy, then I'd suddenly want to put a few billion fnords in the Sirius Sector.

Replies from: Lukas_Gloor, TheOtherDave
comment by Lukas_Gloor · 2012-06-28T14:41:47.671Z · LW(p) · GW(p)

I'm operating from the point of view where my species' preferences, let alone my preferences, fill less than 1 part in 100,000 of the resource-rich volume of known space, let alone theoretically available space.

Do you think the preferences of your species matter more than the preferences of some other species, e.g. intelligent aliens? I think that couldn't be justified. I'm currently working on a LW article about that.

Replies from: Mass_Driver
comment by Mass_Driver · 2012-06-29T00:05:14.509Z · LW(p) · GW(p)

I haven't thought much about it! I look forward to reading your article.

My point above was simply that even if my whole species acted like me, there would still be plenty of room left in the Universe for a diversity of goods. Barring a truly epic FOOM, the things humans do in the near future aren't going to directly starve other civilizations out of a chance to get the things they want. That makes me feel better about going after the things I want.

comment by TheOtherDave · 2012-06-28T13:34:10.386Z · LW(p) · GW(p)

(nods) Makes sense.
If I offered to, and had the ability to, alter your brain so that something that already existed in vast quantities -- say, hydrogen atoms -- made you indescribably happy, and you had taken appropriate precautions to rule out the possibility that I was unfriendly towards you or that this was a trap, would you agree?

Replies from: Mass_Driver
comment by Mass_Driver · 2012-06-29T00:02:06.947Z · LW(p) · GW(p)

Sure! That sounds great. Thank you. :-)

comment by Lukas_Gloor · 2012-06-28T02:46:05.843Z · LW(p) · GW(p)

I think it's a category error to see ethics as only being about what one likes (even if that involves some work getting rid of obvious contradictions). In such a case, doing ethics would just be descriptive; it would tell us nothing new, and the outcome would be whatever evolution arbitrarily equipped us with. Surely that's not satisfying! If evolution had equipped us with a strong preference to generate paperclips, should our ethicists then be debating how best to fill the universe with paperclips? Rather, we should be trying to come up with better reasons than mere intuitions and preferences arbitrarily shaped by blind evolution.

If there was no suffering and no happiness, I might agree with ethics just being about whatever you like, and I'd add that one might as well change what one likes and do whatever, since nothing then truly mattered. But it's a fact that suffering is intrinsically awful, in the only way something can be, for some first person point of view. Of pain, one can only want one thing: That it stops. I know this about my pain as certainly as I know anything. And just because some other being's pain is at another spatio-temporal location doesn't change that. If I have to find good reasons for the things I want to do in life, there's nothing that makes even remotely as much sense as trying to minimize suffering. Especially if you add that caring about my future suffering might not be more rational than caring about all future suffering, as some views on personal identity imply.

Replies from: Mass_Driver, TheOtherDave
comment by Mass_Driver · 2012-06-28T04:18:46.221Z · LW(p) · GW(p)

In such a case, doing ethics would just be descriptive, it would tell us nothing new, and the outcome would be whatever evolution arbitrarily equipped us with

I used to worry about that a lot, and then AndrewCritch explained at minicamp that the statement "I should do X" can mean "I want to want to do X." In other words, I currently prefer to eat industrially raised chicken sometimes. It is a cold hard fact that I will frequently go to a restaurant that primarily serves torture-products, give them some money so that they can torture some more chickens, and then put the dead tortured chicken in my mouth. I wish I didn't prefer to do that. I want to eat Subway footlongs, but I shouldn't eat Subway footlongs. I aspire not to want to eat them in the future.

Also check out the Sequences article "Thou Art Godshatter." Basically, we want any number of things that have only the most tenuous ties to evolutionary drives. Evolution may have equipped me with an interest in breasts, but it surely is indifferent to whether the lace on a girlfriend's bra is dyed aquamarine and woven into a series of cardioids or dyed magenta and woven into a series of sinusoidal spirals -- whereas I have a distinct preference. Eliezer explains it better than I do.

I'm not sure "intriniscally awful" means anything interesting. I mean, if you define suffering as an experience E had by person P such that P finds E awful, then, sure, suffering is intrinsically awful. But if you don't define suffering that way, then there are at least some beings that won't find a given E awful.

comment by TheOtherDave · 2012-06-28T02:58:31.131Z · LW(p) · GW(p)

(shrug) I agree that suffering is bad.
It doesn't follow that the only thing that matters is reducing suffering.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2012-06-28T03:08:58.954Z · LW(p) · GW(p)

But suffering is bad no matter your basic preference architecture. It takes the arbitrariness out of ethics when it's applicable to all of them. Suffering is bad (for the first-person point of view experiencing it) in all hypothetical universes. Well, by definition. Culture isn't. Biological complexity isn't. Biodiversity isn't.

Even if it's not all that matters, it's a good place to start. And a good way to see whether something else really matters too is to ask whether you'd be willing to trade a huge amount of suffering for whatever else you consider to matter, all else being equal (as I did in the example about the planet full of artifacts).

Replies from: TheOtherDave
comment by TheOtherDave · 2012-06-28T03:50:15.982Z · LW(p) · GW(p)

Yes, basically everyone agrees that suffering is bad, and reducing suffering is valuable. Agreed.

And as you say, for most people there are things that they'd accept an increase in suffering for, which suggests that there are also other valuable things in the world.

The idea of using suffering-reduction as a commensurable common currency for all other values is an intriguing one, though.

comment by Richard_Kennaway · 2012-06-26T12:27:40.538Z · LW(p) · GW(p)

For instance, it seems that there is only a small difference between the happiness of richer nations and poorer nations

What is happiness? If happiness is the "utility" that people maximise (is it?), and the richer are only slightly happier than the poorer (cite?), why is it that when people have the opportunity to vote with their feet, people in poor nations flock to richer nations whenever they can, and do not want to return?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2012-06-26T18:00:51.135Z · LW(p) · GW(p)

There's a variety of good literature on the subject (one key component is that people are abysmally bad at estimating their future levels of happiness). There are always uncertainties in defining happiness (as with anything), but there's a clear consensus that whatever is making people move countries, actual happiness levels are not it.

(now, expected happiness levels might be it; or, more simply, that people want a lot of things, and that happiness is just one of them)

comment by Shmi (shminux) · 2012-06-25T17:08:47.221Z · LW(p) · GW(p)

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness. In fact if one can kill a billion people to create a billion and one, one is morally compelled to do so.

I dare to say that no self-professed "total utilitarian" actually aliefs this.

Replies from: Lukas_Gloor, jkaufman
comment by Lukas_Gloor · 2012-06-25T20:02:38.264Z · LW(p) · GW(p)

I know total utilitarians who'd have no problem with that. Imagine simulated minds instead of carbon-based ones. If you can just imagine shutting one simulation off and turning on another one, this can eliminate some of our intuitive aversions to killing and maybe it will make the conclusion less counterintuitive. Personally I'm not a total utilitarian, but I don't think that's a particularly problematic aspect of it.

My problem with total hedonistic utilitarianism is the following: Imagine a planet full of beings living in terrible suffering. You have the choice to either euthanize them all (or just make them happy), or let them go on living forever, while also creating a sufficiently huge number of beings with lives barely worth living somewhere else. Now that I find unacceptable. I don't think you do anything good by bringing a happy being into existence.

Replies from: Dolores1984, Will_Sawin
comment by Dolores1984 · 2012-06-26T23:56:46.211Z · LW(p) · GW(p)

If you can just imagine shutting one simulation off and turning on another one, this can eliminate some of our intuitive aversions to killing and maybe it will make the conclusion less counterintuitive. Personally I'm not a total utilitarian, but I don't think that's a particularly problematic aspect of it.

As someone who plans on uploading eventually, if the technology comes around... no. Still feels like murder.

comment by Will_Sawin · 2012-06-26T22:16:18.953Z · LW(p) · GW(p)

This is problematic. If bringing a happy being into existence doesn't do anything good, and bringing a neutral being into existence doesn't do anything bad, what do you do when you switch a planned neutral being for a planned happy being? For instance, you set aside some money to fund your unborn child's education at the College of Actually Useful Skills.

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2012-06-26T22:36:03.184Z · LW(p) · GW(p)

Good catch, I'm well aware of that. I didn't say that I think bringing a neutral being into existence is neutral. If the neutral being's life contains suffering, then the suffering counts negatively. Prior-existence views seem to not work without the inconsistency you pointed out. The only consistent alternative to total utilitarianism is, as I see it currently, negative utilitarianism. Which has its own repugnant conclusions (e.g. anti-natalism), but for several reasons I find those easier to accept.

Replies from: Stuart_Armstrong, Will_Sawin
comment by Stuart_Armstrong · 2012-06-27T08:52:28.614Z · LW(p) · GW(p)

The only consistent alternative to total utilitarianism is, as I see it currently, negative utilitarianism

As I said, any preferences that can be cast into utility function form are consistent. You seem to be adding extra requirements for this "consistency".

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2012-06-27T11:57:18.942Z · LW(p) · GW(p)

I should qualify my statement. I was talking only about the common varieties of utilitarianism and I may well have omitted consistent variants that are unpopular or weird (e.g. something like negative average preference-utilitarianism). Basically my point was that "hybrid-views" like prior-existence (or "critical level" negative utilitarianism) run into contradictions. Most forms of average utilitarianism aren't contradictory, but they imply an obvious absurdity: A world with one being in maximum suffering would be [edit:] worse than a world with a billion beings in suffering that's just slightly less awful.

Replies from: APMason, Stuart_Armstrong
comment by APMason · 2012-06-27T13:07:58.919Z · LW(p) · GW(p)

That last sentence didn't make sense to me when I first looked at this. Think you must mean "worse", not "better".

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2012-06-27T14:11:47.298Z · LW(p) · GW(p)

Indeed, thanks.

comment by Stuart_Armstrong · 2012-06-27T12:28:29.775Z · LW(p) · GW(p)

I'm still vague on what you mean by "contradictions".

Replies from: Lukas_Gloor
comment by Lukas_Gloor · 2012-06-27T14:10:10.349Z · LW(p) · GW(p)

Not in the formal sense. I meant for instance what Will_Sawin pointed out above: a neutral life (a lot of suffering and a lot of happiness) being equally worth creating as a happy one (mainly just happiness, very little suffering). Or for "critical levels" (which also refers to the infamous dust specks), see section VI of this paper, where you get different results depending on how you start aggregating. And Peter Singer's prior-existence view seems to contain a "contradiction" (maybe "absurdity" is better) as well, having to do with replaceability, but that would take me a while to explain. It's not quite a contradiction in that the theory states "do X and not-X", but it's obvious enough that something doesn't add up. I hope that led to some clarification; sorry for my terminology.

comment by Will_Sawin · 2012-06-26T22:38:55.571Z · LW(p) · GW(p)

Ah, I see. Anti-natalism is certainly consistent, though I find it even more repugnant.

comment by jefftk (jkaufman) · 2012-06-26T03:06:48.951Z · LW(p) · GW(p)

Assuming perfection in the methods, ending N lives and replacing them with N+1 equally happy lives doesn't bother me. Death isn't positive or negative except in as much as it removes the chance of future joy/suffering by the one killed and saddens those left behind.

With physical humans you won't have perfect methods and any attempt to apply this will end in tragedy. But with AIs (emulated brains or fully artificial) it might well apply.

comment by private_messaging · 2012-06-28T10:40:41.897Z · LW(p) · GW(p)

A more general problem with utilitarianisms, including those that evade the critique in that article:

Suppose we have a computer running a brain sim (along with a VR environment). The brain sim works as follows: given the current state, the next state is calculated (using multiple CPUs in parallel); the current state is read-only, the next state is write-only. Think arrays of synaptic values. After all of the next state has been calculated, the arrays are switched and the old state data is overwritten. This is a reductionist model of 'living' that is rather easy to think about. Suppose that this being is reasonably happy.
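
(A minimal sketch of that double-buffered update; the state array and update rule are arbitrary placeholders:)

```python
# Minimal sketch of the double-buffered update described above; the "brain" is
# a stand-in array and the update rule is an arbitrary placeholder.
current = [0.1, 0.5, 0.9]            # read-only during a step (synaptic values)
nxt     = [0.0] * len(current)       # write-only during a step

def step(state):
    # placeholder update rule standing in for the real brain dynamics
    return [min(1.0, v * 1.01) for v in state]

for _ in range(3):
    nxt = step(current)              # compute the next state from the current state
    current, nxt = nxt, current      # switch arrays; the old state gets overwritten next step

print(current)
```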

We really want to sacrifice the old state for the sake of the new state. If we are to do so based on maximizing utility (rather than seeing the update as a virtue in its own right), the utility of the new state data has to be greater than the utility of the current state data. The utility has to keep rising with each simulator step. That's clearly not what anyone expects the utility to do. And it clearly has a lot of problems; e.g. when you have multiple brain sims, face a risk of hardware failure, and may want to erase some sim to use the freed-up memory as backup for some much older sim (whose utility grew over time to a larger value).

I'm very unconvinced that there even exists any 'utilitarian' solution here. If you want to maximize some metric over experience-moments that ever happen, then you need to keep track of the experience-moments that have already happened, to avoid re-doing them (you don't want to be looping sims over some happy moment). And it is still entirely immoral, because you are going to want to destroy everything and create utilitronium.

Replies from: torekp, private_messaging
comment by torekp · 2012-06-30T00:03:32.111Z · LW(p) · GW(p)

Why assume that utility is a function of individual states in this model, rather than processes? Can't a utilitarian deny that instantaneous states, considered apart from context, have any utility?

Replies from: private_messaging
comment by private_messaging · 2012-06-30T06:09:47.679Z · LW(p) · GW(p)

What is "processes" ? What's about not switching state data in above example? (You keep re-calculating same state from previous state; if it's calculation of the next state that is the process then the process is all right)

Also, at that point you aren't rescuing utilitarianism; you're moving to some sort of virtue ethics where particular changes are virtuous on their own.

The bottom line is, if you don't define what a process is, then you just plug in something undefined through which our intuitions can pour in and make it look all right, even if the concept is still fundamentally flawed.

We want to overwrite the old state with the new state. But we would like to preserve the old state in a backup if we had unlimited memory. It thus follows that there is a tradeoff decision between the worth of the old state, the worth of the new state, and the cost of the backup. You can proclaim that instantaneous states considered apart from context don't have any utility. Okay, you have whatever context you want; now what are the utilities of the states and of the backup, so that we can decide when to do the backup? How often to do the backup? Decide on the optimal clock rate? etc.

Replies from: torekp
comment by torekp · 2012-07-01T15:20:07.298Z · LW(p) · GW(p)

A process, at a minimum, takes some time (dt > 0). Calculating the next state from previous state would be a process. If you make backups, you could also make additional calculation processes working from those backed-up states. Does that count as "creating more people"? That's a disputed philosophy of mind question on which reasonable utilitarians might differ, just like anyone else. But if they do say that it creates more people, then we just have yet another weird population ethics question. No more and no less a problem for utilitarianism than the standard population ethics questions, as far as I can see. Nothing follows about each individual's life having to have ever-increasing utility lest putting that person in stasis be considered better.

comment by private_messaging · 2012-06-29T17:39:15.138Z · LW(p) · GW(p)

I would actually be very curious how 'utilitarianism' could be rescued from this. Any ideas?

I don't believe direct utilitarianism works as a foundation; declaring that intelligence is about maximizing 'utility' just trades one thing (intelligence) that has not been reduced to elementary operations, but which we at least have good reason to believe should be reducible (we are intelligent, and the laws of physics are, in the relevant approximation, computable), for something ("utility") that not only hasn't been shown to be reducible but for which we have no good reason to think it is reducible or works on reductionist models (observe how there's suddenly a problem with the utility of life once I consider a mind upload simulated in a very straightforward way; observe how the number of paperclips in the universe is impossible, or incredibly difficult, to define as a mathematical function).

edit: Note: the model-based utility-based agent does not have a real-world utility function, and as such, no matter how awesomely powerful the solver it uses to find maxima of mathematical functions, it won't ever care if its output gets disconnected from the actuators, unless such a condition was explicitly included in the model; furthermore, it will break itself if the model includes itself and it is to modify the model, once again no matter how powerful its solver is. The utility is defined within a very specific non-reductionist model where e.g. a paperclip is a high-level object, and 'improving' the model (e.g. finding out that a paperclip is in fact made of atoms) breaks the utility measurement (it was never defined how to recognize when those atoms/quarks/whatever novel physics the intelligence came up with constitute a paperclip). This is not a deficiency when it comes to solving practical problems other than 'how do we destroy mankind by accident'.

comment by Desrtopa · 2012-06-26T06:24:41.344Z · LW(p) · GW(p)

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness. In fact if one can kill a billion people to create a billion and one, one is morally compelled to do so. And this is true for real people, not just thought experiment people - living people with dreams, aspirations, grudges and annoying or endearing quirks.

Keep in mind that the people being brought into existence will be equally real people, with dreams, aspirations, grudges, and annoying or endearing quirks. If the people being killed had any more of what you value overall, then it wouldn't be a utility neutral act.

Imagine that a billion people are annihilated from existence, and replaced with exact copies who're indistinguishable in any way. Don't judge a person's plan to execute this, since that could entail some sort of mistake; suppose that this simply happens, so that we must judge it purely by its results. Do you think that this would be a bad thing?

If not, then presumably it's not the destruction and replacement you're objecting to in and of itself; rather, you're implicitly assuming a higher utility value for the people who're destroyed than for those who're created, or some chance of an outcome other than perfect replacement of all the people with equal-utility people.

comment by Nornagest · 2012-06-26T04:12:13.198Z · LW(p) · GW(p)

An argument that I have met occasionally is that while other ethical theories such as average utilitarianism, birth-death asymmetry, path dependence, preferences of non-loss of culture, etc... may have some validity, total utilitarianism wins as the population increases because the others don't scale in the same way. By the time we reach the trillion trillion trillion mark, total utilitarianism will completely dominate, even if we gave it little weight at the beginning.

I'll admit I haven't encountered this argument before, but to me it looks like a type error. As you note, average utilitarianism counts something quite different from total utilitarianism; observers might (correctly) note that the latter can spit out much larger numbers than the former under some circumstances, but those values are unrelated abstractions, not quantities commensurate with each other or with those of other ethical theories, absent a quantifying theory of metaethics that we don't have. It's like dividing seven by cucumber. I'd argue that the normalization process you suggest doesn't make much sense either, though; many utilitarianisms don't have well-defined upper bounds (why stop at a quadrillion?), and some don't have well-defined lower bounds (a life not worth living might be counted as a negative contribution).

Insofar as ethical theories are models of our ethical intuitions, I can see an argument for normalizing against people's subjective satisfaction with a world-state, which is almost certainly a finite range and therefore implies some kind of diminishing returns or dynamic rather than static evaluation of state changes. But I can see arguments against this, too; in particular, it doesn't make any sense if you're trying to make a universalizable theory of ethics (which has its own problems, but it has been tried). The hedonic treadmill also raises issues.

comment by private_messaging · 2012-06-25T23:30:18.348Z · LW(p) · GW(p)

I like that article. I wrote something on another problem with utilitarianism.

Also, by the way, regarding the use of the name of Bayes: you really should thoroughly understand this paper and also get some practice solving belief propagation approximately on not-so-small networks full of loops and cycles (or any roughly isomorphic problem), to form an opinion on self-described Bayesianists.

comment by Gust · 2012-07-19T01:53:32.799Z · LW(p) · GW(p)

And the sum itself is a huge problem. There is no natural scale on which to compare utility functions. Divide one utility function by a billion, multiply the other by eπ, and they are still perfectly valid utility functions. In a study group at the FHI, we've been looking at various ways of combining utility functions - equivalently, of doing interpersonal utility comparisons (IUC). Turns out it's very hard, there seems no natural way of doing this, and a lot has also been written about this, concluding little. Unless your theory comes with a particular IUC method, the only way of summing these utilities is to do an essentially arbitrary choice for each individual before summing. Thus standard total utilitarianism is an arbitrary sum of ill defined, non-natural objects.

This interests me. Do you have any literature I should read on this topic?

comment by timtyler · 2012-07-07T10:09:23.076Z · LW(p) · GW(p)

This one left me wondering - is "population ethics" any different from "politics"?

Replies from: Danfly
comment by Danfly · 2012-07-07T11:08:14.627Z · LW(p) · GW(p)

Interesting point, but I would say there are areas of politics that don't really come under "ethics". "What is currently the largest political party in the USA?" is a question about politics and demographics, but I wouldn't call it a question of population ethics. I'd say that you could probably put anything from the subset of "population ethics" into the broad umbrella of "politics" though.

comment by timtyler · 2012-07-07T09:57:41.564Z · LW(p) · GW(p)

In total utilitarianism, it is a morally neutral act to kill someone (in a painless and unexpected manner) and creating/giving birth to another being of comparable happiness (or preference satisfaction or welfare).

Other members of society typically fail to approve of murder, and would apply sanctions to the utilitarian - probably hindering them in their pursuit of total utility. So, in practice, a human being pursuing total utilitarianism would simply not act in this way.

comment by drnickbone · 2012-06-27T13:21:02.751Z · LW(p) · GW(p)

Good article! Here are a few related questions:

  1. The problem of comparing different people's utility functions applies to average utilitarianism as well, doesn't it? For instance, if your utility function is U and my utility function is V, then the average could be (U + V)/2; however, utility functions can be rescaled by any linear function, so let's make mine 1000000 x V. Now the average is U/2 + 500000 x V, which seems totally fair, doesn't it? Is the right solution here to assume that each person's utility has a "best possible" case and a "worst possible" case, and to rescale, assigning 1 to each person's best case and 0 to their worst (see the sketch after this list)? That works fine if people have bounded utility, which we apparently do (it's one reason we don't fall for Pascal's muggings).

  2. It's true that no-one optimises utility perfectly, but even animals, plants and bacteria have an identifiable utility function (inclusive fitness), which they optimise pretty well. Why shouldn't people? And, to first approximation, why wouldn't a human's utility function also be inclusive fitness? (We can add other approximations as necessary, e.g. some sort of fitness function for culture or memes.)

  3. Do you think utility functions should be defined over "worlds" or "states"? Decision theory only requires worlds, but consequentialism seems to require states. For instance if each world w consists of a sequence of states indexed by time t, then a consequentialist utility function applied to a whole world would look like U(w) = Sum d(t) x u(s(t)) where d(t) is the discount factor, and u is the utility function applied to states. Deontologists would have a completely different sort of U, but they are not immediately irrational because of that. (Seems they can still be consistent with formal decision theory.)

  4. Looking at your paper on Anthropic Decision Theory, what do you think will happen if we adopt a compromise utility function somewhere between average and total utility, much as you suggest? Is the result more like SIA or SSA? Does it contain some of the strengths of each while avoiding their weaknesses? (It strikes me that the result is more like SSA, since you are avoiding the "large" utilities from total utility dominating the calculation, but I haven't tried to do the math, and wondered if you already had...)

  5. Do you have views on "rule" versus "act" utilitarianism? It seems to me that advanced decision theories like TDT, UDT or ADT are already invoking a form of rule utilitarianism, right? Further that rule utilitarianism is a better "model" for our moral judgements than act utilitarianism.
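
(On point 1: a minimal sketch of that best/worst-case rescaling in Python, with made-up numbers as purely illustrative assumptions. It only shows that, once each person's utility is pinned to their own best and worst cases, multiplying someone's raw scale by a large constant no longer changes the result.)

```python
def normalise(utility, worst, best):
    """Rescale a bounded utility so the worst possible case maps to 0 and the best to 1."""
    return (utility - worst) / (best - worst)

# Two people whose raw utilities live on wildly different scales.
alice = {"utility": 3.0, "worst": 0.0, "best": 10.0}
bob = {"utility": 700000.0, "worst": -1e6, "best": 1e6}      # V after being multiplied by 1000000
bob_unscaled = {"utility": 0.7, "worst": -1.0, "best": 1.0}  # the same V before rescaling

print(normalise(**bob), normalise(**bob_unscaled))  # 0.85 0.85 -- linear rescaling changes nothing

average = (normalise(**alice) + normalise(**bob)) / 2
print(average)  # 0.575
```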

comment by hairyfigment · 2012-06-26T20:35:06.492Z · LW(p) · GW(p)

I think I agree with your conclusion. But this:

to increase utility we should simply kill off all the rich, and let the poor multiply to take their place (continually bumping off any of the poor that gets too rich).

requires you to assume that the US or "the rich" have no relevant chance of producing vastly happier people in the future. This seems stronger than denying the singularity as such. And it makes targeted killing feel much more attractive to this misanthrope.

comment by prase · 2012-06-25T18:32:43.077Z · LW(p) · GW(p)

Only a slightly relevant question which nevertheless I haven't yet seen addressed: If a utilitarian desires to maximise other people's utilities and the other people are utilitarians themselves, also deriving their utility from the utilities of others (the original utilitarian included), doesn't that make utilitarianism impossible to define? The consensus seems to be that one can't take one's own mental states for argument of one's own utility function. But utilitarians rarely object to plugging others' mental states into their utility functions, so the danger of circularity isn't avoided. Is there some clever solution to this?

Replies from: novalis, mwengler
comment by novalis · 2012-06-25T18:39:37.060Z · LW(p) · GW(p)

No, because a utilitarianism does not specify a utilitarian's desires; it specifies what they consider moral. There are lots of things we desire to do that aren't moral, and that we choose not to do because they are not moral.

Replies from: prase
comment by prase · 2012-06-25T19:49:54.426Z · LW(p) · GW(p)

I believe this doesn't answer my question; I will reformulate the problem in order to remove potentially problematic words and make it more specific:

Let the world contain at least two persons, P1 and P2, with utility functions U1 and U2. Both are traditional utilitarians: they value the happiness of the other. Assume that U1 is a sum of two terms: H2 + u1(X), where H2 is some measure of the happiness of P2, u1(X) represents P1's utility unrelated to P2's happiness, and X is the state of the rest of the world; similarly U2 = H1 + u2(X). (H1 and H2 are monotonic functions of happiness but not necessarily linear - whatever that would even mean - so having U as a linear function of H is still quite general.)

Also, as for most people, the happiness of the model utilitarians is correlated with their utility. Let's again assume that happiness decomposes into a sum of independent terms, such that H1 = h1(U1) + w1(X), where w1 contains all non-utility sources of happiness and h1(.) is an increasing function; similarly for the second agent.

So we have:

  • U1 = h2(U2) + w2(X) + u1(X)
  • U2 = h1(U1) + w1(X) + u2(X)

Whether this does or doesn't have a solution (for U1 and U2) depends on the details of h1, h2, u1, u2, w1, w2 and X. But what I say is that the system of equations is a direct analogue of the forbidden

  • U = h(U) + u(X)

i.e. when one's utility function takes itself for an argument.
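
(To make the dependence on h1 and h2 concrete, here is a toy numerical check in Python with linear h's; the slopes and constants are arbitrary assumptions chosen only for illustration.)

```python
def iterate(a1, a2, c1, c2, n=50):
    """Fixed-point iteration on the pair U1 = a2*U2 + c1, U2 = a1*U1 + c2,
    i.e. linear h1, h2 with slopes a1, a2; c1, c2 lump together the X-dependent terms."""
    u1 = u2 = 0.0
    for _ in range(n):
        u1, u2 = a2 * u2 + c1, a1 * u1 + c2
    return u1, u2

print(iterate(0.5, 0.5, 1.0, 1.0))  # settles at the unique solution (2.0, 2.0)
print(iterate(1.0, 1.0, 1.0, 1.0))  # diverges: U1 = U2 + 1 and U2 = U1 + 1 have no solution at all
```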

Replies from: endoself, novalis
comment by endoself · 2012-06-25T20:39:57.504Z · LW(p) · GW(p)

Also, as for most people, the happiness of the model utilitarians is correlated with their utility.

This is untrue in general. I would prefer that someone who I am unaware of be happy, but it cannot make me happier since I am unaware of that person. In general, it is important to draw a distinction between the concept of a utility function, which describes decisions being made, and that of a hedonic function, which describes happiness, or, if you are not purely a hedonic utilitarian, whatever functions describe other things that are mentioned in, but not identical to, your utility function.

Replies from: prase, mwengler
comment by prase · 2012-06-25T23:38:24.741Z · LW(p) · GW(p)

Yes, I may not know the exact value of my utility since I don't know the value of every argument it takes, and yes, there are consequently changes in utility which aren't accompanied by corresponding changes in happiness, but no, this doesn't mean that utility and happiness aren't correlated. Your comment would be a valid objection to the relevance of my original question only if happiness and utility were strictly isolated and independent of each other, which, for most people, isn't the case.

Also, this whole issue could be sidestepped if the utility function of the first agent had the utility of the second agent as argument directly, without the intermediation of happiness. I am not sure, however, whether standard utilitarianism allows caring about other agent's utilities.

comment by mwengler · 2012-07-05T15:34:19.482Z · LW(p) · GW(p)

There may be many people whose utility you are not aware of, but there are also many people whose utility you are aware of, and whose utility you can affect with your actions. I think prase's points are quite interesting just considering the ones in your awareness / sphere of influence.

Replies from: endoself
comment by endoself · 2012-07-06T02:25:27.679Z · LW(p) · GW(p)

I'm not sure exactly why prase disagrees with me - I can think of many mutually exclusive reasons that it would take a while to write out individually - but since two people have now responded I guess I should ask for clarification. Why is the scenario described impossible?

comment by novalis · 2012-06-26T02:54:22.078Z · LW(p) · GW(p)

Here's another way to look at it:

Imagine that everyone starts at time t1 with some level of utility, U[n]. Now, they generate a utility based on their beliefs about the sum of everyone else's utility (at time t1). Then they update by adding some function of that summed (averaged, whatever) utility to their own happiness. Let's assume that function is some variant of the sigmoid function. This is actually probably not too far off from reality. Now we know that the maximum happiness (from the utility of others) that a person can have is one (and the minimum is negative one). And assuming that most people's base level of happiness is somewhat larger than the effect of utility, this is going to be a reasonably stable system.

This is a much more reasonable model, since we live in a time-varying world, and our beliefs about that world change over time as we gain more information.
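
(A minimal sketch of that dynamic in Python, assuming tanh as the sigmoid variant and arbitrary base happiness levels; it just shows the bounded social term keeping the system stable.)

```python
import math

def update(utilities, base):
    """One round: each person adds a bounded (sigmoid) function of the sum of
    everyone else's utility at the previous time step to their own base level."""
    total = sum(utilities)
    return [b + math.tanh(total - u) for b, u in zip(base, utilities)]

base = [2.0, 3.0, 1.5]       # base happiness, assumed larger than the +/-1 social term
utilities = list(base)
for _ in range(100):
    utilities = update(utilities, base)
print([round(u, 3) for u in utilities])  # settles near base + 1, since everyone else's total stays positive
```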

Replies from: prase
comment by prase · 2012-06-26T16:39:23.061Z · LW(p) · GW(p)

When information propagates fast relative to the rate of change of external conditions, the dynamic model converges to the stable point which would be the solution of the static model - are the models really different in any important aspect?

Instability is indeed eliminated by use of sigmoid functions, but then the utility gained from happiness (of others) is bounded. Bounded utility functions solve many problems, the "repugnant conclusion" of the OP included, but some prominent LWers object to their use, pointing out scope insensitivity. (I have personally no problems with bounded utilities.)

Replies from: novalis
comment by novalis · 2012-06-26T17:46:51.943Z · LW(p) · GW(p)

Utility functions need not be bounded, so long as their contribution to happiness is bounded.

comment by mwengler · 2012-07-05T15:28:59.469Z · LW(p) · GW(p)

I think you are on to something brilliant here. The thing that is new to me in your question is the recursive aspect of utilitarianism. If a theory of morality says the moral thing to do is to maximize utility, then clearly maximizing utility is itself a thing that has utility.

From here, in an engineering sense, you'd have at least two different places you could go. A sort of naive place to go would be to try to have each person maximize total utility independently of what others are doing, noting that other people's utility summed up is much larger than one's own utility. Then, to a very large extent, your behavior will be driven by maximizing other people's utility. In a naive design involving, say, 100 utilitarians, one would be "over-driving" the system by ~100x, if each utilitarian were separately calculating everybody else's utility and trying to maximize it. In some sense, it would be like a feedback system with way too much gain: 99 people all trying to maximize your utility.

An alternative place to go would be to say that utility is a meta-ethical consideration: that an ethical system should have the property that it maximizes total utility. But then, from engineering considerations, you would expect 1) that there would be lots of different rule systems that come close to maximizing utility, and 2) that among the simplest and most effective would be to have each agent maximize its own utility under the constraint of rules designed to get rid of anti-synergistic effects and to enhance synergistic effects. So you would expect contract law, anti-fraud law, laws against bad externalities, and laws requiring participation in good externalities. But in terms of "feedback," each agent in the system would be actively adjusting to maximize its own utility within the constraints of the rules.

This might be called rule-utilitarianism, but really I think it is a hybrid of rule utilitarianism and justified selfishness (Rand's egoism? Economics "homo economicus" rational utility maximizer?). It is a hybrid because you don't ONLY have rules which maximize utility, and you don't ONLY have maximizing individual utility as the moral rule.

comment by timtyler · 2012-07-07T10:02:38.982Z · LW(p) · GW(p)

Why then is it so popular? Well, one reason is that there are models that make use of something like total utilitarianism to great effect. Classical economic theory, for instance, models everyone as perfectly rational expected utility maximisers.

Surely that is not the reason. Firstly, utilitarianism is not that popular. My theory about why it has any adherents at all is that it is used for signalling purposes. One use of moral systems is to broadcast what a nice person you are. Utilitarianism is a super-unselfish moral system. So, those looking for a niceness superstimulus are attracted. I think this pretty neatly explains the 'utilitarianism' demographics.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-09-17T06:24:21.380Z · LW(p) · GW(p)

I don't know if you mean to come across this way, but the way you have written this makes it sound like you think utilitarians are cynically pretending to believe in utilitarianism to look good to others, but don't really believe it in their heart of hearts. I don't think this is true in most cases, I think utilitarians are usually sincere, and most failures to live up to their beliefs can be explained by akrasia.

If you want a plausible theory as to how natural selection could produce sincere altruism, look at it from a game-theoretic perspective. People who could plausibly signal altruism and trustworthiness would get huge evolutionary gains because they could attract trading partners more easily. One of the more effective ways to signal that you possess a trait is to actually possess it. One of the most effective ways to signal you are altruistic and trustworthy is to actually be altruistic and trustworthy. So it's plausible that humans evolved to be genuinely nice, trustworthy, and altruistic, probably because the evolutionary gains from getting trade partners to trust them outweighed the evolutionary losses from sacrificing for others. Akrasia can be seen as an evolved mechanism that sabotages our altruism in an ego-dystonic way, so that we can truthfully say we're altruists without making maladaptive sacrifices for others.

Of course, the fact that our altruistic tendencies may have evolved from genetically selfish reasons gives us zero reason to behave in a selfish fashion today, except possibly as a means to prevent natural selection from removing altruism from existence. We are not our genes.

Replies from: CarlShulman, Strange7, timtyler
comment by CarlShulman · 2012-09-17T08:08:40.401Z · LW(p) · GW(p)

I think utilitarians are usually sincere, and most failures to live up to their beliefs can be explained by akrasia.

If all you mean by "sincere" is not explicitly thinking of something as deceptive, that seems right to me, but if "sincere" is supposed to mean "thoughts and actions can be well-predicted by utilitarianism" I disagree. Utilitarian arguments get selectively invoked and special exceptions made in response to typical moral sentiments, political alignments, personal and tribal loyalties, and so forth.

I would say similar things about religious accounts of morality. Many people claim to buy Christian or Muslim or Buddhist ethics, but the explanatory power coming from these, as opposed to other cultural, local, and personal factors, seems limited.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-09-17T09:56:55.459Z · LW(p) · GW(p)

If all you mean by "sincere" is not explicitly thinking of something as deceptive, that seems right to me, but if "sincere" is supposed to mean "thoughts and actions can be well-predicted by utilitarianism" I disagree.

I was focused more on the first meaning of "sincere." I think that utilitarians' abstract "far mode" ethical beliefs and thoughts are generally fairly well predicted by utilitarianism, but their "near mode" behaviors are not. I think that self-deception and akrasia are the main reasons there is such dissonance between their beliefs and behavior.

I think a good analogy is belief in probability theory. I believe that doing probability calculations, and paying attention to the calculations of others, is the best way to determine the likelihood of something. Sometimes my behavior reflects this, I don't buy lottery tickets for instance. But other times it does not. For example, I behave more cautiously when I'm out walking if I have recently read a vivid description of a crime, even if said crime occurred decades ago, or is fictional. I worry more about diseases with creepy symptoms than I do about heart disease. But I think I do sincerely "believe" in probability theory in some sense, even though it doesn't always affect my behavior.

comment by Strange7 · 2012-09-17T09:04:55.143Z · LW(p) · GW(p)

One of the most effective ways to signal you are altruistic and trustworthy is to actually be altruistic and trustworthy.

I agree with the main thrust of the argument, but such signaling would only apply to potential trading partners to whom you make a habit of speaking openly and honestly about your motives, or who are unusually clever and perceptive, or both.

comment by timtyler · 2012-09-17T10:14:29.439Z · LW(p) · GW(p)

If you want a plausible theory as to how natural selection could produce sincere altruism, look at it from a game-theoretic perspective. People who could plausibly signal altruism and trustworthiness would get huge evolutionary gains because they could attract trading partners more easily. One of the more effective ways to signal that you possess a trait is to actually possess it. One of the most effective ways to signal you are altruistic and trustworthy is to actually be altruistic and trustworthy. So it's plausible that humans evolved to be genuinely nice, trustworthy, and altruistic, probably because the evolutionary gains from getting trade partners to trust them outweighed the evolutionary losses from sacrificing for others.

Altruism - at least in biology - normally means taking an inclusive fitness hit for the sake of others - e.g. see the definition of Trivers (1971), which reads:

Altruistic behavior can be defined as behavior that benefits another organism, not closely related, while being apparently detrimental to the organism performing the behavior, benefit and detriment being defined in terms of contribution to inclusive fitness

Proposing that altruism benefits the donor just means that you aren't talking about genuine altruism at all, but "fake" altruism - i.e. genetic selfishness going by a fancy name. Such "fake" altruism is easy to explain. The puzzle in biology is to do with genuine altruism.

the way you have written this makes it sound like you think utilitarians are cynically pretending to believe in utilitarianism to look good to others, but don't really believe it in their heart of hearts. I don't think this is true in most cases, I think utilitarians are usually sincere, and most failures to live up to their beliefs can be explained by akrasia.

So: I am most interested in explaining behaviour. In this case, I think virtue signalling is pretty clearly the best fit. You are talking about conscious motives. These are challenging to investigate experimentally. You can ask people - but self-reporting is notoriously unreliable. Speculations about conscious motives are less interesting to me.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-09-17T10:51:23.703Z · LW(p) · GW(p)

Altruism - at least in biology - normally means taking an inclusive fitness hit for the sake of others - e.g. see the definition of Trivers (1971).

I thought it fairly obvious I was not using the biological definition of altruism. I was using the ethical definition of altruism - taking a self-interest hit for the sake of others' self interest. It's quite possible for something to increase your inclusive fitness while harming your self-interest, unplanned pregnancy, for instance.

Proposing that altruism benefits the donor just means that you aren't talking about genuine altruism at all, but "fake" altruism - i.e. genetic selfishness going by a fancy name.

I wasn't proposing that altruism benefited the donor. I was proposing that it benefited the donor's genes. That doesn't mean that it is "fake altruism," however, because self-interest and genetic interest are not the same thing. Self-interest refers to the things a person cares about and wants to accomplish, i.e. happiness, pleasure, achievement, love, and fun; it doesn't have anything to do with genes.

Essentially, what you have argued is:

  1. Genuinely caring about other people might cause you to behave in ways that make your genes replicate more frequently.
  2. Therefore, you don't really care about other people, you care about your genes.

If I understand your argument correctly, it seems like you are committing some kind of reverse anthropomorphism. Instead of ascribing human goals and feelings to nonsentient objects, you are ascribing the metaphorical evolutionary "goals" of nonsentient objects (genes) to the human mind. That isn't right. Humans don't consciously or unconsciously act directly to increase our IGF (inclusive fitness); we simply engage in behaviors for their own sake that happened to increase our IGF in the ancestral environment.

Replies from: army1987, timtyler
comment by timtyler · 2012-09-17T23:30:42.689Z · LW(p) · GW(p)

Altruism - at least in biology - normally means taking an inclusive fitness hit for the sake of others - e.g. see the definition of Trivers (1971).

I thought it fairly obvious I was not using the biological definition of altruism. I was using the ethical definition of altruism - taking a self-interest hit for the sake of others' self interest. It's quite possible for something to increase your inclusive fitness while harming your self-interest, unplanned pregnancy, for instance.

So: I am talking about science, while you are talking about moral philosophy. Now that we have got that out of the way, there should be no misunderstanding - though in the rest of your post you seem keen to manufacture one.

Replies from: Ghatanathoah
comment by Ghatanathoah · 2012-09-18T03:52:36.162Z · LW(p) · GW(p)

So: I am talking about science, while you are talking about moral philosophy.

I was talking about both. My basic point was that the reason humans evolved to care about morality and moral philosophy in the first place was that doing so made them very trustworthy, which enhanced their IGF by making it easier to obtain allies.

My original reply was a request for you to clarify whether you meant that utilitarians are cynically pretending to care about utilitarianism in order to signal niceness, or whether you meant that humans evolved to care about niceness directly and care about utilitarianism because it is exceptionally nice (a "niceness superstimulus" in your words). I wasn't sure which you meant. It's important to make this clear when discussing signalling because otherwise you risk accusing people of being cynical manipulators when you don't really mean to.

comment by orthonormal · 2012-06-28T23:28:39.565Z · LW(p) · GW(p)

A population of TDT agents with different mostly-selfish preferences should end up with actions that closely resemble total utilitarianism for a fixed population, but oppose the adding of people at the subsistence level followed by major redistribution. (Or so it seems to me. And don't ask me what UDT would do.)

comment by torekp · 2012-06-30T01:18:06.442Z · LW(p) · GW(p)

I have no interest in defending utilitarianism, but I do have an interest in a total welfare (yes I think such a concept can make sense) of sentient beings. The repugnance of the Repugnant Conclusion, I suggest, is a figment of your lack of imagination. When you imagine a universe with trillions of people whose lives are marginally worth living, you probably imagine people whose lives are a uniform grey, just barely closer to light than darkness. In other words, agonizingly boring lives. But this is unnecessary and prejudicial. Instead, imagine people with ups and downs like ours, but with a closer balance of ups and downs. Imagine rich cultures, intense personal relationships, exciting mathematical discoveries, etc., etc. - but perhaps more repression, more romantic breakups, more dead end derivations.

Perhaps there are values that are nonlocal, in the sense of not belonging to any one person and not being the sum of values belonging each to one person. And the Repugnant world you're imagining may lack those values. But that's a problem with Utilitarianism, not with Totality. In other words I suggest that insofar as moral value depends on how things go for individuals (considering individuals other than those to whom you have special obligations), it depends on the total rather than the average, or the pre-existing persons' welfare.

Why think so? Because having children normally isn't wrong, but having children when you know the child will only suffer horribly for a year and then die, is. Normal parents know, however, that there is a very slight chance of that type of horrible result. What justifies childbearing in the normal case? The obvious answer is the high probability that the child will lead a good life. Therefore, adding more good lives is a good-making feature. This doesn't show that adding more good lives is comparable to making the same number of pre-existing equally good lives twice as good - but I think that is the answer most coherent with the meager truth about personal identity.

Other comments: The "real world analogues" of the killing-and-replacing-people thought experiments turn out to be just more thought experiments. Not that there need be anything wrong with that, but the weirdness of the thought-experimental situation should be considered. If the intent is simply to show that utilitarianism faces a burden of argument in the face of counterintuitive results, it succeeds in that easy goal.

Interpersonal comparisons of utility suffer from the same difficulties, in principle, as intrapersonal comparisons. They're just a lot more intense in the former case. This applies to both preference and hedonics, and also to many more sophisticated evaluative schemes which may include both, plus more.

comment by Pentashagon · 2012-06-26T19:31:36.642Z · LW(p) · GW(p)

In Austrian economics, using the framework of praxeology, the claim is made that preferences (the rough equivalent of utilities) cannot be mapped to cardinal values, but that different states of the world are still well ordered by an individual's preferences, such that one world state can be said to be more or less desirable than another world state. This makes it impossible to numerically compare the preferences of two individuals except through the pricing/exchange mechanism of economics. E.g. would 1 billion happy people exchange their own death for the existence of 1 billion and 1 new happy people? To answer the question, simply ask them what they would do, or observe what they do in that situation, and that will reveal their preferences.

Taking a preference-based approach, consider the set of all individuals and the set of all world states. Each individual has a well-ordered list of preferences over possible world states, with the only restriction being that it is bounded from above by a maximal preference. In every world state, the next world state is chosen by all individuals voting for the most preferable next reachable world state. In a majority voting system, each individual votes for its maximally-preferred world state. In runoff and approval voting, the first, second, third, etc. choices are the highest, next-highest, etc. ranked preferences for world states, respectively. Ethics thus reduces to the problem of fair voting.

An obvious criticism of Austrian economics is that it simply describes the economy as the result of all individual actions (with which individuals reveal their true preferences, by definition) with no additional predictive power. I think that by contrasting the theoretical results of perfect ethical preference voting with the theoretical results of perfectly calculating a utilitarian theory, there may be some insight. The basic difference is that economics relies on pricing to build an economy but in ethics we can cheat and ask theoretical questions about all possible world states.

Potential or hypothetical individuals would have their own preferences for world states but, as the article mentions, their preferences may not be compatible with the set of possible next world states that we are voting on. If those hypothetical individuals never have enough votes for possible next world states, then they will never have any influence. Individuals currently sleeping, anesthetized, or frozen in liquid nitrogen have hypothetical preferences for future world states that may very well coincide with our preferences for future world states, and therefore they have a greater chance of existing as acting individuals in our future world. Ultimately, in any ethical theory, it is only our estimation of a hypothetical being's preferences that we can consider, so their preferences are subsumed into our own preferences.

3^^^3 people will probably rank their preferences for world states with and without a single dust speck in their eye as nearly indistinguishable, but world states with torture are hopefully ranked much lower than equivalent world states without torture. The one person whose torture depends on the vote may prefer world states with 3^^^3 dust specks far more than world states with 50 years of torture, but their vote clearly doesn't matter in any conceivable voting system. Nevertheless, so long as the existence of torture is more repugnant than a single dust speck, 3^^^3 people will vote to receive the dust speck rather than allow that individual to be tortured.
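
(A toy version of this vote in Python, with a million bystanders standing in for 3^^^3 and simple plurality counting standing in for whatever fair voting system one prefers; both are illustrative assumptions of the sketch.)

```python
from collections import Counter

def plurality_winner(ballots):
    """Each ballot ranks world states from most to least preferred;
    the winner is the state with the most first-place votes."""
    return Counter(ballot[0] for ballot in ballots).most_common(1)[0][0]

N = 10**6                                    # standing in for 3^^^3, which nothing can enumerate
bystanders = [["specks", "torture"]] * N     # a speck each is less repugnant to them than torture existing
victim = [["specks", "torture"]]             # the would-be victim agrees, but one ballot is noise anyway
print(plurality_winner(bystanders + victim))  # 'specks'
```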

Populations of nearly any size will probably not vote to replace themselves with a different population (whether of humans, paperclips, or smily-faces).

There are still problems: Bacteria and parasites may deserve a vote. Weighting may fix that problem. Hated minorities are still at a disadvantage even in the fairest voting systems. On one hand if people are not personally inconvenienced by the actions of a hated minority they will probably prefer worlds where that minority is not tortured over worlds where they are tortured, simply because of their general aversion to torture. On the other hand a large number of voters in democratic countries have not kicked torturers out of political office. This is distressing because far fewer than 3^^^3 people have been affected by, say, Maher Arar. There is apparently a tendency in humans to have a preference for the brutal punishment of an assumed criminal even if there is only a tiny marginal chance of value to themselves. I think this is a failure of rationality and probably not a failure of any particular ethical system.

The primary difference between additive functions of individual utility and preference voting is that the most important effects on individuals have the largest influence on their preferences. There is no correspondence to having a maximal utility for not having a speck in one's eye and also a maximal utility for torturing another individual. One or the other will have strictly greater preference. In approval or runoff voting the preference for torturing another individual will fall behind a series of other more pleasant preferences unless that individual actually had a major effect on the voter. In effect everyone is forced to vote for what really matters to them instead of arbitrarily ruining another individual's life for no appreciable benefit. Utilitarianism could conceivably have exactly the same ranking of values as preference voting (just enumerate all the N preferences and assign them values of i/N from least to most preferred) but there is no guarantee that an individual faced with choosing utilities would assign the same relative value to world states as an individual choosing preferences.

It appears that Eliezer went down this road a ways in http://lesswrong.com/lw/rx/is_morality_preference/ and then went off in another direction before enumerating the idea of what would happen if everyone voted based on their preferences and then acted to achieve the winning world state instead of acting to achieve only their own maximally preferred world state.