Planning a series: discounting utility
post by Psychohistorian · 2011-04-19T15:27:28.702Z · LW · GW · Legacy · 17 comments
I'm planning a top-level post (probably two or three or more) on when agent utility should not be part of utilitarian calculations - which seems to be an interesting and controversial topic given some recent posts. I'm looking for additional ideas, and particularly counterarguments. Also hunting for article titles. The series would look something like the following - noting that obviously this summary does not have much room for nuance or background argument. I'm assuming moral antirealism, with the selection of utilitarianism as an implemented moral system.
Intro - Utilitarianism has serious, fundamental measurement problems, and sometimes substantially contradicts our intuitions. One solution is to say our intuitions are wrong - this isn't quite right (i.e. a morality can't be "wrong") unless our intuitions are internally inconsistent, which I do not think is the problem. The mismatch is particularly problematic because agents (especially those with a high capacity for self-modification) may face socially undesirable incentives. I argue that a better solution is to ignore or discount the utility of certain agents in certain circumstances, which better fits general moral intuitions. (There remains a debate as to whether Morality A might be better than Morality B when Morality B better matches our general intuitions - I don't want to get into this, as I'm not sure there's a non-circular meaning of "better" as applied to morality that does not relate to moral intuitions.)
1 - First, expressly anti-utilitarian utility can be disregarded. Most cases of this are fairly simple and bright-line. No matter how much Bob enjoys raping people, the utility he derives from doing so is irrelevant unless he drinks the utilitarian Koolaid and only, for example, engages in rape fantasies (in which case his utility is counted - the issue is not that his desire is bad, it's that his actions are). This gets into some slight line-drawing problems with, for example, utility derived from competition (as one may delight in defeating people - this probably survives, however, particularly since it is all consensual).
1.5 - The above point is also related to the issue of discounting the future utility of such persons; I'm trying to figure out if it belongs in this sequence. The example I plan to use (which makes pretty much the entire point) is as follows. You have some chocolate ice cream you have to give away. You can give it either to a small child or to a person who has just brutally beaten and molested that child. The child kinda likes chocolate ice cream; vanilla is his favorite flavor, but chocolate's OK. The adult absolutely, totally loves chocolate ice cream; it's his favorite food in the world. I, personally, give the kid the ice cream, and I think so does well over 90% of the general population. On the other hand, if the adult were simply someone who had an interest in molesting children, but scrupulously never acted on it, I would not discount his utility so cheerfully. This may simply belong as a separate post on the utility value of punishment. I'd be interested in feedback on it.
2 - Finally, and trickiest, is the problem of utility conditioned on false beliefs. Take two examples: an African village stoning a child to death because they think she's a witch who has made it stop raining, and the same village curing that witch-hood by ritually dunking her in holy water (or by some other innocuous procedure). In the former case, massive disutility occurs because people think the killing will solve a problem that it won't (I'm also a little unclear on what it would mean for the utility of the many to "outweigh" the utility of the one, but that's an issue I'll address in the intro article). In the latter, there's minimal disutility (maybe even positive utility), even though the ritual is equally impotent. The best answer seems to be that utility conditioned on false beliefs should be ignored to the extent that it is conditioned on false beliefs. Many people (myself included) celebrate religious holidays with no belief whatsoever in the underlying religion - there is substantial value in the gathering of family and community. Similarly, there is some value to the gathering of the community in both village cases; in the murder it doesn't outweigh the costs, in the baptism it very well might.
3 - (tentative) How this approach coincides with the unweighted approach in the long term. Basically, if we ignore certain kinds of utility, we will encourage agents to pursue other kinds of utility (if you can't burn witches to improve your harvest, perhaps you'll learn how to rotate crops better). The utility they pursue is likely to be of only somewhat lower value to them (or higher value in some cases, if they're imperfect, i.e. human). However, it will be of non-negative value to others. Thus, a policy-maker employing adjusted utilitarianism is likely to obtain better outcomes from an unweighted perspective. I'm not sure this point is correct or cogent.
I'm aware at least some of this is against LessWrong canon. I'm curious whether people have counterarguments, objections, counterexamples, or general feedback on whether this would be a desirable series to spell out.
17 comments
comment by Scott Alexander (Yvain) · 2011-04-19T16:21:25.865Z · LW(p) · GW(p)
I think I disagree with your interpretation on every one of these points!
1: If you're assuming the reason Bob enjoys his crimes is because he's lowering the utility of his victims, you need to make that assumption much more explicit. If Bob only commits his crimes because they make him feel good, but doesn't feel better knowing that the victim is unhappy because he has committed them, then he's only in the same position as anyone else who wants something that's good for emself but hurts another person. For example, if I want a coat made of panda-bear fur, then this upsets panda-lovers and (if we allow animals to have utility) pandas, but it doesn't allow anyone to disregard my desire - it just means that I might not get it satisfied if the panda-lovers turn out to have more clout.
Even if Bob explicitly enjoys lowering utility, I still find it sketchy to dismiss his desire as an axiom of the system. In a counterfactual world where all desires are completely fixed and non-self-modifiable, Bob's desires should be taken into account (and probably rejected when it turns out they hurt other people more than they help him). In a world more like our own, where desires become more common and intense if there's a possibility of fulfilling them, these desires should be dismissed for game theoretic reasons to discourage any more such desires from ever forming, not just disregarded.
1.5: I can't tell whether you're just acknowledging the existence of punishment, or something different. Yes, we should punish child molesters as a game theoretic action to discourage child molestation in the future. And this punishment has to involve negative utility. But punishment should be specific and limited. If the child molester has already gone to jail, then ey gets the same weight in our calculations as anyone else. And if we do punish em, we do it not because we're using a system that says we can disregard eir utility on general principle, but because a greater good (discouraging future child molestation) overrides it.
2: Here I would say the villagers don't have a preference against the girl living, they have a preference against witchcraft. We take all of their preferences into account, but because drowning the girl doesn't reduce witchcraft, their preferences have no bearing on whether we save the girl or not. So since there's massive disutility (death) and minimal utility (they get small utility from not having to waste time worrying about witchcraft, but they don't get what they were hoping for, which is less witchcraft occurring), it's a net loss and we save the girl. In the ritual dunking case, there's no disutility, and the utility is that they stop worrying about witchcraft, so it's a net gain and we let them do it.
(this makes more sense if we assign numbers: say having less witchcraft is +50, killing a girl is -25, not having to worry about witchcraft so much is +5, and ritually dunking is -1. These assignments make more sense if you imagine asking the villagers "If this girl wasn't a witch, would you want her to continue living?" - their answer is the appropriate preference)
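(A minimal sketch in Python of the arithmetic above, using the parenthetical's own illustrative numbers; the values are just those placeholders, not claims about real magnitudes.)

```python
# Illustrative utilities from the comment above (arbitrary units).
LESS_WITCHCRAFT = 50   # what the villagers actually want; never obtained, since neither act works
STOP_WORRYING = 5      # the real effect either act has on the villagers
KILL_GIRL = -25        # disutility of killing the girl
DUNKING = -1           # minor disutility of the harmless ritual

# Killing her: the hoped-for +50 never materializes, so only the side effects count.
net_killing = STOP_WORRYING + KILL_GIRL   # -20: net loss, so save the girl
# Ritual dunking: the same false belief, but the cost is trivial.
net_dunking = STOP_WORRYING + DUNKING     # +4: net gain, so let them do it

print(net_killing, net_dunking)
```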
I don't think it's ever okay to literally discount utility. I think it's often okay to let one source of utility counterbalance and override another source of utility, and that sometimes that first source may be a game theoretic precommitment to ignore someone's utility, but you don't stick ignoring someone's utility into the theory directly.
↑ comment by Psychohistorian · 2011-04-19T17:50:40.590Z · LW(p) · GW(p)
Pure curiosity, in response to the whole non-discounted utility argument.
In a case similar to the beginning of Kill Bill - an orderly selling sex with a comatose, brain-dead woman - how does your utilitarian calculator come out? Assume (unlike the movie) she is in fact completely gone. Do you simply bite the bullet and agree it's a positive outcome?
↑ comment by Nick_Tarleton · 2011-04-19T18:17:14.865Z · LW(p) · GW(p)
There are other ways around this conclusion, like taking into account people's typical preference not to have this done to them even if brain-dead.
↑ comment by Scott Alexander (Yvain) · 2011-04-19T19:02:59.023Z · LW(p) · GW(p)
I've never seen that movie, and based on your description I'm thinking I probably shouldn't. But I agree with Nick Tarleton - the relevant preferences are those of the people who are not in comas, but know that one day they might be.
↑ comment by ArisKatsaris · 2011-04-19T18:25:44.649Z · LW(p) · GW(p)
If I may offer my own position...
...I think lots and lots of the population assign a negative utility to brain-dead women getting raped. Or for that matter to gravedigging and mutilating corpses to produce grotesque puppet plays.
In a hypothetical alien world where alien psychologies don't bestow dignity and respect to the dead (or the brain-dead), I wouldn't consider it inherently wrong for such to happen.
↑ comment by Psychohistorian · 2011-04-19T19:35:25.546Z · LW(p) · GW(p)
Two hypotheticals:
What if people thought, "If I were gay, I would not want to be allowed to engage in homosexual sex because it's wrong/immoral/ungodly/unhealthy/whatever."
What if people thought, "If I were addicted to crack, I would not want to be allowed to use crack because it's wrong/immoral/unhealthy/ungodly/whatever."
These are opinions about future situations where someone has a positive desire - as opposed to people having no desire, like in the coma-rape case - but each case deals with a present desire that necessarily contradicts a future desire. How would you resolve the two above?
(Note I do not think these two behaviours are morally equivalent at all. It's because this concept seems to fail to correctly separate them that I bring them up.)
↑ comment by ArisKatsaris · 2011-04-19T21:47:19.721Z · LW(p) · GW(p)
I've not yet fully resolved in my mind to what extent people should be allowed to constrain the freedom of their future selves.
But as to your particular examples, the two situations could (don't know about "should", but they could) be resolved differently by the fact that a crack-addict will have a better overall existence if he de-toxes (which the former desire helps him accomplish); while the gay person will probably have a better overall existence if he does engage in gay sex (which the future desire will help him accomplish). Though the superficial desires are different, at their core, they presumably both desire some sort of happiness, peace of mind, etc, etc.
↑ comment by Psychohistorian · 2011-04-20T03:35:16.388Z · LW(p) · GW(p)
will have a better overall existence
This assumes the problem away. How do you know which will have a better existence? Each individual, who presumably understands himself far better than you do, has declared that if the specified event comes to pass, he will have a worse overall existence.
I do think the prior "mistake of fact" issue would probably resolve this, though it feels unsatisfactory. The individual who says he wants to avoid being a crack addict is probably fairly accurate about his assumptions. The person who wants to avoid being gay is very likely to have that desire out of some religious concern or concern for his soul or some similar thing. If we're willing to say that his factual error (i.e. there is no Hell) allows the discounting of all his entwined beliefs, it'd solve the problem.
Incidentally, none of this solves the people-are-offended problem. Sure, the idea of someone having sex with a comatose woman bothers most people. But 100 years ago, the idea of two people of different races being in a relationship bothered a lot of people (arguably more). I think the two are completely different, but how do you distinguish between them? By ignoring utility based on factual error (there is no Hell) and utility based on counter-utilitarian preference (preventing people from engaging in consensual relationships is, with exceptions that don't apply here, counter-utilitarian). It seems like a simpler utilitarianism might be stuck imposing the will of the many on the few.
↑ comment by Psychohistorian · 2011-04-19T17:47:45.027Z · LW(p) · GW(p)
I don't think it's ever okay to literally discount utility.
I'm actually (and this is my intro point) not sure it's possible to avoid doing this. If we have some ice cream that you and I both want, we must necessarily engage in some weighing of our interest in the ice cream. There are objective measures we can use (e.g. how much we'd be willing to pay for it, how many hours of labor we'd sacrifice to obtain it, etc.), but I'm fairly confident there is not an Objective Measurement of True Utility that Tells Us Who Absolutely Deserves the Ice Cream. Much utilitarian thinking appears contingent on this philosophical fiction (perhaps this point is itself the primary one). Any selection of an objective criterion either implicitly or explicitly discounts something about one of the agents - willingness to pay favors the rich, willingness to spend time may favor the young or unemployed, etc.
As for Bob the Rapist, the issue is not that he enjoys rape because it hurts other people, but that he knows it causes harm and doesn't care. This may surprise you, but the vast majority of humanity is not comprised of unweighted aggregate utilitarians. Though I think our actual disagreement may not exist - if he engages in fulfilling rape fantasies with consenting adults, or makes a simulated world for himself, or designs a sex-bot to enjoy being raped (which is itself an ontologically convoluted issue, but I digress), I'm not objecting. So it could be that my "discounting his utility" and your "dismissal for game-theoretic reasons" are essentially the same thing. If we call my system U' versus standard U, perhaps the argument is that any kind of applied utilitarian framework needs to look more like U' than like U.
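(A toy sketch of the U' vs. U distinction, purely as illustration; the agent names, numbers, and weighting function below are hypothetical, and nothing here says how the weights should actually be chosen.)

```python
from typing import Callable, Dict

def aggregate_u(utilities: Dict[str, float]) -> float:
    """Standard unweighted aggregation (the 'U' above): sum everyone's utility."""
    return sum(utilities.values())

def aggregate_u_prime(utilities: Dict[str, float],
                      weight: Callable[[str], float]) -> float:
    """Adjusted aggregation (the 'U-prime' above): each agent's utility is scaled
    by a weight, possibly zero, before summing. How the weights get assigned is
    the substance of the proposed series; here it is just a function passed in."""
    return sum(weight(agent) * u for agent, u in utilities.items())

# Toy example: a hypothetical Bob's utility from a non-consensual act is
# discounted to zero, while his victim's disutility is counted in full.
utilities = {"bob": 10.0, "victim": -100.0}
print(aggregate_u(utilities))                                              # -90.0
print(aggregate_u_prime(utilities, lambda a: 0.0 if a == "bob" else 1.0))  # -100.0
```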
Without writing a whole article on the issue, there does appear to be a difference between forcibly raping someone and wearing fur. Off the cuff, I'd guess that the difference is one of marginal effect. Animal lovers tend to object to the existence of fur coats generally - the step from 1 fur coat to 100 fur coats to 100,000 fur coats is smaller than the step from 0 to 1, and they do not "feel" each fur coat in the same way that a person "feels" being raped.
↑ comment by Scott Alexander (Yvain) · 2011-04-19T19:10:06.814Z · LW(p) · GW(p)
I'm not disagreeing that crimes are bad, just that this should be stated as saying that whatever utility they give the perpetrator is overruled by the disutility they give the victim.
This has occasional theoretical implications: for example, if Bob was insane and incapable of realizing that his actions harmed his victim, then we still perform exactly the same calculation and stop him, even though the "he knows it causes harm and he doesn't care" argument is void.
Even if the panda analogy isn't perfect, there are suitably many analogies for acts where two people's utility is in competition for non-evil reasons: for example, if we both want pizza and there's only one slice left, my taking it isn't bad in itself, but if you are hungrier it may be that my (perfectly valid) desire for pizza is a utilitarian loss and should be prevented.
Given that this same principle of "if two people's interests are in conflict, we stop the person with the lower stake from pursuing that interest at the expense of the person with the higher stake" is sufficient to explain why crimes are bad, I don't see why another explanation is needed.
On an unrelated note, I've heard people suggest that it's a bad idea to use rape as an example in a case where any other example is possible because it's an extreme emotional trigger for certain people. I'm going to try to use murder as my go-to example of an immoral hurtful act from now on, on the grounds that it conveniently removes its victims from the set of people with triggers and preferences.
↑ comment by Psychohistorian · 2011-04-19T19:30:13.501Z · LW(p) · GW(p)
I'm not disagreeing that crimes are bad, just that this should be stated as saying that whatever utility they give the perpetrator is overruled by the disutility they give the victim.
That's kind of the problem I'm getting at. Suppose we could torture one person and film it, creating a superlatively good video that would make N sadists very happy when they watched it, significantly because they value its authenticity. It seems that, if you choose torture over dust specks, you are similarly obliged to choose torture video over no video once N is sufficiently large, whatever sufficiently large means. Interestingly this applies even if there exist very close but inferior substitutes - N just needs to be larger. On the other hand, discounting non-consensual sadism resolves this as don't torture.
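(To make "sufficiently large N" explicit, here is a back-of-the-envelope sketch; the magnitudes are invented purely to show how the threshold scales.)

```python
# Hypothetical magnitudes (arbitrary units); only the structure of the argument matters.
disutility_of_torture = 1_000_000.0  # harm to the one person tortured
utility_per_sadist = 0.5             # enjoyment each viewer gets from the authentic video

# Under unweighted aggregation, "make the video" wins once
#     N * utility_per_sadist > disutility_of_torture
n_threshold = disutility_of_torture / utility_per_sadist
print(n_threshold)  # 2,000,000 viewers; a close-but-inferior substitute just pushes this N higher

# Under the discounting proposed in the post, non-consensual sadistic utility is
# weighted to zero, so the left-hand side never grows and no N is large enough.
```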
The central problem may be one of measurement, one of incentives (we don't want people cultivating non-consensual sadistic desires), or a combination of the two. Perhaps my goals are more pragmatic than conceptual.
↑ comment by endoself · 2011-04-21T23:49:53.015Z · LW(p) · GW(p)
The central problem may be one of measurement, one of incentives (we don't want people cultivating non-consensual sadistic desires), or a combination of the two.
I think this is pretty much it. We don't want people to want to rape coma patients. We don't want coma-rape to become common enough that people are afraid of having it happen to them. Similarly, if we decide to make this film, everyone has to be afraid that they or someone they know could be the person picked to be tortured, and the idea of torturing innocents becomes more normal. In general, caution should be applied in situations like this, even if no extreme disutility is immediately obvious (See http://lesswrong.com/lw/v0/ethical_inhibitions/).
comment by endoself · 2011-04-19T17:39:17.844Z · LW(p) · GW(p)
1.5
We don't need to discount child molesters' utilities to punish them. Punishing a child molester deters others from molesting children and acausally deters those who have already chosen whether or not to molest children. Since this good outweighs the negative utility of the punishment, punishing them is the right thing to do.
The child kinda likes chocolate ice cream; vanilla is his favorite flavor, but chocolate's OK. The adult [child molester] absolutely, totally loves chocolate ice cream; it's his favorite food in the world. I, personally, give the kid the ice cream, and I think so does well over 90% of the general population.
I give the ice cream to the child because otherwise my actions will be seen by society as an endorsement of child molestation, which will make more people molest children and which will lower my status. If no one else ever knows about my actions, I would give it to the adult because not being given ice cream by strangers is not an effective deterrent. However, in real life, I would probably never be certain enough of this, so I would always actually give it to the child.
I feel like you will object to this on the grounds that I am ignoring my intuition that the child molester deserves to be punished, which provides more reason to punish than just the considerations I have discussed and massively tips the scales toward punishment. However, not everyone shares this intuition (see here, here). They've done studies where they measure people's moral intuitions and detect differences. When I hear people justifying punishment for its own sake rather than as a deterrent or rehabilitative mechanism, I feel like I am living in the 1800s and hearing arguments for slavery. (Of course, this too is an intuition.) In the US south, "well over 90% of the general population" had an 'intuition' that slaves are subhuman and that their utility should be discounted. I don't know how to tell which intuitions are really components of our utility function and which will be destroyed by moral progress but I know that I do not have this particular intuition and I have no reason to support any decision procedure that takes punishment as fundamental rather than derived from deterrence.
↑ comment by Psychohistorian · 2011-04-21T23:21:24.034Z · LW(p) · GW(p)
This is absolutely on point and I thank you for it. I'd gotten mixed up as to the role of intuition in moral arguments, in that generally one should aim to have a coherent system that makes big-picture intuitive sense; one cannot have a system that has been jury-rigged to empower specific intuitions. I'm relying a bit much on the latter; I have a bigger system that makes overall sense and I think leads to similar results, and my point may simply be articulating that system.
comment by MinibearRex · 2011-04-19T16:52:33.116Z · LW(p) · GW(p)
One solution is to say our intuitions are wrong - this isn't quite right (i.e. a morality can't be "wrong") unless our intuitions are internally inconsistent, which I do not think is the problem.
I think that is the fundamental problem with non-utilitarianism. Take the trolley problem, for instance. Our intuitions are that the death of one person is preferable to the deaths of 5, but our intuitions also say we shouldn't deliberately kill someone. Our intuitions about morality conflict all the time, so we have to decide which intuition is more important.
comment by cousin_it · 2011-04-19T17:23:15.480Z · LW(p) · GW(p)
I'm a little confused. Is this post trying to describe someone's existing morality or define a new morality for someone to follow?
↑ comment by Psychohistorian · 2011-04-19T17:32:03.718Z · LW(p) · GW(p)
The logical inference of this post seems to be, "No one should talk about morality."
I'm also kind of curious as to what "AI-grade thinking" looks like, particularly since this was rather clearly disclaimed as a very rough draft.