Consequentialism FAQ

post by Scott Alexander (Yvain) · 2011-04-26T01:45:09.234Z · LW · GW · Legacy · 124 comments

There are a lot of explanations of consequentialism and utilitarianism out there, but not a lot of persuasive essays trying to convert people. I would like to fill that gap with a pro-consequentialist FAQ. The target audience is people who are intelligent but may not have a strong philosophy background or have thought about this matter too much before (ie it's not intended to solve every single problem or be up to the usual standards of discussion on LW).

I have a draft up at http://www.raikoth.net/consequentialism.html (yes, I have since realized the background is horrible, and changing it is on my list of things to do). Feedback would be appreciated, especially from non-consequentialists and non-philosophers since they're the target audience.

124 comments

Comments sorted by top scores.

comment by Vladimir_M · 2011-04-27T01:03:19.159Z · LW(p) · GW(p)

OK, I've read the whole FAQ. Clearly, a really detailed critique would have to be given at similar length. Therefore, here is just a sketch of the problems I see with your exposition.

For a start, you use several invalid examples, or at least controversial examples that you incorrectly present as clear-cut. For example, the phlogiston theory was nothing like the silly strawman you present. It was a falsifiable scientific theory that was abandoned because it was eventually falsified (when it was discovered that burning stuff adds mass due to oxidation, rather than losing mass due to escaped phlogiston). It was certainly a reductionist theory -- it attempted to reduce fire (which itself has different manifestations) and the human and animal metabolism to the same underlying physical process. (Google "Becher-Stahl theory".) Or, at another place, you present the issue of "opposing condoms" as a clear-cut case of "a horrendous decision" from a consequentialist perspective -- although in reality the question is far less clear.

Otherwise, up to Section 4, your argumentation is passable. But then it goes completely off the rails. I'll list just a few main issues:

  • In the discussion of the trolley problem, you present a miserable caricature of the "don't push" arguments. The real reason why pushing the fat man is problematic requires delving into a broader game-theoretic analysis that establishes the Schelling points that hold in interactions between people, including those gravest ones that define unprovoked deadly assault. The reason why any sort of organized society is possible is that you can trust that other people will always respect these Schelling points without regard to any cost-benefit calculations, except perhaps when the alternative to violating them is by orders of magnitude more awful than in the trolley examples. (I have compressed an essay's worth of arguments into a few sentences, but I hope the main point is clear.)

  • In Section 5, you don't even mention the key problem of how utilities are supposed to be compared and aggregated interpersonally. If you cannot address this issue convincingly, the whole edifice crumbles.

  • In Section 6, at first it seems like you get the important point that even if we agree on some aggregate welfare maximization, we have no hope of getting any practical guidelines for action beyond quasi-deontologist heuristics. But then you boldly declare that "we do have procedures in place for breaking the heuristic when we need to." No, we don't. You may think we have them, but what we actually have are either somewhat more finely tuned heuristics that aren't captured by simple first-order formulations (which is good), or rationalizations and other nonsensical arguments couched in terms of a plausible-sounding consequentialist analysis (which is often a recipe for disaster). The law of unintended consequences often bites even in seemingly clear-cut "what could possibly go wrong?" situations.

  • Along similar lines, you note that in any conflict all parties are quick to point out that their natural rights are at stake. Well, guess what. If they just have smart enough advocates, they can also all come up with different consequentialist analyses whose implications favor their interests. Different ways of interpersonal utility comparison are often themselves enough to tilt the scales as you like. Further, these analyses will all by necessity be based on spherical-cow models of the real world, which you can usually engineer to get pretty much any implication you like.

  • Section 7 is rather incoherent. You jump from one case study to another arguing that even when it seems like consequentialism might imply something revolting, that's not really so. Well, if you're ready to bite awful consequentialist bullets like Robin Hanson does, then be explicit about it. Otherwise, clarify where exactly you draw the lines.

  • Since we're already at biting bullets, your FAQ fails to address another crucial issue: it is normal for humans to value the welfare of some people more than others. You clearly value your own welfare and the welfare of your family and friends more than strangers (and even for strangers there are normally multiple circles of diminishing caring). How to reconcile this with global maximization of aggregate utility? Or do you bite the bullet that it's immoral to care about one's own family and friends more than strangers?

  • Question 7.6 is the only one where you give even a passing nod to game-theoretical issues. Considering their fundamental importance in the human social order and all human interactions, and their complex and often counter-intuitive nature, this fact by itself means that most of your discussion is likely to be remote from reality. This is another aspect of the law of unintended consequences that you nonchalantly ignore.

  • Finally, your idea that it is possible to employ economists and statisticians and get accurate and objective consequentialist analysis to guide public policy is altogether utopian. If such things were possible, economic central planning would be a path to prosperity, not the disaster that it is. (That particular consequentialist folly was finally abandoned in the mainstream after it had produced utter disaster in a sizable part of the world, but many currently fashionable ideas about "scientific" management of government and society suffer from similar delusions.)

Replies from: Yvain, sark
comment by Scott Alexander (Yvain) · 2011-05-04T13:46:21.999Z · LW(p) · GW(p)

Phlogiston: my only knowledge of the theory is Eliezer's posts on it. Do Eliezer's posts make the same mistake, or am I misunderstanding those posts?

Trolley-problem: Agreed about Schelling points of interactions between people. What I am trying to do is not make a case for pushing people in hypothetical trolley problems, but to show that certain arguments against doing so are wrong. I think I returned to some of the complicating factors later on, although I didn't go quite so deep as to mention Schelling points by name. I'll look through it again and make sure I've covered that to at least the low level that would be expected in an introductory argument like this.

Aggregating interpersonal utilities: Admitted that I handwave this away by saying "Economists have some ideas on how to do this". The FAQ was never meant to get technical, only provide an introduction to the subject. Because it is already 25 pages long I don't want to go that deep, although I should definitely make it much clearer that these topics exist.
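To make the difficulty concrete, here is a minimal toy sketch (my own made-up numbers, not anything from the FAQ or from economics): von Neumann-Morgenstern utility functions are only defined up to a positive affine rescaling for each person, so a straightforward sum of utilities can be made to favor either of two policies just by choosing a different, equally valid scale for one person.

```python
# Toy illustration of the interpersonal-aggregation problem (hypothetical numbers).
# Each person's VNM utility function is only pinned down up to a positive affine
# transformation, so "add up everyone's utilities" depends on an arbitrary scale choice.

utilities = {
    "Alice": {"policy_X": 10.0, "policy_Y": 0.0},
    "Bob":   {"policy_X": 0.0,  "policy_Y": 4.0},
}

def total(utils, policy):
    return sum(person[policy] for person in utils.values())

print(total(utilities, "policy_X"), total(utilities, "policy_Y"))  # 10.0 4.0 -> X "wins"

# Rescale Bob's utilities by 3: an equally valid representation of the very same
# preferences, yet the aggregate ranking flips.
utilities["Bob"] = {k: 3 * v for k, v in utilities["Bob"].items()}
print(total(utilities, "policy_X"), total(utilities, "policy_Y"))  # 10.0 12.0 -> Y "wins"
```

Whatever aggregation rule one adopts has to commit to some cross-person normalization, and that choice is doing real moral work rather than falling out of the math.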

Procedures in place for violating heuristics: By this I mean that we have laws that sometimes override certain rights. For one example, even though we have a right to free speech, we also have a law against hate speech. Even though we have a right to property, we also have laws of eminent domain when one piece of property is blocking construction of a railway or something. Would it be proper to rephrase your objection as "We don't have a single elegant philosophical rule for deciding when it is or isn't okay to violate heuristics"?

Parties pointing out natural rights are at stake: In a deontological system, these conflicts are not solvable even in principle: we simply don't know how to decide between two different rights and the only hope is to refer it to politicians or the electorate or philosophers. In a consequentialist system it's certainly possible to disagree, and clever arguers can come up with models in their favor, but it's possible to develop mathematical and scientific tools for solving the problem (for example, prediction markets would solve half of this and serious experimental philosophy could make a dent on the other half). And there are certain problems which are totally opaque to rights-based arguments which you couldn't even begin to argue on consequentialist grounds (eg the opt-out organ donation example given later).

Section 7: I don't really understand your criticism. Yes, it's jumping from place to place. I'm answering random objections that people tend to bring up. Do you think I'm straw-manning or missing the important objections? The Nazi and slavery objections at your link seem very much like the racism and slavery objections addressed on the FAQ, and the Hannibal-the-baby-eater objection only seems relevant if one confuses money with utility.

Welfare of some more than others: I admit that I have these preferences, but I don't think they're moral preferences. I might choose to save my mother rather than two strangers, I just would be doing it for reasons other than morality. This strikes me as a really weird objection - is there some large group of people who say that nepotism is the moral thing to do?

Game theoretic issues: Agreed that these are important. This is meant to be an introductory FAQ to prime some intuitions, not a complete description of all human behavior. Given that game theory usually means that consequentialism is more likely to give the intuitively correct answer to moral dilemmas, I don't feel like I'm being dishonest or cherry-picking by excluding most mentions of it. (Game theory is against consequentialism only if you mistake consequentialism for certain consequentialism-signaling actions, like pushing people in front of trolleys or assassinating Hitler, rather than considering it as the thought process generating these actions. Learn the thought process first, then master the caveats.)

Regarding economists and statisticians: The widespread consensus of economists and statisticians is that economic central planning doesn't work. I would expect something like prediction markets not only to be able to guide certain policies, but also to accurately predict where to use and where not to use prediction markets.

General response to your comments: Mostly right but too deep for the level at which this FAQ is intended. I will try to revise the FAQ to emphasize that the FAQ is intended only to teach consequentialist thought processes, and that these must then be modified by knowledge of things like game theory.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-05-05T20:40:24.631Z · LW(p) · GW(p)

Each of these issues could be the subject of a separate lengthy discussion, but I'll try to address them as succinctly as possible:

  1. Re: phlogiston. Yes, Eliezer's account is inaccurate, though it seems like you have inadvertently made even more out of it. Generally, one recurring problem in the writings of EY (and various other LW contributors) is that they're often too quick to proclaim various beliefs and actions as silly and irrational, without adequate fact-checking and analysis.

  2. Re: interpersonal utility aggregation/comparison. I don't think you can handwave this away -- it's a fundamental issue on which everything hinges. For comparison, imagine someone saying that your consequentialism is wrong because it's contrary to God's commands, and when you ask how we know that God exists and what his commands are, they handwave it by saying that theologians have some ideas on how to answer these questions. In fact, your appeal to authority is worse in an important sense, since people are well aware that theologians are in disagreement on these issues and have nothing like definite unbiased answers backed by evidence, whereas your answer will leave many people thinking falsely that it's a well-understood issue where experts can provide adequate answers.

  3. Re: economists and statisticians. Yes, nowadays it's hard to deny that central planning was a disaster after it crumbled spectacularly everywhere, but read what they were saying before that. Academics are just humans, and if an ideology says that the world is a chaotic inefficient mess and experts like them should be put in charge instead, well, it will be hard for them to resist its pull. Nowadays this folly is finally buried, but a myriad other ones along similar lines are actively being pursued, whose only redeeming value is that they are not as destructive in the short to medium run. (They still make the world uglier and more dysfunctional, and life more joyless and burdensome, in countless ways.) Generally, the idea that you can put experts in charge and expect that their standards of expertise won't be superseded by considerations of power and status is naively utopian.

  4. Re: procedures in place for violating heuristics. My problem is not with the lack of elegant philosophical rules. On the contrary, my objections are purely practical. The world is complicated and the law of unintended consequences is merciless and unforgiving. What's more, humans are scarily good at coming up with seemingly airtight arguments that are in fact pure rationalizations or expressions of intellectual vanity. So, yes, the heuristics must be violated sometimes when the stakes are high enough, but given these realistic limitations, I think you're way overestimating our ability to identify such situations reliably and the prudence of doing so when the stakes are less than enormous.

  5. Re: Section 7. Basically, you don't take the least convenient possible world into account. In this case, the LCPW is considering the most awful thing imaginable, assuming that enough people assign it positive enough value that the scales tip in their favor, and then giving a clear answer whether you bite the bullet. Anything less is skirting around the real problem.

  6. Re: welfare of some more than others. I'm confused by your position: are you actually biting the bullet that caring about some people more than others is immoral? I don't understand why you think it's weird to ask such a question, since utility maximization is at least prima facie in conflict with both egoism and any sort of preferential altruism, both of which are fundamental to human nature, so it's unclear how you can resolve this essential problem. In any case, this issue is important and fundamental enough that it definitely should be addressed in your FAQ.

  7. Re: game theory and the thought process. The trouble is that consequentialism, or at least your approach to it, encourages thought processes leading to reckless action based on seemingly sophisticated and logical, but in reality sorely inadequate models and arguments. For example, the idea that you can assess the real-world issue of mass immigration with spherical-cow models like the one to which you link approvingly is every bit as delusional as the idea -- formerly as popular among economists as models like this one are nowadays -- that you can use their sophisticated models to plan the economy centrally with results far superior to those nasty and messy markets.

General summary: I think your FAQ should at the very least include some discussion of (2) and (6), since these are absolutely fundamental problems. Also, I think you should research more thoroughly the concrete examples you use. If you've taken the time to write this FAQ, surely you don't want people dismissing it because parts of it are inaccurate, even if this isn't relevant to the main point you're making.

Regarding the other issues, most of them revolve around the general issues of practical applicability of consequentialist ideas, the law of unintended consequences (of which game-theoretic complications are just one special case), the reliability of experts when they are in positions where their ideas matter in terms of power, status, and wealth, etc. However you choose to deal with them, I think that even in the most basic discussion of this topic, they deserve more concern than your present FAQ gives them.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2011-05-05T22:00:49.013Z · LW(p) · GW(p)

Okay, thank you.

I will replace the phlogiston section with something else, maybe along the lines of the example of a medicine putting someone to sleep because it has a "dormitive potency".

I agree with you that there are lots of complex and messy calculations that stand between consequentialism and correct results, and that at best these are difficult and at worst they are not humanly feasible. However, this idea seems to me fundamentally consequentialist - to make this objection, one starts by assuming consequentialist principles, but then says they can't be put into action and so we should retreat from pure consequentialism on consequentialist grounds. The target audience of this FAQ is people who are not even at this level yet - people who don't even understand that you need to argue against certain "consequentialist" ideas on consequentialist grounds, but instead think that they can be dismissed by definition because consequences don't matter. Someone who accepts consequentialism on a base level but then retreats from it on a higher level is already better informed than the people I am aiming this FAQ at. I will make this clearer.

This gets into the political side of things as well. I still don't understand why you think consequentialism implies or even suggests centralized economic planning when we both agree centralized economic planning would have bad consequences. Certain decisions have to be made, and making them on consequentialist grounds will produce the best results - even if those consequentialist grounds are "never give the government the power to make these decisions because they will screw them up and that will have bad consequences". I continue to think prediction markets allow something slightly more interesting than that, and I think if you disagree we can resolve that disagreement only on consequentialist grounds - eg would a government that tried to intervene where prediction markets recommended intervention create better consequences than one that didn't. Nevertheless, I'll probably end up deleting a lot of this section since it seemed to give everyone an impression I don't endorse.

Hopefully the changes I listed in my other comment on this thread should help with some of your other worries.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-05-06T03:40:09.831Z · LW(p) · GW(p)

However, this idea seems to me fundamentally consequentialist - to make this objection, one starts by assuming consequentialist principles, but then says they can't be put into action and so we should retreat from pure consequentialism on consequentialist grounds.

Fair enough. Though I can grant this only for consequentialism in general, not utilitarianism -- unless you have a solution to the fundamental problem of interpersonal utility comparison and aggregation. (In which case I'd be extremely curious to hear it.)

I still don't understand why you think consequentialism implies or even suggests centralized economic planning when we both agree centralized economic planning would have bad consequences.

I gave it as a historical example of a once wildly popular bad idea that was a product of consequentialist thinking. Of course, as you point out, that was an instance of flawed consequentialist thinking, since the consequences were in fact awful. The problem however is that these same patterns of thinking are by no means dead and gone -- it is only that some of their particular instances have been so decisively discredited in practice that nobody serious supports them any more. (And in many other instances, gross failures are still being rationalized away.)

The patterns of thinking I have in mind are more or less what you yourself propose as a seemingly attractive consequentialist approach to problems of public concern: let's employ accredited experts who will use their sophisticated models to do a cost-benefit analysis and figure out a welfare-maximizing policy. Yes, this really sounds much more rational and objective compared to resolving issues via traditional customs and institutions, which appear to be largely antiquated, irrational, and arbitrary. It also seems far more rational than debating issues in terms of metaphysical constructs such as "liberties," "rights," "justice," "constitutionality," etc. Trouble is, with very few exceptions, it is usually a recipe for disaster.

Traditional institutions and metaphysical decision-making heuristics are far from perfect, but with a bit of luck, at least they can provide for a functional society. They are a product of cultural (and to some degree biological) evolution, and as such they are quite robust against real-world problems. In contrast, the experts' models will sooner or later turn out to be flawed one way or another -- the difficulty of the problems and the human biases that immediately rear their heads as soon as power and status are at stake practically guarantee this outcome.

Ultimately, when science is used to create policy, the practical outcome is that official science will be debased and corrupted to make it conform to ideological and political pressures. It will not result in elevation of public discourse to a real scientific standard (what you call reducing politics to math) -- that is an altogether utopian idea. So, for example, when that author whose article you linked uses sophisticated-looking math to "analyze" a controversial political issue (in this case immigration), he's not bringing mathematical clarity and precision of thought into the public discourse. Rather, he is debasing science by concocting a shoddy spherical-cow model with no connection to reality that has some superficial trappings of scientific discourse; the end product is nothing more than Dark Arts. Of course, that was just a blog post, but the situation with real accredited expert output is often not much better.

Now, you can say that I have in fact been making a consequentialist argument all along. In some sense, I agree, but what I wrote certainly applies even to the minimalist interpretation of your positions stated in the FAQ.

comment by sark · 2011-04-29T17:05:47.341Z · LW(p) · GW(p)

I have compressed an essay's worth of arguments into a few sentences, but I hope the main point is clear.

I unfortunately don't get the main point :(

Could you elaborate on or at least provide a reference for how a consideration of Schelling points would suggest that we shouldn't push the fat man?

Replies from: Vladimir_M
comment by Vladimir_M · 2011-04-29T21:06:40.353Z · LW(p) · GW(p)

This essay by David Friedman is probably the best treatment of the subject of Schelling points in human relations:
http://www.daviddfriedman.com/Academic/Property/Property.html

Applying these insights to the fat man/trolley problem, we see that the horrible thing about pushing the man is that it transgresses the gravest and most terrible Schelling point of all: the one that defines unprovoked deadly assault, whose violation is understood to give the other party the licence to kill the violator in self-defense. Normally, humans see such crucial Schelling points as sacrosanct. They are considered violable, if at all, only if the consequentialist scales are loaded to a far more extreme degree than in the common trolley problem formulations. Even in the latter case, the act will likely cause serious psychological damage. This is probably an artifact of additional commitment not to violate them, which may also be a safeguard against rationalizations.

Now, the utilitarian may reply that this is just human bias, an unfortunate artifact of evolutionary psychology, and we’d all be better off if people instead made decisions according to pure utilitarian calculus. However, even ignoring all the other fatal problems of utilitarianism, this view is utterly myopic. Humans are able to coordinate and cooperate because we pay respect to the Schelling points (almost) no matter what, and we can trust that others will also do so. If this were not so, you would have to be constantly alert that anyone might rob, kill, cheat, or injure you at any moment because their cost-benefit calculations have implied doing so, even if these calculations were in terms of the most idealistic altruistic utilitarianism. Clearly, no organized society could exist in that case: even if with unlimited computational power and perfect strategic insight you could compute that cooperation is viable, this would clearly be impractical.
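As a toy illustration of the trust point only (a standard iterated prisoner's dilemma with conventional payoffs, not the Schelling-point analysis itself), compare an agent that re-runs the one-shot cost-benefit calculation every round with agents who can be trusted to keep the cooperative rule:

```python
# Toy iterated prisoner's dilemma (hypothetical payoffs): per-interaction
# cost-benefit calculators defect, while mutually trusted rule-followers
# capture the larger cooperative surplus over repeated play.

T, R, P, S = 5, 3, 1, 0   # temptation, mutual cooperation, mutual defection, sucker
ROUNDS = 100

def one_shot_calculator(opponent_history):
    # In the single-round calculation defection dominates, so it always defects.
    return "D"

def rule_follower(opponent_history):
    # Keeps the cooperative rule as long as the other side has kept it too.
    return "C" if all(move == "C" for move in opponent_history) else "D"

PAYOFF = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}

def play(strategy_a, strategy_b, rounds=ROUNDS):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(history_b)   # each strategy sees the opponent's past moves
        b = strategy_b(history_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        history_a.append(a)
        history_b.append(b)
    return score_a, score_b

print(play(rule_follower, rule_follower))              # (300, 300): stable cooperation
print(play(one_shot_calculator, one_shot_calculator))  # (100, 100): the surplus is gone
```

The numbers are arbitrary; the only point is that if others cannot trust you to keep the rule regardless of your per-case calculations, the cooperative surplus that the rule protects disappears.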

It is however possible in practice for humans to evaluate each other’s personalities and figure out whether others’ decision algorithms, so to speak, follow these constraints. Think of how people react when they realize that someone has a criminal history or sociopathic tendencies. This person is immediately perceived as creepy and dangerous, and with good reason: people realize that his decision algorithm lacks respect for the conventional Schelling points, so that normal trust and relaxed cooperation with him is impossible, and one must be on the lookout for nasty surprises. Similarly, imagine meeting someone who was in the fat man/trolley situation and who mechanically made the utilitarian decision and pushed the man without a twitch of guilt. Even the most zealous utilitarian will in practice be creeped out by such a person, even though he should theoretically perceive him as an admirable hero. (As always when it comes to ideology, people may be big on words but usually know better when their own welfare is at stake.)

(This comment is also cursory and simplified, and an alert reader will likely catch multiple imprecisions and oversimplifications. This is unfortunately unavoidable because of the complexity of the topic. However, the main point stands regardless. In particular, I haven’t addressed the all too common cases where cooperation between people breaks down and all sorts of conflict ensue. But this analysis would just reinforce the main point that cooperation critically depends on mutual recognition of near-unconditional respect for Schelling points.)

Replies from: utilitymonster, sark
comment by utilitymonster · 2011-04-30T08:51:42.554Z · LW(p) · GW(p)

Can you explain why this analysis renders directing away from the five and toward the one permissible?

Replies from: Vladimir_M
comment by Vladimir_M · 2011-05-01T00:47:55.760Z · LW(p) · GW(p)

The switch example is more difficult to analyze in terms of the intuitions it evokes. I would guess that the principle of double effect captures an important aspect of what's going on, though I'm not sure how exactly. I don't claim to have anything close to a complete theory of human moral intuitions.

In any case, the fact that someone who flipped the switch appears much less (if at all) bad compared to someone who pushed the fat man does suggest strongly that there is some important game-theoretic issue involved, or otherwise we probably wouldn't have evolved such an intuition (either culturally or genetically). In my view, this should be the starting point for studying these problems, with humble recognition that we are still largely ignorant about how humans actually manage to cooperate and coordinate their actions, instead of naive scoffing at how supposedly innumerate and inconsistent our intuitions are.

comment by sark · 2011-04-29T23:42:39.708Z · LW(p) · GW(p)

Thanks! That makes sense.

comment by PlaidX · 2011-04-26T02:40:58.328Z · LW(p) · GW(p)

I like it, but stop using "ey". For god's sake, just use "they".

Replies from: AlexMennen, ArisKatsaris, Yvain, Emile
comment by AlexMennen · 2011-04-26T03:04:45.509Z · LW(p) · GW(p)

I reluctantly agree. I like Spivak pronouns, but since most people haven't even heard of them, using them probably makes your FAQ less effective for most people.

comment by ArisKatsaris · 2011-04-26T09:49:51.944Z · LW(p) · GW(p)

Seconded. I strongly dislike Spivak pronouns. Use "they".

comment by Scott Alexander (Yvain) · 2011-05-04T15:02:02.886Z · LW(p) · GW(p)

I agree that "ey" is annoying and distracting, but I feel like someone's got to be an early adopter or else it will never stop being annoying and distracting.

Replies from: PlaidX
comment by PlaidX · 2011-05-05T05:41:43.996Z · LW(p) · GW(p)

I know where you're coming from, but "they" is already the world's gender-neutral third person pronoun of choice, so why pick a different one? Even if it wasn't, you've got to pick your battles.

comment by Emile · 2011-04-27T07:56:47.000Z · LW(p) · GW(p)

Note that these first show up in the section on signaling.

Later on, there's a criticism of Deontology (using Rules as the final arbiter of what's right), by appealing to Rules:

There are only two possible justifications for the deontologist's action. First, [th]ey might feel that rules like “don't murder” are vast overarching moral laws that are much more important than simple empirical facts like whether people live or die. But this violates the Morality Lives In The World principle

Still later on:

No, this FAQ wasn't just an elaborate troll.

Hmm.

comment by Morendil · 2011-04-26T06:45:12.154Z · LW(p) · GW(p)

Some "first impressions" feedback: though it has a long tradition in geekdom, and occasionally works well, I for one find the fake-FAQ format extremely offputting. These aren't "frequently" asked questions - they are your questions, and not so much questions as lead-ins to various parts of an essay.

I'd be more interested if you started off with a concise definition of what you mean by "consequentialism". (If you were truly writing a FAQ, that would be the most frequently asked question of all). As it is, the essay starts losing me by the time it gets to "part one" - I skipped ahead to see how long I should expect to spend on preliminaries before getting to the heart of the matter.

(Usually, when you feel obliged to make a self-deprecating excuse such as "Sorry, I fell asleep several pages back" in your own writing, there's a structure issue that you want to correct urgently.)

Replies from: NihilCredo, None, sark, kpreid, endoself
comment by NihilCredo · 2011-04-26T15:31:27.524Z · LW(p) · GW(p)

Myself, I found the fake-FAQ format to work pretty well, since it's a relatively faithful recreation of Internet debates on morality/politics/whatever.

comment by [deleted] · 2011-04-26T07:44:37.368Z · LW(p) · GW(p)

I think the fake-FAQ format is good when you can use it to skip to the interesting things. I wouldn't read an essay but if I could just read two answers that interest me, I might read the rest too. This being said, in the Cons-FAQ a lot of questions refer to previous questions which of course completely destroys this advantage.

comment by sark · 2011-04-29T17:25:23.737Z · LW(p) · GW(p)

Fake-FAQs can be a method of misrepresenting arguments against your viewpoint. Like: "Check out all these silly arguments anti-consequentialists frequently use". Just an example, I'm not saying Yvain is doing this.

comment by kpreid · 2011-04-26T19:47:11.582Z · LW(p) · GW(p)

I don't care whether it's in the format of a FAQ, but don't call it a FAQ if the questions are not frequently asked.

Replies from: None
comment by [deleted] · 2011-04-26T19:50:59.634Z · LW(p) · GW(p)

I've long had the suspicion that many FAQs aren't really frequently asked.

comment by endoself · 2011-04-26T19:56:38.865Z · LW(p) · GW(p)

I have only read his anti-libertarian FAQ but the concerns mentioned in the questions did seem to be typical of those that would be asked.

comment by AlephNeil · 2011-04-26T05:24:33.894Z · LW(p) · GW(p)

I think your analysis of the 'gladiatorial objection' misses something:

I hope I'm not caricaturing you too much if I condense your rebuttal as follows: "People wouldn't really enjoy watching gladiators fight to the death. In fact, they'd be sickened and outraged. Therefore, utilitarianism does not endorse gladiatorial games after all."

But there's a problem here: If the negative reaction to gladiatorial games is itself partly due to analyzing those games in utilitarian terms then we have a feedback loop.

Games are outrageous --> decrease utility --> are outrageous --> etc.

But this could just as well be a 'virtuous circle':

Games are noble --> increase utility --> are noble --> etc.

If we started off with a society like that of ancient Rome, couldn't it be that the existence of gladiatorial games is just as 'stable' (with respect to the utilitarian calculus) as their non-existence in our own society?

Couldn't it be that we came to regard such bloodsports as being 'immoral' for independent, non-utilitarian reasons*? And then once this new moral zeitgeist became prevalent, utilitarians could come along and say "Aha! Far from being 'fun', just look at how much outrage the games would generate. If only our predecessors had been utilitarians, we could have avoided all this ugly carnage."

(Perhaps you will bite the bullet here, and grant that there could be a society where gladiatorial games are 'good' by utilitarian standards. But then there doesn't seem to be much hope for a utilitarian justification of the idea that, insofar as we have outlawed bloodsports, we have 'progressed' to a better state of affairs.

Or perhaps you will say that bloodsports would always be judged 'bad' under ideal rational reflection (that is, they go against our CEV). I think this is a much stronger reply, but it's not clear that CEV actually makes sense (i.e. that the limit is well-defined).)

* Sadly my knowledge of history is too meagre to venture an account of how this actually happened.

I have many other objections to utilitarianism up my sleeve. To give the gist of a few of them:

  1. Utilitarian calculations are impossible in practice because the future cannot be predicted sufficiently far.
  2. Utilitarian calculations are impossible even in theory because outcomes are incommensurable. The indeterminacies concerning whether 'more people' are preferable to 'happier people', and how far a superbeing's happiness is 'worth more' than a human's, are special cases of this, but incommensurability is ubiquitous. (For instance, just try weighing up all of the effects of the decision to buy a car rather than use public transport. The idea that there is a Right Answer Out There seems to me an article of blind faith.)
  3. Utilitarianism holds 'terminal preferences' to be beyond reproach. It does not allow for the possibility that an entire self-contained society ought to change its system of preferences, no matter how 'brutal' and 'destructive' these preferences are. (The point about gladiatorial games is a special case of this). It denies that one can make an objective judgement as to whether a paperclipper is 'wrong' and/or 'stupid' to fill the universe with paperclips. Ultimately, might makes right in the struggle between humanity and clippy.
  4. Utilitarianism faces some awkward choices in how it values the lives of 'ordinary people' (people who live reasonably happy lives but do not make lasting 'achievements' e.g. progress in science). If their value is positive then apparently it would be better to fill the universe with them than not, which seems absurd. How is it worthwhile or noble to try to explore the entire 'soap opera of Babel'? Isn't it just a stationary stochastic process? Haven't you seen it all once you've seen the first few billion episodes? But if their value is zero (resp. negative) then it seems that nuking an entire planet full of 'ordinary people', assuming it's not the only such planet, is morally neutral (resp. desirable). The only way of resolving the contradiction of human life being both incredibly precious and utterly worthless is to deny the premise that we need to assign it some value in order to decide how to act.
Replies from: Eugine_Nier, Yvain
comment by Eugine_Nier · 2011-04-26T05:49:36.282Z · LW(p) · GW(p)
  • Sadly my knowledge of history is too meagre to venture an account of how this actually happened.

Well, we have Christianity to blame for the decline of gladiatorial games.

Incidentally, now that we know Christianity to be false and thus gladiatorial games were banned under false pretenses, does recursive consistency require us to re-examine whether they are a good idea?

Replies from: NihilCredo, bogus, AlexanderRM
comment by NihilCredo · 2011-04-26T15:41:14.479Z · LW(p) · GW(p)

I hear that there already are voluntary, secretive leagues of people fighting to the death, even though the sport is banned. I don't know whether most fighters are enthusiastic or desperate for cash, though. But considering that becoming a Formula One driver was a common dream even when several driver deaths per year were the rule, I wouldn't be surprised if it were the former.

comment by bogus · 2011-04-26T20:45:02.577Z · LW(p) · GW(p)

Notwithstanding NihilCredo's point, the lack of gladiatorial combat today is most likely due to a genuine change in taste, probably related to secular decline in social violence and availability of increasingly varied entertainment (movie theaters, TV, video games etc.). The popularity of blood sports in general is decreasing. We also know that folks used to entertain themselves in ways that would be unthinkable today, such as gathering scores of cats and burning them in a fire.

Replies from: Eugine_Nier, steven0461
comment by Eugine_Nier · 2011-04-26T21:14:50.452Z · LW(p) · GW(p)

Notwithstanding NihilCredo's point, the lack of gladiatorial combat today is most likely due to a genuine change in taste, probably related to secular decline in social violence and availability of increasingly varied entertainment (movie theaters, TV, video games etc.).

For gladiatorial games specifically, their decline was caused by Christian objections. Sorry, you don't get to redefine historical facts just because they don't fit your narrative.

gathering scores of cats and burning them in a fire.

Wait, that sounds like fun.

Replies from: Alicorn
comment by Alicorn · 2011-04-26T21:15:57.065Z · LW(p) · GW(p)

Can you shed any light on why, or what would be fun about it? This confuses me.

comment by steven0461 · 2011-04-26T20:54:40.481Z · LW(p) · GW(p)

We also know that folks used to entertain themselves in ways that would be unthinkable today, such as gathering scores of cats and burning them in a fire.

It makes me suspicious when some phenomenon is claimed to be general, but in practice is always supported using the same example.

Replies from: Nornagest
comment by Nornagest · 2011-04-26T21:35:40.802Z · LW(p) · GW(p)

There's no shortage of well-documented blood sports both before and during the Christian era. I know of few as shocking as bogus's example (which was, incidentally, new to me), but one that comes close might be the medieval French practice of players tying a cat to a tree, restraining their own hands, and proceeding to batter the animal to death with their heads. This was mentioned in Barbara Tuchman's A Distant Mirror; Google also turns up a reference here.

I suppose there's something about cats that lends itself to shock value.

comment by AlexanderRM · 2015-04-18T21:57:32.385Z · LW(p) · GW(p)

I would say yes, we should re-examine it.

The entertainment value of forced gladiatorial games on randomly-selected civilians... I personally would vote against them because I probably wouldn't watch them anyway, so it would be a clear loss for me. Still, for other people voting in favor of them... I'm having trouble coming up with a really full refutation of the idea in the Least Convenient Possible World hypothetical where there's no other way to provide gladiatorial games, but there are some obvious practical alternatives.

It seems to me that voluntary gladiatorial games where the participants understand the risks and whatnot would be just fine to a consequentialist. It's especially obvious if you consider the case of poor people going into the games for money. There are plenty of people currently who die because of factors relating to lack of money. If we allowed such people to voluntarily enter gladiatorial games for money, then the gladiators would be quite clearly better off. If we ever enter a post-scarcity society but still have demand for gladiatorial games, then we can obviously ask for volunteers and get people who want the glory/social status/whatnot of it.

If for some reason that source of volunteers dried up, yet we still have massive demand, then we can have everyone who wants to watch gladiatorial games sign up for a lottery in exchange for the right to watch them, thus allowing their Rawlsian rights to be maintained while keeping the rest of the population free from worry.

comment by Scott Alexander (Yvain) · 2011-05-04T13:54:35.400Z · LW(p) · GW(p)

With the gladiatorial games, you seem to have focused on what I intended to be a peripheral point (I'll rephrase it later so this is clearer).

The main point is that forcing people to become gladiators against their will requires a system that would almost certainly lower utility (really you'd have to have an institution of slavery or a caste system; any other way and people would revolt against the policy since they would expect a possibility of being forced to become gladiators themselves).

Allowing people who want to, to become gladiators risks the same moral hazards brought up during debates on prostitution - ie maybe they're just doing it because they're too poor or disturbed to have another alternative, and maybe the existence of this option might prevent people from creating a structure in which they do have another alternative. I'm split on the prostitution debate myself, but in a society where people weren't outraged by gladiatorial games, I would be willing to bite the bullet of saying the gladiator question should be resolved the same way as the prostitute question.

In a utopian society where no one was poor or disturbed, and where people weren't outraged by gladiatorial games, I would be willing to allow people to become gladiators.

(in our current society, I'm not even sure whether American football is morally okay)

Replies from: AlexanderRM
comment by AlexanderRM · 2015-04-18T22:03:28.811Z · LW(p) · GW(p)

"The main point is that forcing people to become gladiators against their will requires a system that would almost certainly lower utility (really you'd have to have an institution of slavery or a caste system; any other way and people would revolt against the policy since they would expect a possibility of being to be gladiators themselves)."

It seems to me that, specifically, gladiatorial games that wouldn't lower utility would require that people not revolt against the system since they accept the risk of being forced into the games as the price they pay to watch the games. If gladiators are drawn exclusively from the slaves and lower castes, and the people with political power are exempted, then most likely the games are lowering utility.

@ Prostitution: Don't the same arguments apply to paid labor of any type?

Replies from: Jiro
comment by Jiro · 2015-04-19T02:03:42.876Z · LW(p) · GW(p)

In the case of prostitution, similar arguments apply to some extent to all jobs, but "to some extent" refers to a very different degree.

My test would be as follows: ask how much people would have to be paid before they would be willing to take the job (in preference to a job of some arbitrary but fixed level of income and distastefulness). Compare that amount to the price that the job actually gets in a free market. The higher the ratio gets, the worse the moral hazard.

I would expect both prostitution and being a gladiator to score especially badly in this regard.
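As a toy illustration of the test with invented figures (the jobs and numbers below are hypothetical, not claims about actual wages):

```python
# Hypothetical numbers only: the test compares the pay someone would demand
# before preferring the job to a fixed baseline job (reservation wage) with the
# pay the job actually commands in the market. A higher ratio suggests worse
# moral hazard.

jobs = {
    "baseline job":          {"reservation": 30_000, "market": 30_000},
    "hypothetical grim job": {"reservation": 200_000, "market": 40_000},
}

for name, wages in jobs.items():
    ratio = wages["reservation"] / wages["market"]
    print(f"{name}: ratio = {ratio:.1f}")
# baseline job: ratio = 1.0
# hypothetical grim job: ratio = 5.0  <- taken mainly under financial pressure
```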

comment by Vladimir_M · 2011-04-26T20:49:40.131Z · LW(p) · GW(p)

I skimmed the FAQ (no time to read it in detail right now, though I have bookmarked it for later). I must say that it doesn't address some of the most crucial problems of consequentialism.

Most notably, as far as I can tell, you don't even address the problem of interpersonal utility comparison, which makes the whole enterprise moot from the beginning. Then, as far as I see, you give the game-theoretic concerns only a cursory passing mention, whereas in reality, the inability to account for those is one reason why attempts to derive useful guidelines for action based on maximizing some measure of aggregate welfare are usually doomed to end up in nonsense. This, in turn, is just a special case of the general law of unintended consequences that consequentialists typically treat with hubristic nonchalance. I don't see any discussion of these essential issues.

On the whole, I would guess that your FAQ will sound convincing to a certain type of people, but it fails to address the most important problems with the views you advocate.

comment by Scott Alexander (Yvain) · 2011-05-04T15:09:48.943Z · LW(p) · GW(p)

Okay, summary of things I've learned I should change from this feedback:

  1. Fix dead links (I think OpenOffice is muddling open quotes and close quotes again)

  2. Table of contents/navigation.

  3. Stress REALLY REALLY REALLY HARD that this is meant to be an introduction and that there's much more stuff like game theory and decision theory that is necessary for a full understanding of utilitarianism.

  4. Change phlogiston metaphor subsequent to response from Vladimir_M

  5. Remove reference to Eliezer in "warm fuzzies" section.

  6. Rephrase "We have procedures in place for violating heuristics" to mention that it's patchwork and we still don't have elegant rules for this sort of thing.

  7. In part about using utilitarianism to set policy, stress involvement of clever tools like prediction markets to lessen the immediate reaction that I'm obviously flaming mad.

  8. Rephrase gladiatorial games to be more clear.

  9. Possibly change title to "Rarely Asked Questions" or "Never Asked Questions" or just keep the abbreviation the same but expand it to "Formatted Answers and Questions"

  10. Remove reference to desire utilitarianism until I'm sure I understand it.

  11. Change 'most consequentialists give similar results' to 'most popular consequentialisms...'

  12. Change bit about axiology collapsing distinctions.

  13. Consequentialism chooses best state of world ---> consequentialism chooses better world-state of two actions.

  14. Add to part about fat man that although the example is correct insofar as it correctly teaches consequentialist reasoning, game/decision theory type considerations might change the actual correct answer.

comment by Paul Crowley (ciphergoth) · 2011-04-26T07:16:18.523Z · LW(p) · GW(p)

One small thing: you define consequentialism as choosing the best outcome. I think it makes a big difference, at least to our intuitions, if we instead say something like:

Okay. The moral law is that you should take actions that make the world better. Or, put more formally, when asked to select between possible actions A and B, the more moral choice is the one that leads to the better state of the world by whatever standards you judge states of the world by.

In other words, it's not all about the one point at the pinnacle of all the choices you could make - it's about the whole scale. This helps people get over the burdensomeness/ "I'm not going to give all my money to SIAI/GWWC, so I might as well give up" thing.
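A minimal sketch of that pairwise framing (toy utilities and action names of my own, not from the FAQ): the moral comparison ranks any two actions by the value of the resulting world-state, so an intermediate option still counts as an improvement even though it is not the pinnacle.

```python
# Toy illustration of "choose the better of two actions" rather than
# "only the single best action counts". The utilities are made up.

def world_value(action):
    # Hypothetical stand-in for "how good the resulting state of the world is".
    return {"donate_nothing": 0, "donate_10_percent": 5, "donate_everything": 9}[action]

def more_moral(a, b):
    return a if world_value(a) >= world_value(b) else b

print(more_moral("donate_10_percent", "donate_nothing"))     # donate_10_percent
print(more_moral("donate_everything", "donate_10_percent"))  # donate_everything
```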

comment by AlexMennen · 2011-04-26T03:56:23.893Z · LW(p) · GW(p)

Some criticism that I hope you will find useful:

First of all, you should mention timeless decision theory, or at least superrationality. These concepts are useful for explaining why people's intuition that they should not steal is not horribly misguided even if the thief cares about himself more and/or needs it more than the previous owner. You touched on this by pointing out that the economy would collapse if everyone stole all the time, but I would suggest being more explicit.

(3.8) I think the best course of action would be to assign equal value to yourself and other people, which seems nicely in accord with there being no objective reason for a moral difference between you.

I take issue with this simply because it is not even remotely similar to the way anyone acts. I'd prefer it if we could just admit that we cared more about ourselves than about other people. Sure, utilitarianism says that the right thing to do would be to act like everyone, including oneself, is of equal value, and the world would be a better place if people actually acted this way. But no one does, and endorsing utilitarianism does not usually get them closer.

(5.31) Desire utilitarianism replaces preferences with desire. The differences are pretty technical and I don't understand all of them, but desire utilitarians sure seem to think their system is better.

Then I would suggest either doing the research or not mentioning it, since this is not critical to the concept of consequentialism. I'm not entirely clear on it either.

(7.4) For example, in coherent extrapolated volition utilitarianism, instead of respecting a specific racist's current preference, we would abstract out the reflective equilibrium of that racist's preferences if ey was well-informed and in philosophical balance. Presumably, at that point ey would no longer be a racist.

But what if he doesn't? You are right that this situation is a problem for simple preference utilitarianism that can be rectified by some other form of utilitarianism, but your suggested solution leads to a slippery slope towards justifying anything you want with CEV utilitarianism by claiming that everyone else's moral preferences would be exactly what you want them to be in their CEV. I think the real issue here is that we respect some forms of preferences much more than others. Recall that pleasure utilitarianism (which would be the extreme case of giving 0 weight to all but one form of preference) gives the answer we like in this case.

Replies from: NihilCredo, CronoDAS
comment by NihilCredo · 2011-04-26T15:46:02.148Z · LW(p) · GW(p)

First of all, you should mention timeless decision theory, or at least superrationality. These concepts are useful for explaining why people's intuition that they should not steal is not horribly misguided even if the thief cares about himself more and/or needs it more than the previous owner. You touched on this by pointing out that the economy would collapse if everyone stole all the time, but I would suggest being more explicit.

Very strongly disagree, and not just because I'm sceptical about both. The article is supposed to be about consequentialism, not Yvain's particular moral system. It should explain why you should apply your moral analysis to certain data (state of the world) instead of others ("rights"), but it shouldn't get involved in how your moral analysis exactly works.

Yvain correctly mentions that you can be a paperclip maximiser and still be a perfect consequentialist.

Replies from: bogus
comment by bogus · 2011-04-26T20:59:12.016Z · LW(p) · GW(p)

UDT and TDT are decision theories, not "moral systems". To the extent that consequentialism necessarily relies on some kind of decision theory--as is clearly the case, since it advocates choosing the optimal actions to take based on their outcomes--a brief mention of CDT, UDT and TDT explaining their relevance to consequentialist ethics (see e.g. the issue of "rule utilitarianism" vs. "action utilitarianism") would have been appropriate.

Replies from: NihilCredo
comment by NihilCredo · 2011-04-26T22:29:03.643Z · LW(p) · GW(p)

I deleted a moderate wall of text because I think I understand what you mean now. I agree that two consequentialists sharing the same moral/utility function, but adopting different decision theories, will have to make different choices.

However, I don't think it would be a very good idea to talk about various DTs in the FAQ. That is: showing that "people's intuition that they should not steal is not horribly misguided", by offering them the option of a DT that supports a similar rule, doesn't seem to me like a worthy goal for the document. IMO, people should embrace consequentialism because it makes sense - because it doesn't rely on pies in the sky - not because it can be made to match their moral intuitions. If you use that approach, you could in the same way use the fat man trolley problem to support deontology.

I might be misinterpreting you or taking this too far, but what you suggest sounds to me like "Let's write 'Theft is wrong' on the bottom line because that's what is expected by readers and makes them comfortable, then let's find a consequentialist process that will give that result so they will be happy" (note that it's irrelevant whether that process happens to be correct or wrong). I think discouraging that type of reasoning is even more important than promoting consequentialism.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-04-26T23:16:25.108Z · LW(p) · GW(p)

people should embrace consequentialism because it makes sense - because it doesn't rely on pies in the sky - not because it can be made to match their moral intuitions.

The whole point of CEV, reflexive consistency and the meta-ethics sequence is that morality is based on our intuitions.

Replies from: NihilCredo, Marius
comment by NihilCredo · 2011-04-26T23:34:04.596Z · LW(p) · GW(p)

Yes, I personally think that's awful. LessWrong rightly tends to promote being sceptical of one's mere intuitions in most contexts, and I think the same approach should be taken with morality (basically, this post on steroids).

comment by Marius · 2011-04-28T01:43:31.945Z · LW(p) · GW(p)

If this is to be useful, it would have to read "that our intuitions are based on morality".

comment by CronoDAS · 2011-04-26T05:48:24.614Z · LW(p) · GW(p)

(5.31) Desire utilitarianism replaces preferences with desire. The differences are pretty technical and I don't understand all of them, but desire utilitarians sure seem to think their system is better.

Then I would suggest either doing the research or not mentioning it, since this is not critical to the concept of consequentialism. I'm not entirely clear on it either.

Desire utilitarianism doesn't replace preferences with desires, it replaces actions with desires. It's not a consequentialist system; it's actually a type of virtue ethics. When confronted with the "fat man" trolley problem, it concludes that there are good agents that would push the fat man and other good agents that wouldn't. You should probably avoid mentioning it.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2011-05-04T13:58:40.453Z · LW(p) · GW(p)

Thank you. That makes more sense than the last explanation of it I read.

comment by Matt_Simpson · 2011-04-27T05:49:33.648Z · LW(p) · GW(p)

One problem with the FAQ: The standard metaethics around here, at least EY's metaethics, is not utilitarianism. Utilitarianism says maximize aggregate utility, with "aggregate" defined in some suitable way. EY's metaethics says maximize your own utility (with the caveat that you only have partial information of your utility function), and that all humans have sufficiently similar utility functions.

You get something pretty similar to utilitarianism from that last condition (if everyone has the same utility function and you're maximizing your own utility function, then you're also maximizing aggregate utility in many senses of the term "aggregate"). But note that the interpersonal comparison of utility problem vanishes: you're maximizing your own utility function. Maximization of the aggregate (under some methods of comparison) is merely a consequence of this, nothing more. Also note that if we relax that last condition and let humans have differing utility functions, there is no intrinsic problem for EY's theory. If someone has a legitimate preference for killing people, the utilitarian has to take that into account as positive utility (or add some ad hoc assumptions about which preferences matter). On EY's theory, sans that last condition, you only take into account someone's preference for murder if your utility function tells you to. You may value other humans satisfying their preferences, but that doesn't mean you have to value every single preference every single human has. You could, but it really just depends on what your utility function says.
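A compact way to state the shared-utility-function step (my notation, not a quote from EY): if every agent $i$ has the same utility function, $U_i = U$, then for any action $a$

$$\sum_{i=1}^{N} U_i(a) = N \, U(a), \qquad \text{so} \qquad \arg\max_a \sum_{i=1}^{N} U_i(a) = \arg\max_a U(a).$$

Under that assumption, maximizing your own utility function and maximizing the aggregate pick out the same actions, and no interpersonal comparison is needed; the moment the $U_i$ are allowed to differ, the equivalence breaks down, and only the "maximize your own utility function" reading still avoids the comparison problem.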

Replies from: ata, komponisto, Yvain, benelliott
comment by ata · 2011-04-27T07:26:04.531Z · LW(p) · GW(p)

One problem with the FAQ: The standard metaethics around here, at least EY's metaethics, is not utilitarianism. Utilitarianism says maximize aggregate utility, with "aggregate" defined in some suitable way. EY's metaethics says maximize your own utility (with the caveat that you only have partial information of your utility function), and that all humans have sufficiently similar utility functions.

Utilitarianism isn't a metaethic in the first place; it's a family of ethical systems. Metaethical systems and ethical systems aren't comparable objects. "Maximize your utility function" says nothing, for the reasons given by benelliott, and isn't a metaethical claim (nor a correct summary of EY's metaethic); metaethics deals with questions like:

What does moral language mean? Do moral facts exist? If so, what are they like, and are they reducible to natural facts? How can we know whether moral judgments are true or false? Is there a connection between making a moral judgment and being motivated to abide by it? Are moral judgments objective or subjective, relative or absolute? Does it make sense to talk about moral progress?

EY's metaethic approaches those questions as an unpacking of "should" and other moral symbols. While it does give examples of some of the major object-level values we'd expect to find in ethical systems, it doesn't generate a brand of utilitarianism or a specific utility function.

(And "utility" as in what an agent with a (VNM) utility function maximizes (in expectation), and "utility" as in what a utilitarian tries to maximize in aggregate over some set of beings, aren't comparable objects either, and they should be kept cognitively separate.)

Replies from: Matt_Simpson
comment by Matt_Simpson · 2011-04-28T19:05:54.382Z · LW(p) · GW(p)

Utilitarianism isn't a metaethic in the first place; it's a family of ethical systems.

Good point. Here's the intuition behind my comment. Classical utilitarianism starts with "maximize aggregate utility" and jumps off from there (Mill calls it obvious, then gives a proof that he admits to be flawed). This opens them up to a slew of standard criticisms (e.g. utility monsters). I'm not very well versed in more modern versions of utilitarianism, but the impression I get is that they do something similar. But, as you point out, all the utilitarian is saying is which utility function you should be maximizing (answer: the aggregate of the utility functions of all suitable agents).

EY's metaethics, on the other hand, eventually says something like "maximize this specific utility function that we don't know perfectly. Oh yeah, it's your utility function, and most everyone else's." With a suitable utility function, EY's metaethics seems completely compatible with utilitarianism, I admit, but that seems unlikely. The utilitarian has to take into account the murderer's preference for murder, should that preference actually exist (and not be a confusion). It seems highly unlikely to me that I and most of my fellow humans (which is where the utility function in question exists) care about someone's preference for murder. Even assuming that I/we thought faster, more rationally, etc.

Oh, and a note on the "maximize your own utility function" language that I used. I tend to think about ethics in the first person: what should I do. Well, maximize my own utility function/preferences, whatever they are. I only start worrying about your preferences when I find out that they are information about my own preferences (or if I specifically care about your preferences in my own.) This is an explanation of how I'm thinking, but I should know better than to use this language on LW where most people haven't seen it before and so will be confused.

Replies from: steven0461
comment by steven0461 · 2011-04-28T19:28:07.821Z · LW(p) · GW(p)

all the utilitarian is saying is which utility function you should be maximizing (answer: the aggregate of the utility functions of all suitable agents)

The answer is the aggregate of some function for all suitable agents, but that function needn't itself be a decision-theoretic utility function. It can be something else, like pleasure minus pain or even pleasure-not-derived-from-murder minus pain.
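
To make the distinction concrete, here is a minimal Python sketch (the per-agent numbers and the scoring rule are invented purely for illustration): the quantity being summed is a moral scoring rule chosen by the utilitarian, not any agent's decision-theoretic utility function.

```python
# A minimal sketch, with invented data, of aggregating something that is not a
# VNM utility function: "pleasure not derived from murder, minus pain".

agents = [
    {"pleasure": 10, "pleasure_from_murder": 0, "pain": 2},
    {"pleasure": 7,  "pleasure_from_murder": 5, "pain": 1},
]

def welfare(agent):
    # The aggregand: an arbitrary moral scoring rule, not anyone's preferences.
    return agent["pleasure"] - agent["pleasure_from_murder"] - agent["pain"]

aggregate = sum(welfare(a) for a in agents)
print(aggregate)  # (10 - 0 - 2) + (7 - 5 - 1) = 9
```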

Replies from: Matt_Simpson
comment by Matt_Simpson · 2011-04-29T05:48:29.704Z · LW(p) · GW(p)

Ah, I was equating preference utilitarianism with utilitarianism.

I still think that calling yourself a utilitarian can be dangerous if only because it instantly calls to mind a list of stock objections (in some interlocutors) that just don't apply given EY's metaethics. It may be worth sticking to the terminology despite the cost though.

comment by komponisto · 2011-04-28T05:43:47.429Z · LW(p) · GW(p)

One problem with the FAQ: The standard metaethics around here, at least EY's metaethics, is not utilitarianism...EY's metaethics says maximize your own utility...

Be careful not to confuse ethics and metaethics. You're talking about ethical theories here, rather than metaethical theories. (EY's metaethics is a form of cognitivism).

comment by Scott Alexander (Yvain) · 2011-05-04T14:35:18.601Z · LW(p) · GW(p)

I'm glad you brought that up, since it's something I've mentally been circling around but never heard verbalized clearly before.

Both the classical and the Yudkowskian system seem to run into some problems that the other avoids, and right now I'm classifying the difference as "too advanced to be relevant to the FAQ". Right now my own opinions are leaning toward believing that under reflective equilibrium my utility function should reference the aggregate utility function and possibly be the same as it.

comment by benelliott · 2011-04-27T06:59:16.136Z · LW(p) · GW(p)

It is literally impossible to maximise anything other than your own utility function, because your utility function is defined as 'that which you maximise'. In that sense EY's meta-ethics is a tautology; the important part is about not knowing what your utility function is.

Replies from: wedrifid, Matt_Simpson
comment by wedrifid · 2011-04-27T14:49:32.798Z · LW(p) · GW(p)

It is literally impossible to maximise anything other than your own utility function,

No. People can be stupid. They can even be wrong about what their utility function is.

because your utility function is defined as 'that which you maximise'.

It is "that which you would maximise if you weren't a dumbass and knew what you wanted'.

Replies from: benelliott
comment by benelliott · 2011-04-27T15:32:27.298Z · LW(p) · GW(p)

Good point.

Perhaps I should have said "it's impossible to intentionally maximise anything other than your utility function".

Replies from: blacktrance
comment by blacktrance · 2013-06-18T07:43:53.150Z · LW(p) · GW(p)

People can intentionally maximize anything, including the number of paperclips in the universe. Suppose there was a religion or school of philosophy that taught that maximizing paperclips is deontologically the right thing to do - not because it's good for anyone, or because Divine Clippy would smite them for not doing it, just that morality demands that they do it. And so they choose to do it, even if they hate it.

Replies from: benelliott
comment by benelliott · 2013-06-18T15:17:50.653Z · LW(p) · GW(p)

In that case, I would say their true utility function was "follow the deontological rules" or "avoid being smitten by Divine Clippy", and that maximising paperclips is an instrumental subgoal.

In many other cases, I would be happy to say that the person involved was simply not a utility maximiser, if their actions did not seem to maximise anything at all.

Replies from: blacktrance
comment by blacktrance · 2013-06-18T19:44:29.175Z · LW(p) · GW(p)

If you define "utility function" as "what agents maximize" then your above statement is true but tautological. If you define "utility function" as "an agent's relation between states of the world and that agent's hedons" then it's not true that you can only maximize your utility function.

Replies from: benelliott
comment by benelliott · 2013-06-18T21:07:26.085Z · LW(p) · GW(p)

I certainly do not define it the second way. Most people care about something other than their own happiness, and some people may care about their own happiness very little, not at all, or negatively. I really don't see why a 'happiness function' would be even slightly interesting to decision theorists.

I think I'd want to define a utility function as "what an agent wants to maximise", but I'm not entirely clear how to unpack the word 'want' in that sentence; I will admit I'm somewhat confused.

However, I'm not particularly concerned about my statements being tautological; they were meant to be, since they are arguing against statements that are tautologically false.

comment by Matt_Simpson · 2011-04-27T08:16:18.562Z · LW(p) · GW(p)

Unless I'm misunderstanding you, your definition leaves no room for moral error. Surely it's possible to come up with some utility function under which your actions are maximizing. So everyone has a utility function under which the actions they took were maximizing.

Replies from: benelliott
comment by benelliott · 2011-04-27T08:26:51.349Z · LW(p) · GW(p)

I'm not quite sure what you mean.

comment by Larks · 2011-04-26T10:26:24.211Z · LW(p) · GW(p)

A contents section, with links to the relevant sections, would aid navigation.

comment by XiXiDu · 2011-04-26T11:01:54.995Z · LW(p) · GW(p)

Some thoughts I had while reading (part of) the FAQ:

...our moral intuition...that we should care about other people.

Is it an established fact that this is a natural human intuition, or is it a culturally induced disposition?

If it is a natural human tendency, can we draw the line at caring about other people or do we also care about cute kittens?

Other moral systems are more concerned with looking good than being good...

Signaling is a natural human tendency. Just like caring about other people, humans care how they appear to other people.

Why should a moral theory satisfy one intuition but not the other?

You'll have to differentiate what it means to be good from what it means to look good.

...but in fact people are generally very moral: they feel intense moral outrage at the suffering in the world...

People also feel intense moral outrage about others burning paper books or eating meat.

You can't establish a fact about morality by cherry-picking certain behaviors.

So no particular intuition can be called definitely correct until a person has achieved a reflective equilibrium of their entire morality, which can only be done through careful philosophical consideration.

I am very skeptical of trying to apply game theoretic concepts to human values.

Human values are partly inconsistent. Can you show that forced consistency won't destroy what it means to be human?

People enjoy watching a large moon, even though it is an optical illusion. If you were to argue that we generally assign more importance to not falling prey to optical illusions, and therefore shouldn't conclude that it is desirable to see a large moon, you're messing with human nature: you're creating the sort of rational agent that is assumed by economic and game-theoretic theories rather than protecting human nature.

It's my moral intuition that if I failed to reflect on my disgust over homosexuality, and ended out denying homosexuals the right to marry based on that disgust, then later when I thought about it more I would wish I had reflected earlier.

You expect hindsight bias and therefore conclude that you should discard all your complex values in favor of...what?

I am not saying it is wrong, but we'll have to decide if we want to replace human nature with alien academic models or swallow the bitter pill and accept that we are inconsistent beings without stable utility functions that are constantly reborn.

...that morality must live in the world, and that morality must weight people equally.

Define "people"? What is it that assigns less weight to a chimpanzee than me?

Why should we assign a nonzero value to other people?

If I only assign weight to certain people then other people with more power might do the same and I will lose. So everyone except those who can overpower all others together would be better off to weigh each other equally.

But just because there is an equilibrium doesn't mean that it is desirable. Humans don't work like that. Humans refuse blackmail and are willing to sacrifice their own lives rather than accept a compromise.

But guilt is a faulty signal; the course of action which minimizes our guilt is not always the course of action that is morally right. A desire to minimize guilt is no more noble than any other desire to make one's self feel good at the expense of others, and so a morality that follows the principle of according value to other people must worry about more than just feeling guilty.

You are wrapping your argument in the terms you are trying to explain. You are begging the question.

But just as guilt is not a perfect signal, neither are warm fuzzies. As Eliezer puts it, you might well get more warm fuzzy feelings from volunteering for an afternoon at the local Shelter For Cute Kittens With Rare Diseases than you would from developing a new anti-malarial drug, but that doesn't mean that playing with kittens is more important than curing malaria.

You don't explain why warm fuzzies aren't more important than curing malaria.

If you go on and argue that what one really wants by helping cute kittens is to minimize suffering you are introducing and inducing an unwarranted proposition. You need empirical evidence to show that what humans really want isn't warm fuzzies.

Ironically, although these sorts of decisions are meant to prove the signaler is moral, they are not in themselves moral decisions: they demonstrate interest only in a good to the signaler...

So? What is a moral decision? You still didn't show why signaling is less important than saving your friend. All you are doing is telling people to feel guilty by signaling that it is immoral to be more concerned with signaling...

...there's a big difference between promoting your own happiness by promoting the happiness of others, and promoting your own happiness instead of promoting the happiness of others.

You are signaling again...

comment by D_Alex · 2011-04-26T03:17:51.670Z · LW(p) · GW(p)

On 3.4: The term "warm fuzzies" was invented before EY was born - I remember the story from high school.

With the power of Google, I found the story on the web; it is by Claude Steiner, copyrighted in 1969.

comment by satt · 2011-04-26T02:19:55.868Z · LW(p) · GW(p)

Not really philosophical feedback, but all the links except the one to the LW metaethics sequence seem to be broken for me.

Replies from: Sniffnoy
comment by Sniffnoy · 2011-04-26T18:57:27.486Z · LW(p) · GW(p)

Seems to be because they were written using "smart quotes" instead of actual quote marks.

comment by zeshen · 2021-02-28T14:46:24.950Z · LW(p) · GW(p)

Can this be an article on LW please? This link isn't very pretty and the raikoth link doesn't work. Thanks!

Replies from: amanda-de-vasconcellos
comment by Amanda de Vasconcellos (amanda-de-vasconcellos) · 2021-07-14T22:53:43.442Z · LW(p) · GW(p)

This seems to be the most recent: https://web.archive.org/web/20161020171351/http://www.raikoth.net/consequentialism.html

comment by Vladimir_Nesov · 2011-06-20T00:56:24.633Z · LW(p) · GW(p)

If God made His rules arbitrarily, then there is no reason to follow them except for self-interest (which is hardly a moral motive)

A perfectly moral motive, in the sense you are using the term.

Replies from: orthonormal
comment by orthonormal · 2011-06-21T17:40:52.523Z · LW(p) · GW(p)

Yeah, this jumped out at me too, but I think that expanding on that caveat would probably lose the intended audience.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-06-21T18:37:26.497Z · LW(p) · GW(p)

I would just remove the parenthetical and/or change the example. Possibly explain the distinction between terminal and instrumental goals first.

comment by Morendil · 2011-04-27T12:06:03.870Z · LW(p) · GW(p)

I'm looking forward to live discussion of this topic at the Paris meetup. :)

Meanwhile, I've read through it, more closely. Much of it seems, not necessarily right, but at least unobjectionable - it raises few red flags. On the other hand, I don't think it makes me much the wiser about the advantages of consequentialism.

Paras 7.2 and 7.3 (the slavery and gladiator questions) left me with an odd impression. The "test" you propose in both cases is more or less the same as Rawls' Veil of Ignorance. So at that point I was wondering, if you apply Rawls' procedure to determine what is a preferable social contract, perhaps you're a Rawlsian more than you're a consequentialist. :) BTW, are you familiar with Rawls' objections to (classical) utilitarianism?

Para 8.2 comes across as terribly naive, and "politics has been reduced to math" in particular seems almost designed to cause people to dismiss you. (A nitpick: the links at the end of 8.2 are broken.)

One thing that makes the essay confusing for me is the absence of a clear distinction between the questions "how do I decide what to do next" and "what makes for a desirable set of agreements among a large number of people" - between evaluating the morality of individual actions and choosing a social contract.

Another thing that's left out is the issue of comparing or aggregating happiness, or "utility", across different people. The one place where you touch on it, your response to the "utility monster" argument, does not match my own understanding of how a "utility monster" might be a problem. As I understood it, a "utility monster" isn't someone who is to you as you are to an ant, but someone just like you. They just happen to insist that an ice cream makes them a thousand times happier than it makes you, so in all cases where it must be decided which of you should get an ice cream, they should always get it.

Your analogy with optical illusions is apt, and it gives a good guideline for evaluating a proposed system of morality: in what cases does the proposed system lead me to change my mind on something that I previously did or avoided doing because of a moral judgement.

Interestingly, though, you give more examples that have to do with the social contract (gun control, birth control, organ donation policy, public funding of art, discrimination, slavery, etc.) than you give examples that have to do with personal decisions (giving to charities, trolley problems).

My own positions are contractarian, much more than they are deontological or consequentialist. I'm generally truthful, not because it is "wrong" to lie or because I have a rule against it (for instance I'm OK with lying in the context of a game, say Diplomacy, where the usual social conventions are known to be suspended - though I'd be careful about hurting others' feelings through my play even in such a context). I don't trust myself to compute the complete consequences of lying vs. not lying in each case, and so a literal consequentialism isn't an option for me.

However, I would prefer to live in a world where people can be relied upon to tell the truth, and for that I am willing to sacrifice the dubious advantage of being able to put a fast one over on other people from time to time. It is "wrong" to lie in the sense that if you didn't know ahead of time what particular position you'd end up occupying in the world (e.g. a politician with power) but only knew some general facts about the world, you would find a contract that banned lying acceptable, and would be willing to let this contract sanction lying with penalties. (At the same time, and for the same reason, I also put some value on privacy: being able to lie by omission about some things.)

I find XiXiDu's remarks interesting. It seems to me that at present something like "might makes right" is descriptively true of us humans: we could describe a morality only in terms of agreements and generally reliable penalties for violating these agreements. "If you injure others, you can expect to be put in prison, because that's the way society is currently set up; so if you're rational, you'll curb your desires to hurt others because your expected utility for doing so is negative".

However this sort of description doesn't help in finding out what the social contract "should" be - it doesn't help us find what agreements we currently have that are wrong because they result from the moral equivalent of optical illusions "fooling us" into believing something that isn't the case.

It also doesn't help us in imagining what the social contract could be if we weren't the sort of beings we are: if the agreements we enter were binding for reasons other than fear of penalties. This is a current limitation of our cognitive architectures but not a necessary one.

(I find this a very exciting question, and at present the only place I've seen where it can even be discussed is LW: what kind of moral philosophy would apply to beings who can "change their own source code".)

EDIT: having read Vladimir_M's reply below, his comments capture much of what I wanted to say, only better.

Replies from: AlexanderRM, Yvain
comment by AlexanderRM · 2015-04-18T22:23:52.858Z · LW(p) · GW(p)

"Paras 7.2 and 7.3 (the slavery and gladiator questions) left me with an odd impression. The "test" you propose in both cases is more or less the same as Rawls' Veil of Ignorance. So at that point I was wondering, if you apply Rawls' procedure to determine what is a preferable social contract, perhaps you're a Rawlsian more than you're a consequentialist. :) BTW, are you familiar with Rawls' objections to (classical) utilitarianism?"

I can't speak for Yvain but as someone who fully agreed with his use of that test, I would describe myself as both a Rawlsian (in the sense of liking the "veil of ignorance" concept) and a Utilitarian. I don't really see any conflict between the two. I think maybe the difference between my view and that of Rawls is that I apply something like the Hedonic Treadmill fully (despite being a Preference Utilitarian), which essentially leads to Yvain's responses.

...Actually I suppose I practically define the amount of Utility in a world by whether it would be better to live there, so maybe it would in fact be better to describe me as a Rawlsian. I still prefer to think of myself as a Utilitarian with a Rawlsian basis for my utility function, though (essentially I define the amount of utility in a world as "how desirable it would be to be born as a random person in that world"). I think it's that Utilitarianism sounds easier to use as a heuristic for decisions, whereas calling yourself a Rawlsian requires you to go one step further back every time you analyze a thought experiment.

Replies from: Morendil
comment by Morendil · 2015-04-20T20:46:30.289Z · LW(p) · GW(p)

This later piece is perhaps relevant.

comment by Scott Alexander (Yvain) · 2011-05-04T14:32:16.650Z · LW(p) · GW(p)

I've responded to some of Vladimir's comments, but just a few things you touched on that he didn't:

Utility monsters: if a utility monster just means someone who gets the same amount of pleasure from an ice cream that I get from an orgasm, then it just doesn't seem that controversial to me that giving them an ice cream is as desirable as giving me an orgasm. Once we get to things like "their very experience is a million times stronger and more vivid than you could ever imagine" we're talking a completely different neurological makeup that can actually hold more qualia, which is where the ant comes in.

I don't see a philosophical distinction between the morality an individual should use and the morality a government should use (although there's a very big practical distinction since governments are single actors in their own territories and so can afford to ignore some game theoretic and decision theoretic principles that individuals have to take into account). The best state of the world is the best state of the world, no matter who's considering it.

I use mostly examples from government because moral dilemmas on the individual level are less common, less standardized, and less well-known.

comment by Peterdjones · 2011-06-23T02:55:03.940Z · LW(p) · GW(p)

suppose some mathematician were to prove, using logic, that it was moral to wear green clothing on Saturday. There are no benefits to anyone for wearing green clothing on Saturday, and it won't hurt anyone if you don't. But the math apparently checks out. Do you shrug and start wearing green clothing? Or do you say “It looks like you have done some very strange mathematical trick, but it doesn't seem to have any relevance to real life and I feel no need to comply with it"?

Supposing a consequentialist were to prove using maths that you should be tortured a bit in order to bring about a net increase in the sum total of human happiness. Do you shrug and say: "get out the thumbscrews"? Or do you say "this has something to do with the world, but it is not moral because there is a rule that the end does not justify the means"?

comment by zaph · 2011-04-26T13:36:25.567Z · LW(p) · GW(p)

This isn't so much a critique of consequentialism as of the attempt at creating objective moral systems in general. I would love for the world to follow a particular moral order (namely mine). But there are people who, for what I would see as being completely sane reasons, disagree with me. On the edges, I have no problem writing mass murderers off as being insane. Beyond that, though, in the murky middle, there are a number of moral issues (and how is that dividing line drawn? Is it moral to have anything above a sustenance-level meal if others are starving in the world, for instance?) that I see as leading only to endless argument. This doesn't indicate one of the sides is being disingenuous, just that they have different values that cannot be simultaneously optimized. The Roman gladiator post by another commenter is an example. I view the Romans as PETA members would view me. I have justifications for my actions, as I'm sure Romans had for their actions. That's just the nature of the human condition. Academic moral philosophizing always comes across to me as trying to unearth a cosmic grading scale, even if there isn't a cosmic grader.

Replies from: fubarobfusco
comment by fubarobfusco · 2011-04-26T17:26:38.456Z · LW(p) · GW(p)

I view the Romans as PETA members would view me. I have justifications for my actions, as I'm sure Romans had for their actions. That's just the nature of the human condition.

What would it mean for the PETA member to be right? Does it just mean that the PETA member has sympathy for chickens, whereas you and I do not? Or is there something testable going on here?

It doesn't seem to me that the differences between the PETA members, us, and the Romans, are at all unclear. They are differences in the parties' moral universe, so to speak: the PETA member sees a chicken as morally significant; you and I see a Scythian, Judean, or Gaul as morally significant; and the Roman sees only another Roman as morally significant. (I exaggerate slightly.)

A great deal of moral progress has been made through the expansion of the morally significant; through recognition of other tribes (and kinds of beings) as relevant objects of moral concern. Richard Rorty has argued that it is this sympathy or moral sentiment — and not the knowledge of moral facts — which makes the practical difference in causing a person to act morally; and that this in turn depends on living in a world where you can expect the same from others.

This is an empirical prediction: Rorty claims that expanding people's moral sympathies to include more others, and giving them a world in which they can expect others to do the same in turn, is a more effective way of producing good moral consequences, than moral philosophizing is. I wonder what sort of experiment would provide evidence one way or the other.

Replies from: zaph, Nornagest
comment by zaph · 2011-04-26T18:09:04.346Z · LW(p) · GW(p)

That's an interesting link to Rorty; I'll have to read it again in some more detail. I really appreciated this quote:

We have come to see that the only lesson of either history or anthropology is our extraordinary malleability. We are coming to think of ourselves as the flexible, protean, self-shaping animal rather than as the rational animal or the cruel animal.

That really seems to hit it for me. That flexibility, the sense that we can step beyond being warlike, or even calculating, seems to be critical to what morals are all about. I don't want to make it sound like I'm against a generally moral culture, where happiness is optimized (or some other value I like personally). I just don't think moral philosophizing would get us there. I'll have to read up more on the moral sentiments approach. I have read some of Rorty's papers, but not his major works. I would be interested to see these ideas of his paired with meme theory. Describing moral sentiment as a meme that enters a positive feedback loop where groups that have it survive longer than ones that don't seems very plausible to me.

I'll have to think more about your PETA question. I think it goes beyond sympathy. I don't know how to test the positions though. I don't think viewing chickens as being equally morally significant would lead to a much better world (for humans - chickens are a different matter). Even with the moral sentiment view, I don't see how each side could come to a clear resolution.

comment by Nornagest · 2011-04-26T17:52:45.025Z · LW(p) · GW(p)

I do wonder what would constitute "good moral consequences" in this context. If it's being defined as the practical extension of goodwill, or of its tangible signs, then the argument seems very nearly tautological.

Replies from: fubarobfusco
comment by fubarobfusco · 2011-04-27T02:40:22.668Z · LW(p) · GW(p)

Not to put too fine a point on it, but part of Rorty's argument seems to be that if you don't already have a reasonably good sense for what "good moral consequences" would be, then you're part of the problem. Rorty claims that philosophical ethics has been largely concerned with explaining to "psychopaths" like Thrasymachus and Callicles (the sophists in Plato's dialogues who argue that might makes right) why they would do better to be moral; but that the only way for morality to win out in the real world is to avoid bringing agents into existence that lack moral sentiment:

It would have been better if Plato had decided, as Aristotle was to decide, that there was nothing much to be done with people like Thrasymachus and Callicles, and that the problem was how to avoid having children who would be like Thrasymachus and Callicles.

As far as I can tell, this fits perfectly into the FAI project, which is concerned with bringing into existence superhuman AI that does have a sense of "good moral consequences" before someone else creates one that doesn't.

Replies from: Nornagest
comment by Nornagest · 2011-04-27T05:31:35.081Z · LW(p) · GW(p)

You can't write an algorithm based on "if you don't get it, you're part of the problem". You can get away with telling that to your children, sort of, but only because children are very good at synthesizing behavioral rules from contextual cues. Rorty's advice might be useful as a practical guide to making moral humans, but it only masks the underlying issue: if the only way for morality to win in the real world is to avoid bringing amoral agents into existence, then there must already exist a well-bounded set of moral utility functions for agents to follow. It doesn't tell us much about what such a set might contain, giving only a loose suggestion that good morality functions tend to be relatively subject-independent.

Now, to encode a member of such a set into an AI (which may or may not end up being Friendly depending on how well those functions generalize outside the human problem domain), you need a formalization of it. To teach one implicitly, you need a formalization of something analogous (but not necessarily identical) to the social intuitions that human children use to derive their morals, which is most likely a harder problem. And if you have such a formalization, explaining an instance of moral behavior to a rational sociopath is as easy as running it on particular inputs.

Presented with an irrational sociopath you're out of luck, but I can't think of any ethical systems that don't have that problem.

comment by Morendil · 2011-04-26T06:36:33.141Z · LW(p) · GW(p)

Somewhat cringeworthy:

This term ("warm fuzzies"), invented as far as I know by computer ethicist Eliezer Yudkowsky...

I'm pretty sure not. See here for a reference dating the term itself back to 1969; also, it's been in use in geek culture for quite a while.

comment by Aiyen · 2019-01-02T23:24:51.531Z · LW(p) · GW(p)

Link doesn't seem to work.

comment by XiXiDu · 2011-04-27T09:43:08.857Z · LW(p) · GW(p)

Here is what David Pearce has to say about the FAQ (via Facebook):

Lucid. But the FAQ (and lesswrong) would be much stronger if it weren't shot through with anthropocentric bias...

Suggestion: replace "people" with "sentient beings".

For the rational consequentialist, the interests of an adult pig are (other things being equal) as important as a human toddler. Sperm whales are probably more sentient than human toddlers; chickens probably less.

Ethnocentric bias now seems obvious. If the FAQ said "white people" throughout rather than "people", then such bias would leap off the page - though it wouldn't to the Victorians.

Sadly, anthropocentric bias is largely invisible.

Replies from: Vaniver, wedrifid, Tripitaka, Yvain, Sniffnoy, Emile
comment by Vaniver · 2011-04-27T17:55:02.594Z · LW(p) · GW(p)

For the rational consequentialist, the interests of an adult pig are (other things being equal) as important as a human toddler.

Because, in order to be a rational consequentialist, one needs to forget that human toddlers grow into adult humans and adult pigs grow into... well, adult pigs.

Replies from: NihilCredo
comment by NihilCredo · 2011-04-29T03:26:53.312Z · LW(p) · GW(p)

Careful, that leads straight into an abortion debate (via "if you care that much about potential development, how much value do you give a fetus / embryo / zygote?").

Replies from: Vaniver
comment by Vaniver · 2011-04-29T20:05:41.919Z · LW(p) · GW(p)

I am aware. If the thought process involved is "We can't assign values to future states because then we might be opposed to abortion", then I recommend abandoning that process. If the thought process is just "careful, there's a political schism up ahead", it fails to recognize that we are already in a political schism about animal rights.

Replies from: None, NihilCredo
comment by [deleted] · 2011-04-29T23:26:00.896Z · LW(p) · GW(p)

There is more than one way to interpret your original objection and I wonder whether you and NihilCredo are talking about the same thing. Consider two situations: (1) The toddler and the pig are in mortal danger. You can save only one of them. (2) The toddler and the pig will both live long lives but they're about to experience extreme pain. Once again, you can prevent it for only one of them.

I think it's correct to take future states into consideration in the second case where we know that there will be some suffering in the future and we can minimize it by asking whether humans or pigs are more vulnerable to suffering resulting from past traumas.

But basing the decision of which being gets to have the descendants of its current mind-state realized in the future on the awesomeness of those descendants, rather than solely on the current mind-state, seems wrong. And the logical conclusion of that wouldn't be opposition to abortion; it would be opposition to anything that isn't efficient procreation.

Replies from: Vaniver
comment by Vaniver · 2011-04-30T05:09:49.109Z · LW(p) · GW(p)

But basing the decision of which being gets to have the descendants of its current mind-state realized in the future on the awesomeness of those descendants rather than solely on the current mind-state seems wrong.

Why throw away that information? Because it's about the future?

Replies from: None
comment by [deleted] · 2011-05-03T01:13:21.124Z · LW(p) · GW(p)

I don't know how to derive my impression from first principles. So the answer has to be: because my moral intuitions tell me to do so. But they only tell me so in this particular case -- I don't have a general rule of disregarding future information.

Replies from: Vaniver
comment by Vaniver · 2011-05-03T03:45:32.454Z · LW(p) · GW(p)

Ok. I will try to articulate my reasoning, and see if that helps clarify your moral intuitions: a "life" is an ill-defined concept, compared to a "lifespan." So when we have to choose one of two individuals, the way our choice changes the future depends on the lifespans involved. If the choice is between saving someone 10 years old with 70 years left or someone 70 years old with 10 years left, then one choice results in 60 more years of aliveness than the other! (Obviously, aliveness is not the only thing we care about, but this is good enough for a first approximation to illustrate the idea.)

And so the state between now and the next second (i.e. the current mind-state) is just a rounding error when you look at the change to the whole future; in the future of the human toddler it is mostly not a human toddler, whereas in the future of the adult pig it is mostly an adult pig. If we prefer adult humans to adult pigs, and we know that adult pigs have a 0% chance of becoming adult humans and human toddlers have a ~98% chance of becoming adult humans, then combining those pieces of knowledge gives us a clear choice.

If this is not a general principle, it may be worthwhile to try and tease out what's special about this case, and why that seems special. It may be that this is a meme that's detached from its justification, and that you should excise it, or that there is a worthwhile principle here you should apply in other cases.
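
As a rough illustration of this framing, here is a small expected-value sketch. The 98% figure comes from the comment above; the remaining-lifespan numbers and the relative weight placed on pig-years versus human-years are purely illustrative assumptions.

```python
# A minimal sketch, under assumed numbers, of weighting choices by their futures
# rather than by the current mind-state alone.

p_toddler_becomes_adult_human = 0.98   # figure used in the comment above
toddler_years_left = 75                # assumption: remaining human lifespan
pig_years_left = 12                    # assumption: remaining pig lifespan

weight_human_year = 1.0                # assumption: value of one adult-human year
weight_pig_year = 0.1                  # assumption: value of one adult-pig year

ev_toddler = p_toddler_becomes_adult_human * toddler_years_left * weight_human_year
ev_pig = pig_years_left * weight_pig_year

print(ev_toddler, ev_pig)  # 73.5 vs 1.2: the future dominates the comparison
```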

comment by NihilCredo · 2011-04-29T23:23:26.395Z · LW(p) · GW(p)

I meant the latter. Your assessment is correct, although the mind-killing ability of a real-life debate (prenatal abortion y/n) is significantly higher than that of a largely hypothetical debate (equalising the rights of toddlers and smart animals).

comment by wedrifid · 2011-04-27T17:44:50.839Z · LW(p) · GW(p)

If the FAQ said "white people" throughout rather than "people", then such bias would leap off the page - though it wouldn't to the Victorians.

What would leap off the page is the 'white people' phrase. Making that explicit would be redundant and jarring. Perhaps even insulting. It should have been clear what 'people' meant without specifying color.

comment by Tripitaka · 2011-04-29T21:05:07.795Z · LW(p) · GW(p)

In fact, adult pigs are of more concern than <2-year-old toddlers; they pass a modified version of the mirror test and thus seem to be self-conscious. http://en.wikipedia.org/wiki/Pigs#cite_note-AnimalBehaviour-10

comment by Scott Alexander (Yvain) · 2011-05-04T15:08:21.609Z · LW(p) · GW(p)

I can't even use nonstandard pronouns without it impeding readability, so I think I'm going to sacrifice precision and correctness for the sake of ease-of-understanding here.

comment by Sniffnoy · 2011-04-28T00:35:55.784Z · LW(p) · GW(p)

"People" need not mean "humans", it can mean "people".

Also, people should really stop using the word "sentient". It's a useless word that seems to serve no purpose beyond causing people to get intelligence and consciousness confused. (OK, Pearce does seem pretty clear on what he means here; he doesn't seem to have been confused by it himself. Nonetheless, it's still aiding in the confusion of others.)

comment by Emile · 2011-04-27T09:49:38.750Z · LW(p) · GW(p)

Now we know Clippy's true identity!

(I kid, I kid. Thinking correctly about morality applied to non-human sentient beings is a Tough Problem)

comment by XiXiDu · 2011-04-26T11:29:16.610Z · LW(p) · GW(p)

As far as I can tell, she means “Better that everyone involved dies as long as you follow some arbitrary condition I just made up, than that most people live but the arbitrary condition is not satisfied.” Do you really want to make your moral decisions like this?

The whole problem is that everything is framed in moral terminology.

The trolley problem really isn't any different from the prisoner's dilemma with regard to human nature.

On one side there is the game-theoretic, Pareto-suboptimal solution that a rational agent would choose, and on the other side there is human nature.

Human nature and game theory are incompatible.

To cooperate or not to push the fat guy are human values.

Academic models like game theory are memes that discard complex human values and replace them with simple equilibria between agents carrying the same memes.

comment by Morendil · 2011-04-26T07:10:31.956Z · LW(p) · GW(p)

Here's something that, if it's not a frequently asked question, ought to be one. What do you mean by "state of the world" - more specifically, evaluated at what time, and to what degree of precision?

Some utilitarians argue that we should give money to poorer people because it will help them much more than the same amount will help us. An obvious question is "how do you know"? You don't typically know the consequences that a given action will have over time, and there may be different consequences depending on what "state of the world" you consider.

The immediate difference is that someone has a little more money and you have a little less, but it becomes a much more complicated problem when you start thinking of longer term consequences. If I give that person money, will they use it well or will they spend it on drink; if we send aid to poor countries, will it help the people who need it or be diverted; will these countries become dependent on foreign aid, destroying an autonomy which is more precious than money; etc.

The organ donation example is interesting in this regard. You have worked out one particular subset of the consequences ("the lives of a thousand people a year"), but what guarantees do you have that the proposed opt-out system wouldn't have other consequences, in the short, medium or long term, that eventually turn out to outweigh these positive consequences?

comment by James_Miller · 2011-04-26T02:39:43.156Z · LW(p) · GW(p)

6.4) In the United States at least there are so many laws that it's not possible to live a normal life without breaking many of them.

See: http://www.harveysilverglate.com/Books/tabid/287/Default.aspx

7.1) you could come up with better horribly "seeming" outcomes that consequentialism would result in. For example, consequentialists who believe in heaven and hell and think they have insight into how to get people into heaven would be willing to do lots of nasty things to increase the number of people who go to heaven. Also, dangerous sweatshops in poor countries that employ eight-year-olds become praiseworthy if they provide the children with better outcomes than the children would otherwise receive.

8.1) Does "lack of will" account for failure to solve collective action problems?

Replies from: NihilCredo, AlexanderRM
comment by NihilCredo · 2011-04-26T15:50:54.809Z · LW(p) · GW(p)

Also, dangerous sweatshops in poor countries that employ eight-year-olds become praiseworthy if they provide the children with better outcomes than the children would otherwise receive.

This is actually a serious, mainstream policy argument that I've heard several times. It goes like "If you ban sweatshops, sweatshop workers won't have better jobs; they'll just revert to subsistence farming or starve to death as urban homeless". I'm not getting into whether it's a correct analysis (and it probably depends on where and how exactly 'sweatshops' are 'banned'), but my point is that it wouldn't work very well as an "outrageous" example.

Replies from: James_Miller, AlexanderRM
comment by James_Miller · 2011-04-28T01:32:03.289Z · LW(p) · GW(p)

That's why I wrote "horribly 'seeming'" and not just horribly.

comment by AlexanderRM · 2015-04-18T22:37:08.412Z · LW(p) · GW(p)

Interesting observation: You talked about that in terms of the effects of banning sweatshops, rather than in terms of the effects of opening them. It's of course the exact same action and the same result in every way (deontological as well as consequentialist), but it changes from "causing people to work in horrible sweatshop conditions" to "leaving people to starve to death as urban homeless", so it switches around the "killing vs. allowing to die" burden. (I'm not complaining, FYI; I think it's actually an excellent technique. Although maybe it would be better if we came up with language to list two alternatives neutrally, with no burden of action.)

comment by AlexanderRM · 2015-04-18T22:33:37.854Z · LW(p) · GW(p)

"consequentialists who believe in heaven and hell and think they have insight into how to get people into heaven would be willing to do lots of nasty things to increase the number of people who go to heaven."

I fully agree with this (as someone who doesn't believe in heaven and hell, but is a consequentialist), and also would point out that it's not that different from the way many people who believe in heaven and hell already act (especially if you look at people who strongly believe in them; ignore anyone who doesn't take their own chances of heaven/hell into account in their own decisions).

In fact I suspect that even from an atheistic, humanist viewpoint, consequentialism on this one would have been better in many historical situations than the way people acted in real life; if a heathen will go to hell but can be saved by converting them to the True Faith, then killing heathens becomes an utterly horrific act. Of course, it's still justified if it allows you to conquer the heathens and forcibly convert them all, as is killing a few as examples if it gets the rest to convert, but that's still better than the way many European colonizers treated native peoples in many cases.

comment by Carinthium · 2013-08-05T00:36:33.714Z · LW(p) · GW(p)

As a philosophy student, complicated though the FAQ is, I think I could knock down the arguments therein fairly easily. That's kind of irrelevant, however.

More importantly, I'm looking for the serious argument for Consequentialism on Lesswrong. Could somebody help me out here?

Replies from: shminux
comment by shminux · 2013-08-05T05:41:26.030Z · LW(p) · GW(p)

If not consequentialism, what's your preferred ethics?

Replies from: Carinthium
comment by Carinthium · 2013-08-05T06:06:08.279Z · LW(p) · GW(p)

I haven't fully made up my mind yet - it would be inaccurate to place me in any camp for that reason.

comment by drnickbone · 2012-04-20T21:09:56.986Z · LW(p) · GW(p)

Some observations... There is no discussion in your FAQ of the distinctions between utilitarianism and other forms of consequentialism, or between act consequentialism and rule consequentialism, or between "partial" and "full" rule consequentialism. See http://plato.stanford.edu/entries/consequentialism-rule/ for more on this.

Where you discuss using "heuristics" for moral decision making (rather than trying to calculate the consequences of each action), you are heading into the "partial rule consequentialism" camp. To move further into that camp, you might consider whether it is praiseworthy or blameworthy to follow usual moral rules (heuristics) in a case where breaking the rules would lead to higher expected utility. Generally it will lead to better consequences if we praise people for following the heuristics, while blaming them for departing from the heuristics. And then ask yourself whether you really want an ethical theory that says when faced with a choice between action X and Y, act X is strictly the "right" one (while Y is strictly wrong), yet X is blameworthy (while Y is praiseworthy). These are the sorts of considerations that lead to full rule utilitarianism.

Another thing you might want to cover is the objection that consequentialism is extremely demanding. Act consequentialism in particular requires well-off people to give away almost all their money to charities which have a high utility impact per dollar (sanitation improvements in slums, malaria nets and de-worming of children in poor African countries), rather than spending the marginal dollar on themselves or their friends or family. Most versions of rule utilitarianism can avoid that problem (humans, being human, will never accept such overly demanding moral rules, and it would have very bad consequences to try to make them).

comment by MarkLee · 2011-04-28T00:57:41.443Z · LW(p) · GW(p)

Part One: Methodology: Why think that intuitions are reliable? What is reflective equilibrium, other than reflecting on our intuitions? If it is some process by which we balance first-order intuitions against general principles, why think this process is reliable? Metaethics: Realism vs. Error vs. Expressivism?

Part Two: 2.6 I don't see the collapse - an axiology may be paired with different moralities - e.g. a satisficing morality, or a maximizing morality. Maybe all that is meant by the collapse is that the right is a function of the good? Then 'collapse' is misleading.

Part Four: 4.2 Taking actions that make the world better is different from taking actions that make the world best. Consequentialism says that only consequences matter - a controversial claim that hasn't been addressed.
4.4 Makes a strawman of the deontologist. Deontologists differ from consequentialists in ways other than avoiding dirtying their hands / guilt. They may care about not using others as means, or distinctions like doing/allowing, killing/letting die, etc., which apply to some trolley cases, and (purportedly) justify not producing the best consequences. More argument is needed to show that this precludes morality from 'living in the world'.

Part Five: 5.4 Not obvious that different consequentialisms converge on most practical cases. Some desire pain. Some desire authenticity, achievement, relationships, etc. (no experience machine). Some desire not to be cheated on / have their wills disregarded / etc.

Part Seven: 7.3 Doesn't address the strongest form of the objection. A stronger form is: we know that certain acts or institutions are necessarily immoral (gladiatorial games, slavery); utilitarianism could (whether or not it does) require we promote these; therefore utilitarianism is false. I like the utility monster example of this. The response in 7.5 to the utility monster case is bullet-biting - this should be the response in 7.3. The response that utilitarianism probably won't tell us to promote these is inadequate. The mistake is remade by the three responses in 7.4 (prior to the appeal to ideal rather than actual preferences).
7.6 Similar problem here. The response quibbles with contingent facts, but the force of the objection is that vicious, repugnant, petty, stupid, etc., preferences have no less weight in principle, i.e. in virtue of their status as such.
7.7 Response misses the point. The objection is that it's hard to see how utilitarianism can accommodate the intuitive distinction between higher and lower pleasures. Sure, utilitarians have nothing against symphonies, but would a world with symphonies be best? (Would an FAI-generated world contain symphonies?)
7.9 Rather quick treatment of the demandingness objection. One relevant issue in the vicinity is that of agent-centered permissions - permissions to do less than the best (in consequentialist terms), e.g. to favor those with whom we have special relations. Many philosophers and folk alike believe in such permissions - utilitarianism has a counterintuitive result here.

Suggestions for further content: (1) How are we to conceive of 'better' consequences? Perhaps any of the answers given by the aforementioned systems would suffice - pleasure, preference satisfaction, ideal preference satisfaction. But I'm not convinced these are practically/pragmatically equivalent. For instance, there may be different best methods for investigating what produces the most pleasure vs. what would best satisfy our ideal preferences, and so different practical recommendations. (2) What's our axiology? Is it total utilitarian, egalitarian, prioritarian, maximizing, satisficing, etc.? How do the interests of animals, future time slices, and future individuals weigh against present human interests? A total utilitarian approach seems to be advocated, but that faces its own set of problems (repugnant conclusions, fanaticism, etc.).

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2011-05-04T14:58:41.909Z · LW(p) · GW(p)

P1: Intuitions being "reliable" requires that the point of intuitions be to correspond to something outside themselves. I'm not sure moral intuitions have this point.

P2: Point taken.

P4.2: I agree with taking actions that make the world better instead of best and will rephrase. I don't understand the point of your second sentence.

4.4: Concern about not using others as means, or doing/allowing distinctions, seem to me common-sensically not to be about states of the world. I'm not sure what further argument is possible let alone necessary. The discussion of guilt only says that's the only state-of-the-world-relevant difference.

5.4: Would you agree that most of the philosophically popular consequentialisms (act, rule, preference, etc.) usually converge?

7.3 and below: I don't think slavery and gladiators are necessarily wrong. I can imagine situations in which they would be okay (I've mentioned some for gladiators above) and I remain open to moral argument from people who want to convince me they're okay in our own world (although I don't expect this argument to succeed any more than I expect to be convinced that the sky is green).

If the belief that slavery is wrong is not an axiom, but instead derives from deeper moral principles that when formalized under reflective equilibrium give you consequentialism, then I think it's fair to say that consequentialism proves they are wrong, but that in a counterfactual world where consequentialism proved they were right, I would either have intuitions that they were right, or be willing to discard my intuition that they were wrong after considering the consequentialist arguments against it.

comment by DavidM · 2011-04-27T21:20:23.755Z · LW(p) · GW(p)

I like the idea of a non-technical explanation of consequentialism, but I worry that many important distinctions will be conflated or lost in the process of generating something that reads well and doesn't require the reader to spend a lot of time thinking about the subject by themselves before it makes sense.

The issue that stands out the most to me is what you write about axiology. The point you seem to want to get across, which is what I would consider to be the core of consequentialism in one sentence, is that "[...] our idea of “the good” should be equivalent or directly linked to our idea of 'the right'." But that woefully underspecifies a moral theory; all it does is pick out a group of related theories, which we call "consequentialist".

It's important to realize how much possible variation there is among consequentialist theories, and the most straightforward way to do it that I see is to give more serious consideration to the role that axiology plays in a total moral theory. For example, a basic way to taxonomize theories while simplifying away a lot of technical issues that are not important for the kind of overview you're providing is:

1) Does the theory fundamentally concern itself with outcomes or something other than outcomes?

  • A theory that fundamentally concerns itself with outcomes (however "outcomes" are defined) is consequentialist. (Other theories have other concerns and other names.)

2) What kinds of outcomes does the theory concern itself with?

  • Outcomes concerning the satisfaction of people's preferences.

  • Outcomes concerning people's happiness.

  • Outcomes concerning non-human happiness.

  • Outcomes concerning ecological sustainability.

  • Outcomes concerning paperclips.

  • etc.

All of these describe consequentialist axiologies which lead to different consequentialist theories.

3) For consequentialist theories, the rightness of an action (i.e. the indicator of whether it should be done or not) depends on its consequences or expected consequences. What actions are right for an agent?

  • Any action that leads to a sufficiently good outcome, where "sufficiently good" is somehow defined...

    -...in relation to the current state of the world. E.g. an action is right if it leads to an outcome that is better (according to the theory's axiology) than the current state of things. "Leave things better than you found them."

    -...in relation to the actions that an agent can actually do. E.g. an action is right if it leads to an outcome that is better than 90% of the other outcomes (according to the theory's axiology) that an agent can bring into effect through other actions. "Do enough good; sainthood not required."

  • Any action that leads to an outcome for which there is no better outcome according to the theory, among all the other outcomes the agent can bring about.

    -E.g. You have the task of distributing two bars of chocolate to Ann and Bob. Any bar of chocolate you don't distribute immediately disappears. Your theory's complete axiology is "all else being equal, it's better when one person has more chocolate bars than they otherwise would have had." Your actions can distribute chocolate bars like this:

    Action A --> Outcome A: Ann 0 Bob 0

    Action B --> Outcome B: Ann 1 Bob 0

    Action C --> Outcome C: Ann 0 Bob 1

    Action D --> Outcome D: Ann 2 Bob 0

    Action E --> Outcome E: Ann 0 Bob 2

    Action F --> Outcome F: Ann 1 Bob 1

    According to your theory, every outcome is better than A. Outcomes D and F are better than B. Outcomes E and F are better than C. Outcomes D, E, and F are neither better than nor worse than nor equal to each other. So Actions D, E, and F would be right, and the rest would be wrong. "Act to maximize value; if that's undefined, don't leave any extra value on the table." (A small code sketch of this computation appears after this list.)

  • etc.
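To make the chocolate example concrete, here is a minimal sketch in Python (my own illustration, not from the FAQ or the comment; the outcome tuples and the dominates helper are invented names). Under that axiology, an outcome dominates another if it gives at least as much chocolate to everyone and strictly more to someone, and the right actions are exactly those whose outcomes no other available outcome dominates:

    # Outcomes from the example, as (Ann, Bob) chocolate counts.
    outcomes = {
        "A": (0, 0),
        "B": (1, 0),
        "C": (0, 1),
        "D": (2, 0),
        "E": (0, 2),
        "F": (1, 1),
    }

    def dominates(x, y):
        """x is strictly better than y under the example's axiology:
        at least as much chocolate for everyone, strictly more for someone."""
        return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

    # Right actions: those whose outcome is not dominated by any other available outcome.
    right = [
        name for name, outcome in outcomes.items()
        if not any(dominates(other, outcome) for other in outcomes.values())
    ]
    print(right)  # ['D', 'E', 'F']

Running this prints ['D', 'E', 'F'], matching the verdict in the example: maximize where the axiology gives a clear answer, and otherwise just don't leave value on the table.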

I've left out a lot of issues concerning expected consequences vs. actual consequences, agent knowledge, value measurement and aggregation, satisficing, etc. which I think are not important given the goals of your FAQ. But I'd say it's important to get across to the non-specialist that the range of consequentialist theories is pretty large, and there are a lot of issues that a consequentialist theory will have to deal with. (In other words, there's no monolithic theory called "consequentialism" that you can subscribe to which will pass judgements on your actions. If you say you believe in "consequentialism", you have to say more in order to pin down what actions you believe are right and wrong.) If you don't make this clear, people may fill in the blanks in idiosyncratic ways and then react to the ways they've filled them in, which is likely not to lead to anyone being persuaded, or more importantly, anyone being informed. One easy way to resolve this is to define consequentialist theories as those concerned with outcomes, define consequentialist axiologies as theories of what kinds of outcomes are valuable, describe some common methods for determining right actions, and say that a consequentialist moral theory is a consequentialist axiology + a way of determining right actions based on that axiology.
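Under that framing, the decomposition can be sketched in Python as well (again a hedged illustration of my own; maximizing_rule, satisficing_rule, and people_helped are invented names, not anything from the FAQ): an axiology is a function that scores outcomes, a decision rule is a function that turns those scores into a verdict about actions, and a consequentialist theory is just the pair.

    from typing import Callable, Dict, List

    Axiology = Callable[[str], float]  # maps an outcome description to how good it is

    def maximizing_rule(options: Dict[str, str], value: Axiology) -> List[str]:
        """Right actions are exactly those whose outcomes score highest."""
        best = max(value(outcome) for outcome in options.values())
        return [action for action, outcome in options.items() if value(outcome) == best]

    def satisficing_rule(threshold: float):
        """Right actions are those whose outcomes are 'good enough'."""
        def rule(options: Dict[str, str], value: Axiology) -> List[str]:
            return [action for action, outcome in options.items() if value(outcome) >= threshold]
        return rule

    # One axiology, two decision rules, two different verdicts:
    options = {"donate": "two people helped", "keep": "no one helped"}
    people_helped: Axiology = lambda o: 2.0 if o == "two people helped" else 0.0
    print(maximizing_rule(options, people_helped))        # ['donate']
    print(satisficing_rule(0.0)(options, people_helped))  # ['donate', 'keep']

The point of the sketch is just that "consequentialism" by itself fixes neither the value function nor the rule; you have to supply both before the theory says anything about which actions are right.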

EDIT FOR CLARITY: My point is not that you don't ever bring up these issues, but that these issues are fundamental (theoretically and pedagogically) and I'd make sure that the structure of the FAQ emphasizes that.

comment by Jayson_Virissimo · 2011-04-27T02:11:06.481Z · LW(p) · GW(p)

The basic thesis is that consequentialism is the only system which both satisfies our moral intuition that morality should make a difference to the real world, and that we should care about other people.

Why am I supposed to adopt beliefs about ethics and decision theory based on how closely they match my intuitions, but I am not supposed to adopt beliefs about quantum physics or cognitive psychology based on how closely they match my intuitions? What warrants this asymmetry?

Also, your FAQ lacks an answer to the most important question regarding utilitarianism: By what method could we aggregate the utilities of different persons?

comment by RyanCarey · 2011-04-27T00:40:42.820Z · LW(p) · GW(p)

Fairly good summary. I don't mind the FAQ structure. The writing style is good, and the subject matter suggests obvious potential to contribute to the upcoming Wiki Felicifia in some way. Now as good as the essay is, I have some specific feedback:

In section 2.2, I wonder if you could put your point more strongly...

you wrote: if morality is just some kind of metaphysical rule, the magic powers of the Heartstone should be sufficient to cancel that rule and make morality irrelevant. But the Heartstone, for all its legendary powers, is utterly worthless and in fact totally indistinguishable, by any possible or conceivable experiment, from a fake...

I would suggest: "Metaphysical rules are like a kind of heartstone that one can wear when making moral decisions. It is reputed to rule our moral considerations. But despite its reputation, the heartstone is utterly worthless and..."

If you're going to use a metaphor, you might as well get full value from it!

2.61: I understand the point you're making here. I couldn't agree with it more. Still, if you're trying to reduce the number of words standing between the reader and the later sections - as you should be - then this section is one you could consider abbreviating or removing. The whole phlogiston analogy is not obvious to a layperson.

Your line of thought seems to get somewhat derailed at 3.5. I don't quite understand why 'signalling' fits into 'assigning value to other people'.

4 is extremely good. The trolley discussions are reminiscent of Peter Unger's Living High and Letting Die. It's a shame it takes so long to get there.

Continued in this Felicifia post

comment by [deleted] · 2011-04-26T07:51:51.758Z · LW(p) · GW(p)

On this site it's probably just me, but I just can't (or won't) bring myself to assign a non-zero value to people I do not even know. Since the FAQ rests on the assumption that this is not the case, it's practically worthless to me. This could be as it should be, if people like me just aren't your target audience, in which case it would be helpful to have such a statement in the 3.1 answer.

Edit: Thinking about it, this is probably not the case. If all of the people except those I know were to die, I might feel sad about it. So my valuation of them might actually be non-zero.

Replies from: wedrifid
comment by wedrifid · 2011-04-26T07:58:29.216Z · LW(p) · GW(p)

On this site it's probably just me, but I just can't (or won't) bring myself to assigning a non-zero value to people I do not even know.

In that case let me introduce myself. I'm Cameron. Come from Melbourne. I like walks on the beach.

Do I get epsilon value?

comment by SilasBarta · 2011-04-26T01:52:41.692Z · LW(p) · GW(p)

OT: The format seemed familiar and then I looked back and found out it was because I had read your libertarian FAQ. Best anti-libertarian FAQ I've seen!

Replies from: timtyler, timtyler
comment by timtyler · 2011-04-26T07:25:52.085Z · LW(p) · GW(p)

http://www.raikoth.net/libertarian.html

The home page references on these pages should be links.

Replies from: NihilCredo
comment by NihilCredo · 2011-04-26T15:20:23.933Z · LW(p) · GW(p)

I've seen the link before, but I hadn't read it. The best part was probably the one about formaldehyde, I'll be stealing that in the future.

The most ironic part has got to be this sentence:

I think the biggest problem is that most people are used to working in a moral muddle and have insufficient appreciation for the clean, crisp beauty of utilitarianism.

in a freaking critique of libertarianism.

comment by Amanojack · 2011-05-02T10:29:05.746Z · LW(p) · GW(p)

Or, put more formally, when asked to select between several possible actions A, B and C, the most moral choice is the one that leads to the best state of the world by whatever standards you judge states of the world by.

This is the key definition, yet it doesn't seem to actually say anything. Moral choice = the choice that makes you happy. This is a rejection of ethics, not an ethical system. If it were a system at all, it would be called individual consequentialism, that is, "Forget all this ethics tripe."

Yet after that pretense of doing away with all ethics, you gently slip in collectivist consequentialism as a set of rule-of-thumb principles, but to my mind these can hardly serve as anything other than a political agenda. From then on you seem to equivocate between individual consequentialism and collective consequentialism. It would be a lot clearer - and shorter - if you made that distinction more explicit throughout.

comment by Paul Crowley (ciphergoth) · 2011-04-26T08:13:44.331Z · LW(p) · GW(p)

Isn't the difference between good and right where decision theory lives?