Deontology for Consequentialists

post by Alicorn · 2010-01-30T17:58:43.881Z · LW · GW · Legacy · 255 comments

Consequentialists see morality through consequence-colored lenses.  I attempt to prise apart the two concepts to help consequentialists understand what deontologists are talking about.

Consequentialism1 is built around a group of variations on the following basic assumption:

  Whether an act is right or wrong depends on something that happens after the act.

It's a very diverse family of theories; see the Stanford Encyclopedia of Philosophy article.  "Classic utilitarianism" could go by the longer, more descriptive name "actual direct maximizing aggregative total universal equal-consideration agent-neutral hedonic act2 consequentialism".  I could even mention less frequently contested features, like the fact that this type of consequentialism doesn't have a temporal priority feature or side constraints.  All of this is a very complicated bag of tricks for a theory whose proponents sometimes claim to like it because it's sleek and pretty and "simple".  But the bottom line is, to get a consequentialist theory, the basis of your judgment must be something that happens after the act you judge.

To understand deontology as anything but a twisted, inexplicable mockery of consequentialism, you must discard this assumption.

Deontology relies on things that do not happen after the act to judge that act.  This leaves facts about the time prior to the act, and the time during the act, to determine whether it is right or wrong.  These may include, but are not limited to:

  • The nature of the act itself (e.g. that it is a lie, or the breaking of a promise)
  • The intentions of the agent
  • Duties, oaths, and obligations binding on the agent
  • Counterfactuals about what would happen if the act were a general practice
  • The rights of those the act would affect
  • Instructions issued to the agent before the act (by an authority, a promise, or even voices in one's head)

Individual deontological theories will have different profiles, just like different consequentialist theories.  And some of the theories you can generate using the criteria above have overlap with some consequentialist theories3.  The ultimate "overlap", of course, is the "consequentialist doppelganger", which applies the following transformation to some non-consequentialist theory X:

  1. What would the world look like if I followed theory X?
  2. You ought to act in such a way as to bring about the result of step 1.

And this cobbled-together theory will be extensionally equivalent to X: that is, it will tell you "yes" to the same acts and "no" to the same acts as X.
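
For the programmatically inclined, here is a minimal sketch of the doppelganger move; the theory, the acts, and all function names below are invented for illustration, not part of any real ethical formalism:

```python
# Hypothetical illustration: theory X as a predicate over acts, and its
# consequentialist doppelganger.  Names and acts are made up for the example.

def theory_x_permits(act):
    """Some non-consequentialist theory X, e.g. a flat prohibition on lying."""
    return act != "tell a lie"

def doppelganger_permits(act):
    """Step 1: ask what the world would look like if the agent followed X.
    Step 2: call an act right iff it brings about that result, i.e. iff it is
    an act X would have had the agent perform."""
    return theory_x_permits(act)

# Extensional equivalence: identical verdicts on every act...
acts = ["tell a lie", "keep a promise", "return a lost wallet"]
assert all(theory_x_permits(a) == doppelganger_permits(a) for a in acts)
# ...even though one predicate is phrased in terms of consequences and the
# other is not.
```

The assertion passes on every act you feed it, which is exactly the point: agreement in verdicts is cheap, and tells you nothing about what the theory is about.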

But extensional definitions are terribly unsatisfactory.  Suppose4 that as a matter of biological fact, every vertebrate is also a renate and vice versa (that all and only creatures with spines have kidneys).  You can then extensionally define "renate" as "has a spinal column", because only creatures with spinal columns are in fact renates, and no creatures with spinal columns are in fact non-renates.  The two terms will tell you "yes" to the same creatures and "no" to the same creatures.

But what "renate" means intensionally has to do with kidneys, not spines.  To try to capture renate-hood with vertebrate-hood is to miss the point of renate-hood in favor of being able to interpret everything in terms of a pet spine-related theory.  To try to capture a non-consequentialism with a doppelganger commits the same sin.  A rabbit is not a renate because it has a spine, and an act is not deontologically permitted because it brings about a particular consequence.

If a deontologist says "lying is wrong", and you mentally add something that sounds like "because my utility function has a term in it for the people around believing accurate things.  Lying tends to decrease the extent to which they do so, but if I knew that somebody would believe the opposite of whatever I said, then to maximize the extent to which they believed true things, I would have to lie to them.  And I would also have to lie if some other, greater term in my utility function were at stake and I could only salvage it with a lie.  But in practice the best I can do is to maximize my expected utility, and as a matter of fact I will never be as sure that lying is right as I'd need to be for it to be a good bet."5... you, my friend, have missed the point.  The deontologist wasn't thinking any of those things.  The deontologist might have been thinking "because people have a right to the truth", or "because I swore an oath to be honest", or "because lying is on a magical list of things that I'm not supposed to do", or heck, "because the voices in my head told me not to"6.

But the deontologist is not thinking anything with the terms "utility function", and probably isn't thinking of extreme cases unless otherwise specified, and might not care whether anybody will believe the words of the hypothetical lie or not, and might hold to the prohibition against lying though the world burn around them for want of a fib.  And if you take one of these deontic reasons, and mess with it a bit, you can be wrong in a new and exciting way: "because the voices in my head told me not to, and if I disobey the voices, they will blow up Santa's workshop, which would be bad" has crossed into consequentialist territory.  (Nota bene: Adding another bit - say, "and I promised the reindeer I wouldn't do anything that would get them blown up" - can push this flight of fancy back into deontology again.  And then you can put it back under consequentialism again: "and if I break my promise, the vengeful spirits of the reindeer will haunt me, and that would make me miserable.")  The voices' instruction "happened" before the prospective act of lying.  The explosion at the North Pole is a subsequent potential event.  The promise to the reindeer is in the past.  The vengeful haunting comes up later.

A confusion crops up when one considers forms of deontology where the agent's epistemic state - real7 or ideal8 - is a factor.  It may start to look like the moral agent is in fact acting to achieve some post-action state of affairs, rather than in response to a pre-action something that has moral weight.  It may even look like that to the agent.  Per footnote 3, I'm ignoring expected utility "consequentialist" theories; however, in actual practice, the closest one can come to implementing an actual utility consequentialism is to deal with expected utility, because we cannot perfectly predict the effects of our actions.

The difference is subtle, and how it gets implemented depends on one's epistemological views.  Loosely, however: Suppose a deontologist judges some act X (to be performed by another agent) to be wrong because she predicts undesirable consequence Y.  The consequentialist sitting next to her judges X to be wrong, too, because he also predicts Y if the agent performs the act.  His assessment stops with "Y will happen if the agent performs X, and Y is axiologically bad."  (The evaluation of Y as axiologically bad might be more complicated, but this is all that goes into evaluating X qua X.)  Her assessment, on the other hand, is more complicated, and can branch in a few places.  Does the agent know that X will lead to Y?  If so, the wrongness of X might hinge on the agent's intention to bring about Y, or an obligation from another source on the agent's part to try to avoid Y which is shirked by performing X in knowledge of its consequences.  If not, then another option is that the agent should (for other, also deontic reasons) know that X will bring about Y: the ignorance of this fact itself renders the agent culpable, which makes the agent responsible for ill effects of acts performed under that specter of ill-informedness.
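
Here is one compressed, purely illustrative way to see the difference in shape; every field and function below is a made-up stand-in for the considerations just described, not a serious model of either theory:

```python
from dataclasses import dataclass

@dataclass
class Facts:
    """Hypothetical inputs about act X and predicted outcome Y; the field
    names are invented stand-ins for the considerations described above."""
    x_leads_to_y: bool            # the shared prediction that X brings about Y
    y_is_bad: bool                # the axiological evaluation of Y
    agent_knows: bool             # does the agent know that X will lead to Y?
    agent_intends_y: bool         # does the agent intend to bring about Y?
    shirks_duty_to_avoid_y: bool  # an obligation to avoid Y, shirked by doing X
    should_have_known: bool       # culpable ignorance that X leads to Y

def consequentialist_judges_wrong(f: Facts) -> bool:
    # The assessment stops here: Y will happen if the agent performs X, and Y is bad.
    return f.x_leads_to_y and f.y_is_bad

def deontologist_judges_wrong(f: Facts) -> bool:
    # The assessment branches on the agent's epistemic state, intentions, and duties.
    if not (f.x_leads_to_y and f.y_is_bad):
        return False  # only the Y-related branch of the example is modeled
    if f.agent_knows:
        return f.agent_intends_y or f.shirks_duty_to_avoid_y
    return f.should_have_known
```

The two functions may return the same verdict in many cases, but the deontologist's verdict routes through facts about the agent at or before the time of action, not through Y itself.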

 

1Having taken a course on weird forms of consequentialism, I now compulsively caveat anything I have to say about consequentialisms in general.  I apologize.  In practice, "consequentialism" is the sort of word that one has to learn by familiarity rather than definition, because any definition will tend to leave out something that most people think is a consequentialism.  "Utilitarianism" is a type of consequentialism that talks about utility (variously defined) instead of some other sort of consequence.

2Because it makes it dreadfully hard to write readably about consequentialism if I don't assume I'm only talking about act consequentialisms, I will only talk about act consequentialisms.  Transforming my explanations into rule consequentialisms or world consequentialisms or whatever other non-act consequentialisms you like is left as an exercise to the reader.  I also know that preferentism is more popular than hedonism around here, but hedonism is easier to quantify for ready reference, so if called for I will make hedonic rather than preferentist references.

3Most notable in the overlap department is expected utility "consequentialism", which says that maximizing expected utility is not only the best you can in fact do, but also what you absolutely ought to do.  Depending on how one cashes this out and who one asks, this may overlap so far as to not be a real form of consequentialism at all.  I will be ignoring expected utility consequentialisms for this reason.

4I say "suppose", but in fact the supposition may be actually true; Wikipedia is unclear.

5This is not intended to be a real model of anyone's consequentialist caveats.  But basically, if you interpret the deontologist's statement "lying is wrong" to have something to do with what happens after one tells a lie, you've got it wrong.

6As far as I know, no one seriously endorses "schizophrenic deontology".  I introduce it as a caricature of deontology that I can play with freely without having to worry about accurately representing someone's real views.  Please do not take it to be representative of deontic theories in general.

7Real epistemic state means the beliefs that the agent actually has and can in fact act on.

8Ideal epistemic state (for my purposes) means the beliefs that the agent would have and act on if (s)he'd demonstrated appropriate epistemic virtues, whether (s)he actually has or not.

255 comments

comment by Kaj_Sotala · 2010-01-31T16:10:14.672Z · LW(p) · GW(p)

This might be unfair to deontologists, but I keep getting the feeling that deontology is a kind of "beginner's ethics". In other words, deontology is the kind of ethical system you get once you build it entirely around ethical injunctions, which is entirely reasonable if you don't have the computing power to calculate the probable consequences of your actions with a very high degree of confidence. So you resort to what are basically cached rules that seem to work most of the time, and elevate those to axioms instead of treating them as heuristics.

And before I'm accused of missing the difference between consequentialism and deontology: no, I don't claim that deontologists actually consciously think that this is why they're deontologists. It does, however, seem like a plausible explanation of the (either developmental-psychological or evolutionary) reason why people end up adopting deontology.

Replies from: pjeby, Alicorn, Johnicholas, roystgnr
comment by pjeby · 2010-01-31T21:22:51.855Z · LW(p) · GW(p)

I don't claim that deontologists actually consciously think that this is why they're deontologists. It does, however, seem like a plausible explanation of the (either developmental-psychological or evolutionary) reason why people end up adopting deontology.

Indeed, I get the impression from the article that a deontologist is someone who makes moral choices based on whether they will feel bad about violating a moral injunction, or good for following it... and then either ignorantly or indignantly denies this is the case, treating the feeling as evidence of a moral judgment's truth, rather than as simply a cached response to prior experience.

Frankly, a big part of the work I do to help people is teaching them to shut off the compelling feelings attached to the explicit and implicit injunctions they picked up in childhood, so I'm definitely inclined to view deontology (at least as described by the article) as a hopelessly naive and tragically confused point of view, well below the sanity waterline... like any other belief in non-physical entities, rooted in mystery worship.

I also seem to recall that previous psychology research showed that that sort of thinking was something people naturally tended to grow out of as they got older (stages of moral reasoning), but then I also seem to recall that there was some more recent dispute about that, and accusations of gender bias in the research.

Nonetheless, it's evolutionarily plausible that we'd have a simple, injunction-based emotional trigger system used in early life, until our more sophisticated reasoning abilities come online. And my experience working with my own and other people's brains seems to support this: when broad childhood injunctions are switched off, people's behavior and judgments in the relevant area immediately become more flexible and sophisticated.

Unfortunately, the deontological view sounds like it's abusing higher reasoning simply to retroactively justify whatever (cached-feeling) injunctions are already in place, by finding more-sophisticated ways to spell the injunctions so they don't sound like they have anything to do with one's own past shames, guilts, fears, and other experiences. (What Robert Fritz refers to as an "ideal-belief-reality conflict", or what Shakespeare called, "The lady doth protest too much, methinks." I.e., we create high-sounding ideals and absolute moral injunctions specifically to conceal our personally-experienced failings or conflicts around those issues.)

Of course, I could just be missing the point of deontology entirely. But I can't seem to even guess at what that point would be, because everything I'm reading here seems to closely resemble something that I had to grow out of... making it really hard for me to take it seriously.

Replies from: JenniferRM, Seth_Goldin
comment by JenniferRM · 2010-02-03T01:44:55.670Z · LW(p) · GW(p)

Do you think it is likely that the emotional core of your claim was captured by the statement that "everything I'm reading here seems to closely resemble something that I had to grow out of... making it really hard for me to take it seriously"?

And then assuming this question finds some measure of ground.... how likely do you think it is that you would grow in a rewarding way by applying "your emotional reprogramming techniques" to this emotional reaction to an entry-level exposition on deontological modes of reasoning so that you could consider the positive and negative applications in a more dispassionate manner?

I haven't read into your writings super extensively, but from what I read you have quite a lot of practice doing something like "soul dowsing" to find emotional reactions. Then you trace them back to especially vivid "formative memories" which can then be rationally reprocessed using other techniques - the general goal being to allow clearer thinking about retrospectively critical experiences in a more careful manner and in light of subsequent life experiences. (I'm sure there's a huge amount more, but this is my gloss that's relevant to your post.)

I've never taken your specific suggestions along these lines into practice (for various reasons having mostly to do with opportunity costs) but the potential long term upside seems high and your post just seemed like a gorgeous opportunity to explore some of the longer term consequences of your suggested practices.

Replies from: pjeby
comment by pjeby · 2010-02-03T05:13:22.817Z · LW(p) · GW(p)

how likely do you think it is that you would grow in a rewarding way by applying "your emotional reprogramming techniques" to this emotional reaction to an entry-level exposition on deontological modes of reasoning so that you could consider the positive and negative applications in a more dispassionate manner?

That's an interesting question. I don't think an ideal-belief-reality conflict is involved, though, as an IBRC motivates someone to try to convince the "wrong" others of their error, and I didn't feel any particular motivation to convince deontologists that they're wrong! I included the disclaimer because I'm honestly frustrated by my inability to grok the concept of deontological morality except in terms of a feeling-driven injunctions model. (Had I been under the influence of an IBRC, I'd have been motivated to express greater certainty, as has happened occasionally in the past.)

So, if there's any emotional reaction taking place, I'd have to say it was frustration with an inability to understand something... and the intensity level was pretty low.

In contrast, I've had discussions here last year where I definitely felt an inclination to convince people of things, and at a much higher emotional intensity -- so I fixed them. This doesn't feel to me like something in the same category.

It might be interesting to check out the frustration-at-inability-to-understand thing at some point, but at the moment it's a bit like a hard-to-reproduce bug. I don't have a specific trigger thought I can use to call up the feeling of frustration, so I would have no way at the moment to know if I actually changed anything.

from what I read you have quite a lot of practice doing something like "soul dowsing" to find emotional reactions.

I've never heard that phrase before, and Google actually finds your comment as the third-highest ranking result for the phrase. Is it of your invention?

In any event, I don't believe I do anything that could be called dowsing. It would be more appropriate to refer to it as a form of behavior modification via memory alteration.

We know that memories are fluid and their interpretations can be altered by suggestively-worded questions - mindhacking can be thought of as a way of using this brain bug, to fix other brain bugs.

Then you trace them back to especially vivid "formative memories" which can then be rationally reprocessed using other techniques - the general goal being to allow clearer thinking

So far, so good, but this bit:

about retrospectively critical experiences in a more careful manner and in light of subsequent life experiences.

is beside the point. The purpose is that once you've altered the memory structure involved, your behavior -- both in the form of thought patterns and actions -- automatically changes to fall in line with the shift in the emotional relevance of what's stored in your memory. The memory goes from being an unconscious emotional trigger, to an easily forgotten irrelevancy.

Indeed, the only reason I even remember the content of what I changed the other day regarding my mother yelling at me, is because I make a deliberate practice of trying to retain such memories. If I don't write something down about what I change, the specific memories involved fade rapidly. I've had clients who within minutes or hours forgot they'd even had a problem in the first place.

Even trying to retain them in memory, the only record I now have of a change I made about two weeks ago, is the one I wrote down at the time. I remember remembering it, sure, but I don't remember it directly -- it's now more like a story I heard, than something that actually happened to me.

IOW, amnesia for the original issue or where it came from is a normal and expected side-effect of successfully changing an emotionally-charged memory into a merely factual anecdote about something that happened to you, once upon a time.

I've never taken your specific suggestions along these lines into practice (for various reasons having mostly to do with opportunity costs) but the potential long term upside seems high and your post just seemed like a gorgeous opportunity to explore some of the longer term consequences of your suggested practices.

The intended outcome is to provide a means of effective self-modification, one that does not require constant vigilance to monitor an ever-increasing number of biases or enforce an ever-increasing number of required behaviors. There are an enormous number of hardware biases that I cannot modify, but on a day-to-day basis, we are far more affected by our acquired, "software" biases anyway.

To give a concrete example, what I do can't modify the general tendency of humans to identify with ingroups and attack outgroups -- but it can remove entries from the "outgroup description table" in an individual's brain, one at a time!

This isn't much, but it's still something. I call it mindhacking, because that's really what it is: making use of the brain's bugs (e.g. malleable memory) to patch over some of its other bugs.

Hm. I think I just found a test stimulus that matches the feeling of frustration I had re: the deontology discussion. So I'll work through it "live" right now.

[edit: the rest was too long to fit, so I've split it off into a separate, child comment as a reply to this one]

Replies from: pjeby
comment by pjeby · 2010-02-03T06:01:52.112Z · LW(p) · GW(p)

[split from parent comment due to length]

Hm. I think I just found a test stimulus that matches the feeling of frustration I had re: the deontology discussion. So I'll work through it "live" right now.

I am frustrated at being unable to find common ground with what seems like abstract thoughts taken to the point of magical and circular thinking... and it seems the emotional memory is arguing theism and other subjects with my mother at a relatively young age... she would tie me in knots, not with clever rhetoric, but with sheer insanity -- logical rudeness writ large.

But I couldn't just come out and say that to her... not just because of the power differential, but also because I had no handy list of biases and fallacies to point to, and she had no attention span for any logically-built-up arguments.

Huh. No wonder I feel frustrated trying to understand deontology... I get the same, "I can't even understand this craziness well enough to be able to say it's wrong" feeling.

Okay, so what abilities did I lose to learned helplessness in this context? I learned that there was nothing I could say or do about logical craziness... which would certainly explain why I started and deleted my deontology comment multiple times before finally posting it... and didn't really try to achieve any common ground during it... I just took a victim posture and said deontology was nonsense. I also waited until I could "safely" say it in the context of someone else's comment, rather than directly addressing the post's author -- either to seek the truth or argue a clear position.

So, what do I want to replace that feeling of helplessness with? Would I rather be curious, so that I find out more about someone's apparently circular reasoning before dismissing it or fighting with it? How about compassionate, so I try to help the person find the flaw in their reasoning, if they're actually interested in the first place? What about amusement, so that I'm merely entertained and move on?

Just questioning these possibilities and bringing them into mind is already modifying the emotional response, since I've now had an (imagined) sensory experience of what it would be like to have those different emotions and behaviors in the circumstance. I can also see that I don't need to understand or persuade in such a circumstance, which feels like a relief. I can see that I didn't need to argue with my mother and frustrate myself; I could have just let her be who she was, and gone about my business.

So, this is a good time for a test. How do I feel about arguing theism with my mother? No big deal. How about deontology? Not a big deal either, but then it wasn't earlier, either, which is why I couldn't use it as a test directly. So the real test is the thought of "having to explain practical things to people hopelessly stuck in impractical thinking", which was reliably causing me to wrinkle my brow, hunch slightly, and sigh in frustration.

Now, instead of that, I get a mixed feeling of compassion/patience, felt lightly in the chest area... but there's still a hint of the old feeling, like a component is still there.

Ah... I see, I've dealt with only one need axis: connection/bonding, but not status/significance. A portion of the frustration was not being able to connect, and that portion I've resolved, but the other part was frustration with a status differential: the person making the argument is succeeding in lowering my status if I can't address their (nonsensical) argument.

Ugh. I hate status entanglements. I can't fix the brain's need for status, only remove specific entries from the "status threats" table. So let's see if we can take this one out.

I'm noticing that other memories of kids teasing or insulting me in school are coming up in connection with this -- the same fundamental circumstance of being in a conversation with no good answers, silence included. No matter what I do, I will lose face.

Ouch. This is a tough one. The rookie mistake here would be to think I have to be able to come up with better comebacks or something... that is, that I have to solve the problem in the outside world, in order to change my feelings. But if I instead change my feelings first on the inside, then my behavior will change to match.

So, what do I want to feel? Amused? Confident? As with other forms of learned helplessness, I am best off if I can feel the outcome emotions in advance of the outside world conforming to my preference. (That is, if I already feel the self-esteem I want from the interaction, before the interaction takes place, it is more likely that I will act in a way that results in a favorable interaction.)

So how would I feel if those kids were praising, instead of teasing or insulting? I would feel honored by the attention...

Boom! The memory just changed, popping into a new interpretation: the kids teasing and insulting me were giving me positive attention. This new interpretation drives a different feeling about it... along with a change to my feelings about certain discussions that have taken place on LW. ;-) Neither seems like a threat any more.

Similarly, thinking about being criticized in other contexts doesn't seem like a threat... I strangely feel genuinely honored that somebody took the time to tell me how they feel, even if I don't agree with it. Wow. Weird. ;-) (But then, as I'm constantly telling people, if your change doesn't surprise you in some way, you probably didn't really change anything.)

The change also sent me reeling for a moment, as suddenly the sense of loneliness and "outsider"-ness I had as a child begins to feel downright stupid and unnecessary in retrospect.

Wow. Deep stuff. Did not expect anything of this depth from your suggestion, JenniferRM. I think I will take the rest of my processing offline, as it's been increasingly difficult to type about this while doing it... trying to explain the extra context/purpose stuff has been kind of distracting anyway, while I was in the middle of doing things.

Whew. Anyway, I hope that was helpfully illustrative, nonetheless.

Replies from: Alicorn, JenniferRM
comment by Alicorn · 2010-02-03T14:18:46.832Z · LW(p) · GW(p)

This comment has done more than anything else you've written to convince me that you aren't generally talking nonsense.

Replies from: pjeby
comment by pjeby · 2010-02-03T16:26:58.483Z · LW(p) · GW(p)

Thank you, that's very kind of you to say.

Overnight, I continued working on that thread of thoughts, and dug up several related issues. One of them was that I've also not been nearly as generous with giving positive attention and appreciation as I would've liked others to be. So I made a change to fix that this morning, and I actually felt genuine warmth and gratitude in response to your comment... something that I generally haven't felt, even towards very positive comments here in the past.

So really, thank you, as it was indeed both kind and generous of you to say it.

comment by JenniferRM · 2010-02-05T09:51:15.119Z · LW(p) · GW(p)

Thanks for the response.

That was way more than I was hoping to get back and went in really interesting directions - the corrections about the way the "reprocessing" works and the limits of reprocessing were helpful. The detail about the way vivid memories can no longer be accessed through the same "index" and become more like stories was totally unexpected and fascinating.

Also, that was very impressive in terms of just... raw emotional openness, I guess. I don't know about other readers, but it stirred up my emotions just reading about your issues as you worked through them. I have a hard time imagining the courage it would take for me to make similar emotional disclosures in a place like this if they were my own. I'm a little frightened by how much trust you gave me I think? But I'm very grateful too.

(And yes, "soul dousing" is a term I made up for the post for the sake of trying to summarize things I've read by you in the past in my own words to see if I was hearing what you were trying to say.)

Replies from: pjeby
comment by pjeby · 2010-02-05T17:26:28.229Z · LW(p) · GW(p)

I have a hard time imagining the courage it would take for me to make similar emotional disclosures in a place like this if they were my own.

Not as much as you might think. Bear in mind that by the time anybody reads anything I've written about something like that, it's no longer the least bit emotional for me -- it has become an interesting anecdote about something "once upon a time".

If it was still emotional for me after I made the changes, I would have more trouble sharing it, here or even with my subscribers. In fact, the reason I cut off the post where I did was because there was some stuff I wasn't yet "done" with and wanted to work on some more.

Likewise, it's a lot easier to admit to your failures and shortcomings if you are acutely aware that 1) "you" aren't really responsible, and 2) you can change. It's easier to face the truth of what you did wrong, if you know that your reaction will be different in the future. It takes out the "feeling of being a bad person" part of the equation.

comment by Seth_Goldin · 2010-02-01T02:53:49.244Z · LW(p) · GW(p)

Yes! Both you and Kaj Sotala seem right on the money here. Deontology falls flat. A friend once observed to me that consequentialism is a more challenging stand to take because one needs to know more about any particular claim to defend an opinion about it.

I know it's been discussed here on Less Wrong, but Jonathan Haidt's research is really great, and relevant to this discussion. Professor Haidt's work has validated David Hume's assertions that we humans do not reason to our moral conclusions. Instead, we intuit about the morality of an action, and then provide shoddy reasoning as justification one way or the other.

comment by Alicorn · 2010-01-31T16:15:25.346Z · LW(p) · GW(p)

Deciding whether a rule "works" based on whether it usually brings about good consequences, and following the rules that do and calling that "right", is called rule consequentialism, not deontology.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-01-31T16:36:50.291Z · LW(p) · GW(p)

That's if you do it consciously, which I wasn't suggesting. My suggestion was that this would be a mainly unconscious process, similar to the process of picking up any other deeply-rooted preference during childhood / young age.

comment by Johnicholas · 2010-02-02T12:39:42.758Z · LW(p) · GW(p)

How about this formulation:

Suppose that humans' aggregate utility function includes both path-independent ("ends") terms, and path-dependent ("means") terms.

A (pseudo) deontologist in this scenario is someone who is concerned that all this talk about "achieving the best possible state of affairs" means that the path-dependent terms may be being neglected.

If you think about it, any fixed "state of affairs" is undesirable, simply because it is FIXED. I don't know for sure, but I think almost everything that you value is actually a path unfolding in time - possibilities might include: falling in love, learning something new, freedom/self-determination, growth and change.

comment by roystgnr · 2012-04-23T16:15:15.745Z · LW(p) · GW(p)

"Deontologists are just elevating intermediate heuristics to terminal values" is true, but also misleading and unfair unless you prepend "Consequentialists and " first. After all, it seems quite likely that joy, curiosity, love, and all the other things we value are also merely heuristics that evolution found to be useful for its terminal goal of "Make more mans. More mans!" But if our terminal values happen to match some other optimizing process' instrumental values, so what? That's an interesting observation, not a devastating criticism.

comment by Douglas_Knight · 2010-02-01T02:56:30.497Z · LW(p) · GW(p)

Sometimes I believe that

  • consequentialism calls possible worlds good
  • deontology calls acts good
  • virtue ethics calls people good

Of course, everyone uses "good" to label all three, but the difference is what is fundamental. cf Richard Chappell

Replies from: MichaelVassar, Alicorn, thomblake, TheAncientGeek, Morendil
comment by MichaelVassar · 2010-02-02T07:17:44.960Z · LW(p) · GW(p)

Possible worlds, however, encompass acts and people.

Replies from: wedrifid
comment by wedrifid · 2010-02-02T07:26:50.479Z · LW(p) · GW(p)

To be fair, deontology encompasses possible worlds in a similar way to consequentialism encompassing acts.

Replies from: MichaelVassar
comment by MichaelVassar · 2010-02-02T17:35:43.985Z · LW(p) · GW(p)

I don't think so, but I'd be happy to hear why you say that.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-02-02T17:49:15.068Z · LW(p) · GW(p)

I don't know whether this bears directly on this point, but I am reminded of a discussion in Toby Ord's PhD thesis on how just as the consequences of an action propagate forwards in time, rightness propagates backwards. If it is right to pull the lever, it is right to push the button that pulls the lever, right to throw the ball that pushes the button that pulls the lever and so on.

This struck me as an argument for consequentialism in itself, since this observation is a natural consequence of consequentialism and doesn't follow so obviously from deontology, but perhaps this kind of thinking is built in in a way I don't see.

Replies from: DanielVarga
comment by DanielVarga · 2010-02-03T03:16:10.257Z · LW(p) · GW(p)

Can consequentialism handle the possibility of time-travel? If not, then something may be wrong with consequentialism, regardless of whether time-travel is actually possible or not.

One of the intuitions leading me to deontology is exactly the time-symmetry of physics. Almost by definition, the rightness of an act can only be perfectly decided by an outside observer of the space-time continuum. (I could call the observer God, but I don't want to be modded down by inattentive mods.) Now, maybe I have read too much Huw Price and Gary Drescher, but I don't think this fictional outside observer would care too much about the local direction of the thermodynamic arrow of time.

Replies from: pengvado
comment by pengvado · 2010-02-03T05:29:28.668Z · LW(p) · GW(p)

I don't see any problem whatsoever with time travel + consequentialism. As a consequentialist, I have preferences about the past just as much as about the future. But I don't know how to affect the past, so if necessary I'll settle for optimizing only the future.

The ideal choice is: argmax over actions A of utility( what happens if I do A ). Time travel may complicate the predicting of what happens (as if that wasn't hard enough already), but doesn't change the form of the answer.
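
A throwaway sketch of that rule, with placeholder actions, an invented outcome model, and made-up utilities (none of them meant to be anyone's actual values):

```python
# argmax over actions A of utility(what happens if I do A), with toy stand-ins.

def what_happens_if(action):
    """Placeholder prediction of the whole history (past and future) given an action."""
    return {"press_button": "lever gets pulled", "walk_away": "nothing changes"}[action]

def utility(outcome):
    """Placeholder preferences over outcomes."""
    return {"lever gets pulled": 10.0, "nothing changes": 0.0}[outcome]

actions = ["press_button", "walk_away"]
best_action = max(actions, key=lambda a: utility(what_happens_if(a)))
# best_action == "press_button"
```

Time travel would only make what_happens_if harder to compute; the form of the answer stays the same.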

Btw, my favorite model of time travel is described here (summary: locally ordinary physical causality plus closed timelike curves is still consistent). Causal decision theory probably chokes on it, but that's nothing new, and has to do with a bad formalization of "if I do A", not due to the focus on outcomes.

comment by Alicorn · 2010-02-01T02:58:32.772Z · LW(p) · GW(p)

This seems like a pretty good first pass classification to me.

comment by thomblake · 2010-02-01T14:27:51.274Z · LW(p) · GW(p)

I think you're right on, in broad brushstrokes.

I've actually diagrammed this for people, showing the [person]->[action]->[result] system, and haven't seen a philosopher object to the rough characterization.

comment by TheAncientGeek · 2014-02-11T20:59:15.955Z · LW(p) · GW(p)

That's about my theory: different theories of morality are talking about different (but interconnected) things... that is, what is a desirable outcome, or not; what is a culpable act, or not.

comment by Morendil · 2010-02-01T07:41:55.475Z · LW(p) · GW(p)

...and I'm wondering where contractualism fits in there.

Replies from: Douglas_Knight
comment by Douglas_Knight · 2010-02-01T08:04:15.800Z · LW(p) · GW(p)

Contracts tend to be about acts. Social contract theories, including Scanlon's, sound deontological to me.

comment by orthonormal · 2010-01-30T19:25:11.878Z · LW(p) · GW(p)

My issue with deontology-as-fundamental is that, whenever someone feels compelled to defend a deontological principle, they invariably end up making a consequentialist argument.

E.g. "Of course lying is wrong, because if lying were the general habit, communication would be impossible" or variants thereof.

The trouble, it seems to me, is that consequentialist moralities are easier to ground in human preferences (current and extrapolated) than are deontological ones, which seem to beg for a Framework of Objective Value to justify them. This is borne out by the fact that it is extremely difficult to think of a basic deontological rule which the vast majority of people (or the vast majority of educated people, etc.) would uphold unconditionally in every hypothetical.

If someone is going to argue that their deontological system should be adopted on the basis of its probable consequences, fine, that's perfectly valid. But in that case, as in the story of Churchill, we've already established what they are, we're just haggling over the price.

Replies from: Jack, Alicorn, TheAncientGeek
comment by Jack · 2010-01-31T05:15:54.706Z · LW(p) · GW(p)

This is borne out by the fact that it is extremely difficult to think of a basic deontological rule which the vast majority of people (or the vast majority of educated people, etc.) would uphold unconditionally in every hypothetical.

Afaict this is true for any ethical principle, consequentialist ones included. I'm skeptical that there are unconditional principles.

comment by Alicorn · 2010-01-30T19:26:58.708Z · LW(p) · GW(p)

E.g. "Of course lying is wrong, because if lying were the general habit, communication would be impossible" or variants thereof.

Dude. "Counterfactuals." Fourth thing on the bulleted list, straight outta Kant.

The trouble, it seems to me, is that consequentialist moralities are easier to ground in human preferences (current and extrapolated) than are deontological ones, which seem to beg for a Framework of Objective Value to justify them.

I take exception to your anthropocentric morality!

This is borne out by the fact that it is extremely difficult to think of a basic deontological rule which the vast majority of people (or the vast majority of educated people, etc.) would uphold unconditionally in every hypothetical.

And if we lived on the Planet of the Sociopaths, what then? Ethics leap out a window and go splat?

If someone is going to argue that their deontological system should be adopted on the basis of its probable consequences

See here for what this is like.

Replies from: pengvado, Jack
comment by pengvado · 2010-01-30T19:55:45.306Z · LW(p) · GW(p)

"Counterfactuals." Fourth thing on the bulleted list, straight outta Kant.

Any talk about consequences has to involve some counterfactual. Saying "outcome Y was a consequence of act X" is an assertion about the counterfactual worlds in which X isn't chosen, as well as those where it is. So if you construct your counterfactuals using something other than causal decision theory, and you choose an act (now) based on its consequences (in the past), is that another overlap between consequentialism and deontology?

Replies from: Alicorn
comment by Alicorn · 2010-01-30T20:01:09.440Z · LW(p) · GW(p)

I can't parse your comment well enough to reply intelligently.

Replies from: loqi
comment by loqi · 2010-01-30T21:42:31.778Z · LW(p) · GW(p)

What I think pengvado is getting at is that the concept of "consequence" is derived from the concept of "causal relation", which itself appears to require a precise notion of "counterfactual".

I read Newcomb's paradox as a counter-example to the idea that causality must operate forward in time. Essentially, one-boxing is choosing an act in the present based on its consequences in the past. This smells a bit like a Kantian counterfactual to me, but I haven't read Kant.

Replies from: Alicorn
comment by Alicorn · 2010-01-30T21:48:46.768Z · LW(p) · GW(p)

There are many accounts of causation; some of them work in terms of counterfactuals and some don't. (I don't have many details; I've never taken a class on causation.) There is considerable disagreement about the extent to which causation must operate forward in time, especially in things like discussions of free will.

I haven't read Kant.

Don't. It's a miserable pastime.

Replies from: loqi
comment by loqi · 2010-01-30T22:24:29.822Z · LW(p) · GW(p)

I'm pretty satisfied with Pearl's formulation of causality, it seems to capture everything of interest about the phenomenon. An account of causality that involves free will sounds downright unsalvageable, but I'd be interested in pointers to any halfway decent criticism of Pearl's approach.

Thanks for affirming my suspicions regarding Kant.

comment by Jack · 2010-01-31T05:13:32.990Z · LW(p) · GW(p)

Dude. "Counterfactuals." Fourth thing on the bulleted list, straight outta Kant.

I wouldn't characterize Kant this way. He isn't thinking about a possible world in which the maxim is universalized, whether a maxim can or cannot be universalized has to do with the form of the maxim, nothing else. It might be the case that he sneaks in some counter-factual thinking but it isn't his intention to make his ethics rely on it. It wouldn't be a priori otherwise.

Replies from: Alicorn
comment by Alicorn · 2010-01-31T14:54:21.256Z · LW(p) · GW(p)

No two people can agree on how to characterize Kant, but it is a legitimate interpretation that I have heard advanced by a PhD-having philosopher that you can think about that formulation of the CI as referring to a possible world where the maxim is followed like a natural law.

Replies from: bogus, Breakfast, Jack
comment by bogus · 2010-01-31T15:06:10.211Z · LW(p) · GW(p)

you can think about that formulation of the CI as referring to a possible world where the maxim is followed like a natural law.

This is what Kant seems to do in practice whenever he illustrates normative application of the CI. But his notion of a priori does appear to preclude this. Then again, Kant also managed to develop Newtonian physics a priori, so maybe he just knew something we don't.

comment by Breakfast · 2010-01-31T16:30:31.041Z · LW(p) · GW(p)

What has never stopped bewildering me is the question of why anyone should consider such a possible world relevant to their individual decision-making. I know Kant has some... tangled, Kantian argument regarding this, but does anyone who isn't a die-hard Kantian have any sensible reason on hand for considering the counterfactual "What if everyone did the same"?

Everyone doing X is not even a remotely likely consequence of me doing X. Maybe this is to beg the question of consequences mattering in the first place. But I suppose I have no idea what use deontology is if it doesn't boil down to consequentialism at some level... or, particularly, I have no idea what use it is if it makes appeals to impossibly unlikely consequences like "Everyone lying all the time," instead of likely ones.

Replies from: Alicorn, bogus, Kaj_Sotala
comment by Alicorn · 2010-01-31T16:41:58.380Z · LW(p) · GW(p)

Everyone doing X is not even a remotely likely consequence of me doing X.

AAAAAAAAAAAAH

*ahem* Excuse me.

I meant: Wow, have I ever failed at my objective here! Does anyone want me to keep trying, or should I give up and just sob quietly in a corner for a while?

Replies from: Breakfast
comment by Breakfast · 2010-01-31T16:55:39.171Z · LW(p) · GW(p)

Sorry. But then I said:

Maybe this is to beg the question of consequences mattering in the first place.

And added,

But I suppose I have no idea what use deontology is if it doesn't boil down to consequentialism at some level.

?

Replies from: Alicorn
comment by Alicorn · 2010-01-31T16:58:09.736Z · LW(p) · GW(p)

Yeah, if you have no idea what "use" deontology is unless it's secretly just tarted-up consequentialism, I have failed.

Replies from: Breakfast
comment by Breakfast · 2010-01-31T17:07:57.121Z · LW(p) · GW(p)

Huh? To be fair, I don't think you were setting out to make the case for deontology here. All I am saying about its "use" is that I don't see any appeal. I think you gave a pretty good description of what deontologists are thinking; the North Pole - reindeer - haunting paragraph was handily illustrative.

Anyway, I think Kant may be to blame for employing arguments that consider "what would happen if others performed similar acts more frequently than they actually do". People say similar things all the time -- "What if everyone did that?" -- as though there were a sort of magical causal linkage between one's individual actions and the actions of the rest of the world.

Replies from: JenniferRM, Alicorn
comment by JenniferRM · 2010-02-01T00:16:28.830Z · LW(p) · GW(p)

There is a "magical causal connection" between one's individual actions and the actions of the rest of the world.

Other people will observe you acting and make reasonable inferences on the basis of their observation. Depending on your scientific leanings, it's plausible to suppose that these inferences have been so necessary to human survival that we may have evolutionary optimizations that make moral reasoning more effective than general reasoning.

For example, if they see you "get away with" an act they will infer that if they repeat your action they will also avoid reprisal (especially if you and they are in similar social reference classes). If they see you act proudly and in the open they will infer that you've already done the relevant social calculations to determine that no one will object and apply sanctions. If they see you defend the act with words, they will assume that they can cite you as an authority and you'll support them in a factional debate in order not to look like a hypocrite... and so on ad nauseam.

There are various reasons people might deny that they function as role models in society. Perhaps they are hermits? Or perhaps they are not paying attention to how social processes actually happen? Or it may also be the case that they are momentarily confabulating excuses because they've been caught with blood on their hands?

Not that I'm a big deontologist, but I think deontologists say things that are interesting, worthwhile, and seem unlikely to be noticed from other theoretical perspectives. Several apologists for deontology who I've known from a distance (mostly in speech and debate contexts) were super big brains.

Their pitch, to get people into the relevant deliberative framework, frequently involved an epistemic argument at the beginning. Basically they pointed out that it was silly to make moral judgments with instantaneous behavioral consequences based on things you can't see or measure or know in the present. There is more to it than that (like there are nice ways to update and calculate deontic moral theories based on morality estimates, subsequent acts, and independent "retrospective moral feelings" about how the things turned out) but we're just in the comment section, and I'd rather not have my fourth post in this community spend a lot of time articulating the upsides a moral theory that I don't "fully endorse" :-)

Replies from: SilasBarta, Breakfast
comment by SilasBarta · 2010-02-01T23:37:53.595Z · LW(p) · GW(p)

Very insightful comment (and the same for your follow-up). I don't have much to add except shamelessly link a comment I found on Slashdot that it reminded me of. (I had also posted it here.) For those who don't want to click the link, here goes:

I also disagree that our society is based on mutual trust. Volumes and volumes of laws backed up by lawyers, police, and jails show otherwise.

That's called selection/observation bias. You're looking at only one side of the coin.

I've lived in countries where there's a lot less trust than here. The notion of returning an opened product to a store and getting a full refund is based on trust (yes, there's a profit incentive, and some people do screw the retailers [and the retailers their customers -- SB], but the system works overall). In some countries I've been to, this would be unfeasible: Almost everyone will try to exploit such a retailer.

When a storm knocks out the electricity and the traffic lights stop working, I've always seen everyone obeying the rules. I doubt it's because they're worried about cops. It's about trust that the other drivers will do likewise. Simply unworkable in other places I've lived in.

I've had neighbors whom I don't know receive UPS/FedEx packages for me. Again, trust. I don't think they're afraid of me beating them up.

There are loads of examples. Society, at least in the US, is fairly nice and a lot of that has to do with a common trust.

Which is why someone exploiting that trust is a despised person.

What's interesting is that if you follow the Slashdot link, the parent of the comment replies and says (to paraphrase) that his neighborhood is of the broken window kind, where people don't act like that. The person I quoted above then says,

And because of it, your neighborhood sucks, and mine doesn't. ... Suggesting people become mistrustful will likely turn my neighborhood into one like yours.

Which ties in with what you said about the cascading effect of behavior as others notice it.

Please continue to post here!

comment by Breakfast · 2010-02-01T02:38:45.788Z · LW(p) · GW(p)

I'm newish here too, JenniferRM!

Sure, I have an impact on the behaviour of people who encounter me, and we can even grant that they are more likely to imitate/approve of how I act than disapprove and act otherwise -- but I likely don't have any more impact on the average person's behaviour than anyone else they interact with does. So, on balance, my impact on the behaviour of the rest of the world is still something like 1/6.5 billion.

And, regardless, people tend to invoke this "What if everyone ___" argument primarily when there are no clear ill effects to point out, or when the effects are private, in my experience. If I were to throw my litter in someone's face, they would go "Hey, asshole, don't throw your litter in my face, that's rude." Whereas, if I tossed it on the ground, they might go "Hey, you shouldn't litter," and if I pressed them for reasons why, they might go "If everyone littered here this place would be a dump." This also gets trotted out in voting, or in any other similar collective action problem where it's simply not in an individual's interests to 'do their part' (even if you add in the 1/6.5-billion quantity of positive impact they will have on the human race by their effect on others).

"You may think it was harmless, but what if everyone cheated on their school exams like you did?" -- "Yeah, but, they don't; it was just me that did it. And maybe I have made it look slightly more appealing to whoever I've chosen to tell about it who wasn't repelled by my doing so. But that still doesn't nearly get us to 'everyone'."

Replies from: JenniferRM
comment by JenniferRM · 2010-02-01T10:00:54.950Z · LW(p) · GW(p)

Err... I suspect our priors on this subject are very different.

From my perspective you seem to be quibbling over an unintended technical meaning of the word "everyone" while not tracking consequences clearly. I don't understand how you think littering is a coherent example of how people's actions do not affect the rest of the world via social signaling. In my mind, littering is the third most common example of a "signal crime" after window breaking and graffiti.

The only way your comments are intelligible to me is that you are enmeshed in a social context where people regularly free ride on community goods or even outright ruin them... and they may even be proud to do so as a sign of their "rationality"?!? These circumstances might provide background evidence that supports what you seem to be saying - hence the inference.

If my inference about your circumstances is correct, you might try to influence your RL community, as an experiment, and if that fails an alternative would be to leave and find a better one. However, if you are in such a context, and no one around you is particularly influenced by your opinions or actions, and you can't get out of the context, then I agree that your small contribution to the ruin of the community may be negligible (because the people near to you are already ruining the broader community, so their "background noise" would wash out your potentially positive signal). In that case, rule breaking and crime may be the only survival tactic available to you, and you have my sympathy.

In contrast, when I picture littering, I imagine someone in a relatively pristine place who throws the first piece of garbage. Then they are scolded by someone nearby for harming the community in a way that will have negative long term consequences. If the litterbug walks away without picking up their own litter, the scolder takes it upon themselves to pick up the litter and dispose of it properly on behalf of the neighborhood.

In this scenario, the cost of littering is borne, personally and directly, by the scolder who picks up the garbage, who should follow this up by telling other people about it, badmouthing the person who littered and claiming credit for scolding and cleaning up after them. This would broadcast and maintain positive norms within the community.

I prefer using norms in part because the major alternatives I'm aware of are either (1) letting the world "fall to shit" or else (2) fixing problems using government solutions. If positive social customs can do the job instead, that's a total win to me :-)

comment by Alicorn · 2010-01-31T17:23:18.781Z · LW(p) · GW(p)

I wasn't trying to make the case for deontology, no - just trying to clear up the worst of the misapprehensions about it: namely, that it's just consequentialism in Kantian clothing. It's a whole other thing that you can't properly understand without getting rid of some consequentialist baggage.

There does not have to be a causal linkage between one's individual actions and those of the rest of the world. (Note: my ethics don't include a counterfactual component, so I'm representing a generalized picture of others' views here.) It's simply not about what your actions will cause! A counterfactual telling you that your action is un-universalizeable can be informative to a deontic evaluation of an act even if you perform the act in complete secrecy. It can be informative even if the world is about to end and your act will have no consequences at all beyond being the act it is. It can be informative even if you'd never have dreamed of performing the act were it a common act type (in fact, especially then!). The counterfactual is a place to stop. It is, if justificatory at all, inherently justificatory.

Replies from: Breakfast
comment by Breakfast · 2010-01-31T17:32:05.231Z · LW(p) · GW(p)

A counterfactual telling you that your action is un-universalizeable can be informative to a deontic evaluation of an act even if you perform the act in complete secrecy. It can be informative even if etc.

Okay, I get that. But what does it inform you of? Why should one care in particular about the universalizability of one's actions?

I don't want to just come down to asking "Why should I be moral?", because I already think there is no good answer to that question. But why this particular picture of morality?

Replies from: Alicorn
comment by Alicorn · 2010-01-31T17:41:47.440Z · LW(p) · GW(p)

I don't have an arsenal with which to defend the universalizeability thing; I don't use it, as I said. Kant seems to me to think that performing only universalizeable actions is a constraint on rationality; don't ask me how he got to that - if I had to use a CI formulation I'd go with the "treat people as ends in themselves" one.

But why this particular picture of morality?

It suits some intuitions very nicely. If it doesn't suit yours, fine; I just want people to stop trying to cram mine into boxes that are the wrong shape.

Replies from: Breakfast
comment by Breakfast · 2010-01-31T18:00:36.101Z · LW(p) · GW(p)

It suits some intuitions very nicely.

I suppose that's about as good as we're going to get with moral theories!

Well, I hope I haven't caused you too much corner-sobbing; thanks for explaining.

comment by bogus · 2010-01-31T16:50:38.102Z · LW(p) · GW(p)

Kant's point is not that "everyone doing X" matters; it's that ethical injunctions should be indexically invariant, i.e. "universal". If an ethical injunction is affected by where in the world you are, then it's arguably no ethical injunction at all.

Wei_Dai and EY have done some good work in reformulating decision theory to account for these indexical considerations, and the resulting theories (UDT and TDT) have some intuitively appealing features, such as cooperating in the one-shot PD under some circumstances. Start with this post.

Replies from: Breakfast
comment by Breakfast · 2010-01-31T17:02:07.281Z · LW(p) · GW(p)

I'm (obviously) no Kant scholar, but I wonder if there is any possible way to flesh out a consistent and satisfactory set of such context-invariant ethical injunctions.

For example, he infamously suggests not lying to a murderer who asks where your friend is, even if you reasonably expect him to go murder your friend, because lying is wrong. Okay -- even if we don't follow our consequentialist intuitions and treat that as a reductio ad absurdum for his whole system -- that's your 'not lying' principle satisfied. But what about your 'not betraying your friends' principle? How many principles have we got in the first place, and how can we weigh them against one another?

Replies from: bogus
comment by bogus · 2010-01-31T17:18:48.138Z · LW(p) · GW(p)

For example, he infamously suggests not lying to a murderer who asks where your friend is

Actually, Kant only defended the duty not to lie out of philanthropic concerns. But if the person inquired of was actually a friend, then one might reasonably argue that you have a positive duty not to reveal his location to the murderer, since to do otherwise would be inconsistent with the implied contract between you and your friend.

To be fair, you might also have a duty to make sure that your friend is not murdered, and this might create an ethical dilemma. But ethical dilemmas are not unique to deontology.

ETA: It has also been argued that Kant's reasoning in this case was flawed since the murderer engages in a violation of a perfect duty, so the maxim of "not lying to a known murderer" is not really universalizable. But the above reasoning would go through if you replaced the murderer with someone else whom you wished to keep away from your friend out of philanthropic concerns.

Replies from: Jack, Breakfast
comment by Jack · 2010-01-31T19:51:11.587Z · LW(p) · GW(p)

Actually, Kant only defended the duty not to lie out of philanthropic concerns.

This just isn't true. Lying is one of the examples used to explain the universalization maxim. It is forbidden in all contexts. Can't right now, but I'll come back with cites.

Replies from: bogus
comment by bogus · 2010-01-31T20:17:19.167Z · LW(p) · GW(p)

Actually I'm going to save you the effort and provide the cite myself:

... if we were to be at all times punctiliously truthful we might often become victims of the wickedness of others who were ready to abuse our truthfulness. If all men were well-intentioned it would not only be a duty not to lie, but no one would do so because there would be no point in it. But as men are malicious, it cannot be denied that to be punctiliously truthful is often dangerous... if I cannot save myself by maintaining silence, then my lie is a weapon of defense.

(Lectures on Ethics)

Specifically, in the Metaphysics of Morals, Kant states that "not suffer[ing our] rights to be trampled underfoot by others with impunity" is a perfect duty of virtue.

Replies from: Douglas_Knight, Jack
comment by Douglas_Knight · 2010-02-01T04:28:11.301Z · LW(p) · GW(p)

I don't see how lying to the murderer fails the test you quote, yet Kant does forbid it elsewhere:

Truthfulness in statements that cannot be avoided is the formal duty of man to everyone, however great the disadvantage that may arise therefrom for him or for any other.

ETA: perhaps it's OK to lie out of love of money, but not out of love of man?

Added, years later: by "love of money," I mean that Kant says that it is OK to lie to the thief, but not to the murderer.

comment by Jack · 2010-02-01T22:44:26.269Z · LW(p) · GW(p)

We're allowed self-defense and punishment, according to Kant (indeed, it is required). It may, for example, be acceptable to lie to a murderer if he lies to you, since we are obligated to punish those who violate the CI. (EDIT: It could also mean that we don't have to say anything to murderers; we aren't obligated to tell the truth in every situation, but we are obligated to tell the truth in every case where we do say something.)

That said, I'm not sure exactly what you mean by the original line "Kant only defended the duty not to lie out of philanthropic concerns". It could mean, "Kant defended the duty not to lie, but his reasons for this duty were mere philanthropic ones." It could also mean "With respect to truth-telling, Kant only says we have a duty when we might prefer to lie for philanthropic reasons." Both interpretations are wrong. Here is a quote from Kant's explicit tackling of the issue in the appropriately titled "On a supposed right to lie from philanthropy." Apologies for the long quote but I don't want to have to debate context.

Truthfulness in statements that one cannot avoid is a human being's duty to everyone, however great the disadvantage to him or to another that may result from it... If I falsify... I... do wrong in the most essential part of duty in general by such falsification... that is, I bring it about, as far as I can, that statements (declarations) in general are not believed, and so too that all rights which are based on contracts come to nothing and lose their force; and this is a wrong inflicted upon humanity generally... For a lie always harms another, even if not another individual, nevertheless humanity generally, inasmuch as it makes the source of right unusable. ---- "On a supposed right to lie from philanthropy", Berliner Blätter, September 1797

comment by Breakfast · 2010-01-31T17:36:17.725Z · LW(p) · GW(p)

Actually, Kant only defended the duty not to lie out of philanthropic concerns.

Huh! Okay, good to know. ... So not-lying-out-of-philanthropic-concerns isn't a mere context-based variation?

comment by Kaj_Sotala · 2010-02-03T16:53:52.639Z · LW(p) · GW(p)

I thought of one possible reason that would make deontology "justifiable" in consequentialist terms. Those classic "my decision has negligible effect by itself, but if everyone made the same decision, it would be good/bad" situations, like "should I bother voting" or "is it okay if I shoplift". If everyone were consequentialists, each might individually decide that the effect of their action is negligible, and thus end up not voting or deciding that shoplifting was okay, with disastrous effects for society. In contrast, if more people were deontologists, they'd do the right thing even if the effect of their individual decision probably didn't change anything.

comment by Jack · 2010-02-01T23:01:15.186Z · LW(p) · GW(p)

Erm. I agree with the PhD-having philosopher that you can think about the formulation that way. But my PhD-having philosophers are pretty clear that even if Kant ends up implicitly relying on this it can't be what he is really trying to argue since it obviously precludes a priori knowledge of the CI. And if you can't know it a priori then Kant's entire edifice falls apart.

And below, Breakfast is wondering why one should consider possible worlds relevant to decision making and says "I know Kant has some... tangled, Kantian argument regarding this". But of course Kant has no such argument! Because that isn't his argument. The argument for the CI stems from Kant's conception of freedom (that it is self-governance and that the only self-governance we could have a priori comes from the form of self-governance itself). The argument fails, I think, but it has nothing to do with counterfactuals. So when you say "Counterfactuals, straight out of Kant", it seems a lot of people who haven't read Kant are going to be misled.

I know you're just using Kant illustratively, but maybe qualify it as "some formulations of Kant"?

comment by TheAncientGeek · 2014-02-11T21:09:36.943Z · LW(p) · GW(p)

For us hybridists, it is the function of consequentialism to justify rules, and the function of rules to justify sanctions.

Replies from: Nornagest
comment by Nornagest · 2014-02-11T21:30:17.805Z · LW(p) · GW(p)

That seems to lead to a logical cycle. What is the function of sanctions? To modify the behavior of other agents. Why do we want to modify the behavior of other agents? Because we find some actions undesirable. Why do we find them undesirable? Because of their consequences, or because they violate established rules...

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-02-11T21:34:58.377Z · LW(p) · GW(p)

Not all cycles are bad.

comment by Jayson_Virissimo · 2010-01-31T20:33:21.526Z · LW(p) · GW(p)

As someone who is on the fence between noncognitivism and deontic/virtue ethics, I seem to be witnessing a kind of incommensurability of ethical theories going on in this thread. It is almost like Alicorn is trying to show us the rabbit, but all we are seeing is the duck and talking about the "rabbit" as if it is some kind of bad metaphor for a duck.

On Less Wrong, consequentialism isn't just another ethical theory that you can swap in and out of our web of belief. It seems to be something much more central and interwoven. This might be due to the fact that some disciplines like economics implicitly assume some kind of vague utilitarianism and so we let certain ethical theories become more central to our web of belief than is warranted.

I predict that Alicorn would have similar problems trying to get people on Less Wrong to understand Aristotelian physics, since it is really closer to common sense biology than Einsteinian physics (which I am guessing is very central to our web of belief).

Replies from: wnoise
comment by wnoise · 2010-02-01T19:16:14.540Z · LW(p) · GW(p)

You're confusing "understand" and "accept as useful or true".

Alicorn's post was a good summary of deontology. I understand it, I just don't agree with it. Richard Garfinkle's SF novel Celestial Matters, in addition to being a great read, also elucidates some consequences of Aristotelian physics, increasing the intuition of the reader. I certainly think that Garfinkle understands Aristotelian physics, and just as assuredly is unwilling to use it for orbital calculations in practice (though quite capable of doing the same for fiction purposes).

EDIT: reading further in the comments, I do indeed see plenty of people who don't understand deontic ethics. But just your comment about "not being able to swap in or out" does not at all demonstrate lack of understanding.

EDIT: I'd also appreciate a comment by the person who downvoted me about their reasoning (or anyone else who disagrees with the substance). I obviously think this is a fairly straightforward point -- understanding and accepting are two different things. Wanting to swap a framework in or out of our web of belief is not purely about understanding it, but about accepting it. Related, certainly (it really helps to understand something in order to accept it), but not the same.

comment by RichardChappell · 2010-01-31T16:17:26.649Z · LW(p) · GW(p)

Deontology relies on things that do not happen after the act judged to judge the act. This leaves facts about times prior to and the time during the act to determine whether the act is right or wrong.

I'm not convinced that this 'backward-looking vs. forward-looking' contrast really cuts to the heart of the distinction. Note that consequentialists may accept an 'holistic' axiology according to which whether some future event is good or bad depends on what has previously happened. (For a simple example, retributivists may hold that it's positively good when those who are guilty of heinous crimes suffer. But then in order to tell whether we should relieve Bob's suffering, we need to look backwards in time to see whether he's a mass-murderer.) It strikes me as misleading to characterize this as involving a form of "overlap" with deontological theories. It's purely consequentialist in form; it merely has a more complex axiology than (say) hedonism.

The distinction may be better characterised in terms of the relative priority of 'the right' and 'the good'. Consequentialists take goodness (i.e. desirability, or what you ought to want) as fundamental, and thus have a teleological conception of action: the point of acting is to achieve some prior goal (which, again, needn't be purely forward-looking). Deontologists reverse this. They begin with a conception of how one ought to act (e.g. in ways that would be universalizable, or justifiable to others, or that respect everyone's rights), and only subsequently derive the doppelganger's conception of the good (as you put it: "what would the world look like if I followed theory X").

An interesting consequence of this analysis is that so-called "rule consequentialism" turns out to be a borderline case: the good (what to want) is partly, but not entirely, prior to the right (how to act). I explain this in more detail in my post Analyzing Consequentialisms.

Replies from: Alicorn
comment by Alicorn · 2010-01-31T16:22:13.165Z · LW(p) · GW(p)

As I specify in my first footnote, consequentialism is wickedly hard to define. It may be that the teleological aspect is more important than the subsequence aspect, but either one leaves some things to be desired, and my post was already awfully long without going into "teleology".

I like your article, though!

comment by Zack_M_Davis · 2010-01-31T18:17:08.753Z · LW(p) · GW(p)

The deontologist wasn't thinking any of those things. The deontologist might have been thinking "because people have a right to the truth", or "because I swore an oath to be honest", or "because lying is on a magical list of things that I'm not supposed to do", or heck, "because the voices in my head told me not to". But the deontologist is not thinking anything with the terms "utility function" [...]

Right, but what about Dutch book-type arguments? Even if I agree that lying is wrong and not because of its likely consequences, I still have to make decisions under uncertainty. The reason for trying to bludgeon everything into being a utility function is not that "the rightness of something depends on what happens subsequently." It's that, well, we have these theorems that say that all coherent decisionmaking processes have to satisfy these-and-such constraints on pain of being value-pumped. Anything you might say about rights or virtues is fine qua moral justification, but qua decision process, it either has to be eaten by decision theory or it loses.

Replies from: Alicorn
comment by Alicorn · 2010-01-31T18:18:33.114Z · LW(p) · GW(p)

If you think that lying is just wrong, can't you just... not lie? I don't see the problem here.

Replies from: Zack_M_Davis, wedrifid, komponisto
comment by Zack_M_Davis · 2010-01-31T18:53:54.779Z · LW(p) · GW(p)

The problem with unbreakable rules is that you're only allowed to have one. Suppose I have a moral duty to tell the truth no matter what and a moral duty to protect the innocent no matter what. Then what do I do if I find myself in a situation where the only way I can protect the innocent is by lying?

More generally, real life finds us in situations where we are forced to make tradeoffs, and furthermore, real life is continuous in a way that is not well-captured by qualitative rules. What if I think I have a 98% chance of protecting the innocent by lying?---or a 51% chance, or a 40% chance? What if I think a statement is 60% probable but I assert it confidently; is that a "lie"? &c., &c.

"Lying is wrong because I swore an oath to be honest" or "Lying is wrong because people have a right to the truth" may be good summaries of more-or-less what you're trying to do and why, but they're far too brittle to be your actual decision process. Real life has implementation details, and the implementation details are not made out of English sentences.

Replies from: Eliezer_Yudkowsky, Douglas_Knight, Alicorn, Unknowns
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-31T18:58:36.223Z · LW(p) · GW(p)

The problem with unbreakable rules is that you're only allowed to have one.

I second the question. Is there a standard reply in deontology? The standard reply of a consequentialist, of course, is the utility function.

Replies from: wedrifid, None, Tyrrell_McAllister, drnickbone, Unknowns
comment by wedrifid · 2010-01-31T20:07:49.959Z · LW(p) · GW(p)

Is there a standard reply in deontology? The standard reply of a consequentialist, of course, is the utility function.

I don't know whether there is a standard reply in deontology, but the appropriate reply is to use a function equivalent to the utility function used by a consequentialist.

  • Take the concept of the utility function
  • Rename it to something suitably impressive (but I'll just go with the bland 'deontological decision function')
  • Replace 'utility of this decision' with 'rightness of this decision'.
  • A primitive utility function may include a term for 'my bank balance'. A primitive deontological decision function would have a term for "Not telling a lie".

Obviously, the 'deontological decision function' sacrifices the unbreakable criterion. This is appropriate when making a fair comparison between consequentialist and deontological decisions. The utility function likewise sacrifices absolute reliance on any one particular desideratum in order to accommodate all the others.

For the sake of completeness I'll iterate what seem to be the only possible approaches that actually allow having multiple unbreakable rules.

1) Only allow unbreakable rules that never contradict each other. This involves making the rules more complex. For example:

  • Always rescue puppies.
  • Never lie, except if it saves the life of puppies.
  • Do not commit adultery unless you are prostituting yourself in order to donate to the (R)SPCA.

Such a system results in an approximation of the continuous deontological decision function.

2) Just have a single unbreakable meta-rule. For example:

  • Always do the most right thing in the deontological decision function. Or,
  • Always maximise utility.

These responses amount to "Hack a deontological system with unbreakable rules to work around the spirit of either 'unbreakable' or 'deontological'" and I include them only for completeness. My main point is that a deontological approach can be practically the same as the consequentialist 'utility function' approach.
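
To make the structural analogy concrete, here is a minimal sketch (the terms, weights, and actions are all invented for illustration): the two approaches can be written as the same kind of scoring function, differing only in what the terms refer to.

```python
# Minimal sketch: a consequentialist utility function and a 'deontological
# decision function' share the same shape -- a weighted sum of terms -- but
# score different features. All terms and weights below are made up.

def utility(action):
    # consequentialist terms: features of the expected outcome
    return (1.0 * action["expected_bank_balance_change"]
            + 50.0 * action["expected_lives_saved"])

def rightness(action):
    # deontological terms: features of the act itself, weighted rather than unbreakable
    return (60.0 * (0 if action["is_a_lie"] else 1)
            + 50.0 * action["expected_lives_saved"])

actions = [
    {"name": "lie to save a life", "expected_bank_balance_change": 0,
     "expected_lives_saved": 1, "is_a_lie": True},
    {"name": "stay silent", "expected_bank_balance_change": 0,
     "expected_lives_saved": 0, "is_a_lie": False},
]

print(max(actions, key=utility)["name"])    # the utility function picks the lie
print(max(actions, key=rightness)["name"])  # this weighting of 'don't lie' picks silence
```

With a smaller weight on 'not telling a lie' the two functions would agree; the point is only that the machinery is the same.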

Replies from: wedrifid
comment by wedrifid · 2010-02-01T02:49:22.012Z · LW(p) · GW(p)

It disappoints me that this comment is currently at -1. Of all the comments I have made in the last week this was probably my favourite and it remains so now.

If "the standard reply of a consequentialist is the utility function" then the analogous reply of a deontologist is something very similar. It is unreasonable to compare consequentialism with a utility function with a deontological system in which rules are 'unbreakable'. The latter is an absurd caricature of deontological reasoning that is only worth mentioning at all because deontologists are on average less inclined to follow their undeveloped thoughts through to the natural conclusion.

Was my post downvoted because...?

  1. Someone disagrees that a 'function' system applies to deontology just as it applies to consequentialism.
  2. I have missed the fact that this conclusion is universally apparent and I am stating the obvious.
  3. I included an appendix to acknowledge the consequences of the 'universal rule' system and elaborate on what a coherent system will look like if this universality cannot be let go.
Replies from: Alicorn
comment by Alicorn · 2010-02-01T02:55:20.217Z · LW(p) · GW(p)

I haven't voted on your comment. I like parts of it, but found other parts very hard to interpret, to the point where they might have altered the reading of the parts I like, and so I was left with no way to assess its content. If I had downvoted, it would be because of the confusion and a desire to see fewer confusing comments.

Replies from: wedrifid
comment by wedrifid · 2010-02-01T03:10:21.045Z · LW(p) · GW(p)

Thank you. A reasonable judgement. Not something that is trivial to rectify but certainly not an objection to object to.

comment by [deleted] · 2010-02-01T08:42:25.967Z · LW(p) · GW(p)

I'm pretty sure the standard reply is, "Sometimes there is no right answer." These are rules for classifying actions as moral or immoral, not rules that describe the behavior of an always moral actor. If every possible action (including inaction) is immoral, then your actions are immoral.

comment by Tyrrell_McAllister · 2010-01-31T19:56:03.724Z · LW(p) · GW(p)

The problem with unbreakable rules is that you're only allowed to have one.

I second the question. Is there a standard reply in deontology?

In my experience, deontologists treat this as a feature rather than a bug. The absolute necessity that the rules never conflict is a constraint, which, they think, helps them to deduce what those rules must be.

comment by drnickbone · 2012-04-30T19:56:01.965Z · LW(p) · GW(p)

This assumes that deontological rules must be unbreakable, doesn't it? That might be true for Kantian deontology, but probably isn't true for Rossian deontology or situation ethics.

We can, for instance imagine a deontological system (moral code) with three rules A, B and C. Where A and B conflict, B takes precedence; where B and C conflict, C takes precedence; where C and A conflict, A takes precedence (and there are no circumstances where rules A, B and C all apply together). That would give a clear moral conclusion in all cases, but with no unbreakable rules at all.

True, there would be a complex, messy rule which combines A, B and C in such a way as not to create exceptions, but the messy rule is not itself part of the moral code, so it is not strictly a deontological rule.
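
As a minimal sketch of the scheme just described (the rule contents and the applicability test are assumed purely for illustration), the cyclic precedence can be resolved mechanically, with no single unbreakable rule:

```python
# Sketch of the example above: three rules A, B, C with cyclic precedence.
# By stipulation, all three never apply at once, so every case gets a verdict.
PRECEDENCE = {("A", "B"): "B", ("B", "C"): "C", ("A", "C"): "A"}

def verdict(applicable):
    """Return the rule that decides the case, given the set of applicable rules."""
    applicable = set(applicable)
    assert applicable and applicable != {"A", "B", "C"}
    if len(applicable) == 1:
        return applicable.pop()
    return PRECEDENCE[tuple(sorted(applicable))]

print(verdict({"A", "B"}))  # B (B takes precedence over A)
print(verdict({"B", "C"}))  # C
print(verdict({"C", "A"}))  # A -- every rule both wins and loses somewhere
```

Each rule loses to some other rule, so none is unbreakable, yet every stipulated case has a determinate answer.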

comment by Unknowns · 2010-01-31T19:16:05.207Z · LW(p) · GW(p)

All unbreakable rules in a deontological moral system are negative; you would never have one saying "protect the innocent." But you can have "don't lie" and "don't murder" and so on.

And no, if you answer the question truthfully, failing to protect the innocent, they don't count that as murdering (unless there was some other choice that you could have made without either lying or failing to protect the person.)

Replies from: Alicorn
comment by Alicorn · 2010-01-31T20:32:46.508Z · LW(p) · GW(p)

This isn't necessarily the case. You can have positive requirements in a deontic system.

Replies from: Unknowns
comment by Unknowns · 2010-02-01T06:25:16.132Z · LW(p) · GW(p)

Yes, but not "unbreakable" ones. In other words there will be exceptions on account of some other positive or negative requirement, as in the objections above.

comment by Douglas_Knight · 2010-02-01T08:12:05.755Z · LW(p) · GW(p)

The problem with unbreakable rules is that you're only allowed to have one.

"Allowed"?

It is quite common for moral systems found in the field to have multiple unbreakable rules and for subscribers to be faced with the bad moral luck of having to break one of them. The moral system probably has a preference on the choice, but it still condemns the act and the person.

comment by Alicorn · 2010-01-31T20:36:11.864Z · LW(p) · GW(p)

A really clever deontic theory either doesn't permit those conflicts, or has a meta-rule that tells you what to do when they happen. (My favored solution is to privilege the null action.)

A deontic theory might take into account your probability assessments, or ideal probability assessments, regarding the likely outcome of your action.

And of course if you're going to fully describe what a rule means, you have to define things in it like "lie", just as to fully describe utilitarianism you have to define "utility".

comment by Unknowns · 2010-01-31T19:17:43.785Z · LW(p) · GW(p)

It's true that the detail of real life is an objection to deontology, but it is also an objection to every other moral system, for much the same reasons.

comment by wedrifid · 2010-01-31T18:59:06.073Z · LW(p) · GW(p)

If you think that lying is just wrong, can't you just... not lie? I don't see the problem here.

Yes. It may or may not cause the extinction of humanity but if you want to 'just... not lie' you can certainly do so.

comment by komponisto · 2010-01-31T18:54:30.825Z · LW(p) · GW(p)

Can a deontologist still care about consequences?

Suppose you believe that lying is wrong for deontic reasons. Does it follow that we should program an AI never to lie? If so, can a consequentialist counter with arguments about how that would result in destroying the universe and (assuming those arguments were empirically correct) have a hope of changing your mind?

Replies from: Alicorn
comment by Alicorn · 2010-01-31T20:31:26.531Z · LW(p) · GW(p)

A deontologist may care about consequences, of course. I think whether and how much you are responsible for the lies of an AI you create probably depends on the exact theory. And of course knowingly doing something to risk destroying the world would almost certainly be worse than lying-by-proxy, so such arguments could be effective.

comment by RobinZ · 2010-01-30T18:43:25.644Z · LW(p) · GW(p)

+10 karma for you!

I have a bit of a negative reaction to deontology, but upon consideration the argument would be equally applicable to consequentialism: the prescriptions and proscriptions of a deontological morality are necessarily arbitrary, and likewise the desideratum and disdesideratum (what is the proper antonym? Edit: komponisto suggests "evitandum", which seems excellent) of a consequentialist morality are necessarily arbitrary.

...which makes me wonder if the all-atheists-are-nihilists meme is founded in deontological intuitions.

Replies from: komponisto, Breakfast
comment by komponisto · 2010-01-30T18:48:10.708Z · LW(p) · GW(p)

desideratum...(what is the proper antonym?)

"Evitandum"?

Sounds even better in the plural: "The evitanda of the theory..."

Replies from: Alicorn, Eliezer_Yudkowsky, RobinZ
comment by Alicorn · 2010-01-30T18:50:22.580Z · LW(p) · GW(p)

Oh, I like that, it's adorable.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-30T19:21:52.186Z · LW(p) · GW(p)

I initially associated this to "evidence" but I suppose it would be easy enough to learn.

comment by RobinZ · 2010-01-30T18:55:43.654Z · LW(p) · GW(p)

...how do you pronounce that? And what is the etymology? The only obvious source I can see is "evil", which is Germanic rather than Latinate.

(A carping complaint, to be sure, but even if I fold on this one, I still maintain that many mismatched combinations - particularly "ombudsperson" - are abominations unto good taste.)

Replies from: komponisto, Alicorn
comment by komponisto · 2010-01-30T19:00:20.109Z · LW(p) · GW(p)

What Alicorn said. "Evitare" is Latin for "to avoid"; if "X-are" is a Latin verb meaning "to Y", then an "X-andum" is a "thing to be Y-ed".

Replies from: ABranco
comment by ABranco · 2010-03-31T05:03:08.774Z · LW(p) · GW(p)

"Avoidum" (pl. "avoida") could be an alternative — but "evitandum", having more syllables, does sound better.

Replies from: JohnWittle
comment by JohnWittle · 2013-04-08T17:03:20.960Z · LW(p) · GW(p)

I never came across that word during my four years of studying latin. What declension is it?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-04-08T20:14:17.610Z · LW(p) · GW(p)

From my two years of studying Latin I know that evitandum is second declension neuter gender, being a gerund. In Latin the word can also be an adjective, in which case it is second declension and inflected for all genders.

Cf. the English word "inevitable" = unavoidable.

Replies from: JohnWittle
comment by JohnWittle · 2013-04-08T21:52:54.689Z · LW(p) · GW(p)

err, I meant 'Avoidum'

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2013-04-08T22:46:49.113Z · LW(p) · GW(p)

Ok, that's just a made-up mish-mash of English and Latin.

comment by Alicorn · 2010-01-30T18:57:31.840Z · LW(p) · GW(p)

From "evitable", which is the opposite of "inevitable" - so it means "thing to be avoided".

Replies from: RobinZ
comment by RobinZ · 2010-01-30T18:59:17.870Z · LW(p) · GW(p)

All is clear! Approved!

(Would have edited in, but no natural way to do so and preserve thread of conversation.)

(Edit: Have edited into the parenthetical.)

comment by Breakfast · 2010-01-31T16:38:38.909Z · LW(p) · GW(p)

Certainly, many theists immediately lump atheism, utilitarianism and nihilism together. There are heaps of popular depictions framing utilitarian reasoning as being too 'cold and calculating' and not having 'real heart'. Which follows from atheists 'not having any real values' and from accepting the nihilistic, death-obsessed Darwinian worldview, etc.

comment by DanielLC · 2010-05-20T06:30:16.657Z · LW(p) · GW(p)

I can perfectly understand the idea that lying is fundamentally bad, not just because of its consequences. My problem is with how that doesn't imply that something else can be bad because it leads to other people lying.

The only way I can understand it is that deontology is fundamentally egoist. It's not hedonist; you worry about things besides your well-being. But you only worry about things in terms of yourself. You don't care if the world descends into sin so long as you are the moral victor. You're not willing to murder one Austrian to save him from murdering six million Jews.

Am I missing something?

Replies from: Kevin, Alicorn, Jack
comment by Kevin · 2010-05-20T07:32:31.944Z · LW(p) · GW(p)

Hitler may not be the best example since it's not obvious to me that Hitler's death would have resulted in fewer lives lost during the genocides of the 20th century, because a universe without Hitler would have had a more powerful USSR.

Replies from: Strange7, Vladimir_M
comment by Strange7 · 2010-08-05T05:43:18.439Z · LW(p) · GW(p)

For that matter, Germany could've picked a different embittered, insane would-be dictator. They weren't in short supply.

comment by Vladimir_M · 2010-05-20T17:59:40.826Z · LW(p) · GW(p)

I don't think your assessment is accurate, because of the following facts:

  • USSR actually ended up more powerful, enlarged, and with greater prestige in 1945 -- for the exact reason that Germany, its main strategic rival, went on to pursue a suicidal attack against it under Hitler.

  • The German-Soviet war itself opened the opportunities for genocidal and near-genocidal campaigns by both sides, especially Germans, and it would have to have been an awfully large decrease in Soviet-perpetrated genocide to balance that.

  • Not counting the deaths related to the military operations, the overwhelming number of killings done by Stalin had already been finished by 1941. After that, the situation under him was of course awfully bad, but there was nothing like the enormous, Holocaust-scale mass killing projects he undertook in the 1930s.

comment by Alicorn · 2010-05-20T06:33:05.698Z · LW(p) · GW(p)

If I know he's going to murder six million Jews, that's relevant. If I stab him because he took my parking space and for this reason, he does not go on to murder six million Jews, I have achieved no moral victory.

Replies from: cousin_it
comment by cousin_it · 2010-05-20T06:41:58.813Z · LW(p) · GW(p)

I'm not sure this scenario enlightens me. It seems to be about available information rather than deontologism vs consequentialism. From the way you describe it, both the deontologist and the consequentialist will murder Hitler if they know he's going to become Hitler, and won't if they don't.

Replies from: Alicorn
comment by Alicorn · 2010-05-20T06:47:28.618Z · LW(p) · GW(p)

The consequentialist will not in fact kill Hitler if they don't know he's Hitler, but it's part of their theory that they should.

Replies from: None, cousin_it
comment by [deleted] · 2011-07-24T14:51:05.981Z · LW(p) · GW(p)

That seems like a fairly useless part of consequential theory. In particular, when retrospecting about one's previous actions, a consequentialist should give more weight to the argument "yes, he turned out to become Hitler, but I didn't know that, and the prior probability of the person who took my parking space being Hitler is so low I would not have been justified in stabbing him for that reason" than "oh no, I've failed to stab Hitler". It's just a more productive thing to do, given that the next person who takes the consequentialist's parking space is probably not Stalin.

Real-life morality is tricky. But when playing a video game, I am a points consequentialist: I believe that the right thing to do in the video game is that which maximizes the amount of points I get at the end.

Suppose one of my options is randomly chosen to lead to losing the game. I analyze the options and choose the one that has the lowest probability of having been chosen as the losing one. Turns out, I was unlucky and lost the game. Does that make my choice any less the right one? I don't believe that it does.
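
To illustrate with made-up numbers (the probabilities below are purely hypothetical), the choice is evaluated by the information available at decision time, not by the unlucky draw:

```python
# Hypothetical probabilities that each option is the secretly chosen losing one.
p_losing = {"A": 0.5, "B": 0.3, "C": 0.2}

best = min(p_losing, key=p_losing.get)  # pick the option least likely to be the loser
print(best, 1 - p_losing[best])         # C, with a 0.8 chance of being safe

# Even if C happens (20% of the time) to be the loser, it was still the choice
# that maximized the chance of winning given what was known when choosing.
```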

comment by cousin_it · 2010-05-20T07:01:23.056Z · LW(p) · GW(p)

Same for the consequentialist, no?

comment by Jack · 2010-05-20T15:53:20.448Z · LW(p) · GW(p)

In Kantian deontology the actions of someone else can generate a positive obligation. In particular, you are obligated to punish those who violate the Categorical Imperative. You're definitely obligated to punish Hitler after the fact. Beforehand is trickier (but this is more a metaphysics of time issue than an ethical issue; you could probably make a case for timeless punishment under Kantian deontology).

Replies from: DanielLC
comment by DanielLC · 2010-08-05T05:29:41.784Z · LW(p) · GW(p)

My point wasn't punishment. If I were to kill an innocent man to keep Hitler from getting into power, that would still save him from murdering millions, and there'd be a net decrease in the murder of innocent people.

If anything, this version is even more clear cut, since it's not clear if you're really saving someone if they end up dead.

The only reason you can justify not doing it is if you think it's more important for you not to be a murderer than for Hitler not to be one.

Replies from: lessdazed
comment by lessdazed · 2011-07-23T21:22:15.468Z · LW(p) · GW(p)

murdering six million Jews.

My point wasn't punishment.

Your excellent comment would have been improved by instead saying "murdering 11 million in concentration camps" or better yet "beginning a war that led to over fifty million dead".

Many consequentialist theories might have a special term in which it is bad (for people, to be sure) if a culture or a people is targeted and destroyed. If Poland had ~35 million people, including ~3 million Jews, and Hitler killed ~6 million Polish civilians, including nearly all the Jews, he did worse than if he had killed 6 million at random, as he killed ~6 million innocent people and one innocent people (sic). In another sense, there may have been similar suffering among Polish Jews and non-Jews (perhaps more aggregate suffering among non-Jews if the non-fatal suffering of the other Poles is included, but as a point of historical fact the average suffering of a Jew before death was probably greater than the average Pole's before death 1939-45). Perhaps killing a people isn't very bad, and our condemnation of it has to do with how hard it is to kill a people without killing people, the second of which is the important bad thing.

Similarly, how much the mode of death and the capacity of the dead matter varies greatly between consequentialism and deontology, but a singular mention of the murdered somewhat indicates deontological thinking. How much worse is a murder than a killing (of a volunteer soldier? Of a draftee? Of a weapons manufacturing worker? Of a power plant worker? Of an apprentice florist who has nearly reached draft age)?

Broadly speaking, when generalizing over consequentialism I wouldn't focus on those murdered, but of those who died. Doing so would have more clearly indicated that your point wasn't punishment.

Replies from: DanielLC
comment by DanielLC · 2011-07-23T21:49:29.950Z · LW(p) · GW(p)

Your excellent comment would have been improved by instead saying "murdering 11 million in concentration camps" or better yet "beginning a war that led to over fifty million dead".

My reference to historical events would have been slightly more complete? Referencing historical events isn't important. I wasn't even so much referencing the event as referencing that it always gets referenced. Hitler just happened to end up in the middle of a popular thought experiment.

Many consequentialist theories might have a special term in which it is bad (for people, to be sure) if a culture or a people is targeted and destroyed....

So? My point is that, even if you accept that a given action is inherently bad, if it's bad for anyone to do it, it may be worthwhile for you to do it. It only works out as deontology if you assume that actions can only be bad if you're the one doing them. More specific thought experiments can show that it only works if they're only bad if you're doing them right at this very moment.

Broadly speaking, when generalizing over consequentialism I wouldn't focus on those murdered, but of those who died.

If it were just bad to die, no deontologist would argue that there's anything wrong with killing one guy to keep him from killing another. I was assuming for the sake of argument that it was just murder that was bad.

comment by sark · 2010-02-03T12:18:00.144Z · LW(p) · GW(p)

Deontology treats morality as terminal. Consequentialism treats morality as instrumental.

Is this a fair understanding of deontology? Or is this looking at deontology through a consequentialist lens?

Replies from: Alicorn, DanielLC
comment by Alicorn · 2010-02-03T14:10:44.264Z · LW(p) · GW(p)

This looks okay as an interpretation of deontology to me. This may be because it sounds like a nice thing to say about it, and a comparatively mean thing to say about consequentialism, but I can't claim to get consequentialism on an emotional level, so I guess I don't know what's considered mean to say about it.

Replies from: wedrifid, sark
comment by wedrifid · 2010-02-03T14:29:37.753Z · LW(p) · GW(p)

This may be because it sounds like a nice thing to say about it, and a comparatively mean thing to say about consequentialism

For comparison, as I read that it sounded like a mean thing to say about deontology and a neutral thing to say about consequentialism. This may be because I have internalized consequentialist thinking so consequentialist related things sound better. Or maybe it is because I naturally associate 'morality as terminal' with 'lies you tell people and stuff you try to force other people to do'.

Replies from: Alicorn
comment by Alicorn · 2010-02-03T14:34:52.623Z · LW(p) · GW(p)

That's very interesting. If it happened in one direction - if morality being instrumental started out sounding good to you and bad to me - that could explain a lot of the apparent disconnect between consequentialists and non-.

comment by sark · 2010-02-03T15:13:38.927Z · LW(p) · GW(p)

This looks okay as an interpretation of deontology to me. This may be because it sounds like a nice thing to say about it, and a comparatively mean thing to say about consequentialism

I'm a consequentialist, and treating morality as terminal seems to me like missing the point of morality entirely. I'm glad I got it right that deontologists think that way. But I can't understand why you would consider treating morality as terminal correct.

As a consequentialist I would say: "Morality concerns what you care about, not the fact that you care."

What does the deontologist think of that?

Replies from: Alicorn, Jack
comment by Alicorn · 2010-02-03T15:19:10.802Z · LW(p) · GW(p)

I'd say it doesn't matter if you care: you should do what's right anyway. Even psychopaths should do what's right.

Replies from: AndyWood, Jack, sark
comment by AndyWood · 2010-02-03T15:54:49.056Z · LW(p) · GW(p)

Does the question of "why" simply not enter into a deontologist's thinking? My mind seems to leap to complete the statement "you should do what's right" with something along the lines of "because society will be more harmonious".

Also, I wish that psychopaths would do what's right, but what seems to be missing is any force of persuasion. And that seems important.

Replies from: Alicorn
comment by Alicorn · 2010-02-03T15:58:45.334Z · LW(p) · GW(p)

We can have "whys", but they look a little different. Mine look like "because people have rights", mostly. Or "because I am a moral agent", looking from the other direction.

Replies from: AndyWood
comment by AndyWood · 2010-02-03T16:41:03.439Z · LW(p) · GW(p)

I think one reason that so many people here are consequentialists is that these kinds of ideas do not hit bottom. I think LW attracts people who like to chase explanations down as far as possible to foundations. Do you yourself apply reductionism to morality?

Replies from: Alicorn
comment by Alicorn · 2010-02-03T16:59:54.991Z · LW(p) · GW(p)

"Reductionism" is one of those words that can mean about seventeen things. I think rights/moral agency both drop out of personhood, which is a function of cognitive capacities, which I take to be software instantiated on entirely physical bases - does that count as whatever you had in mind?

Replies from: AndyWood
comment by AndyWood · 2010-02-03T17:07:48.570Z · LW(p) · GW(p)

From Wikipedia:

an approach to understand the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things

The bold part is plenty close enough to what I have in mind. Your response definitely counts. Next question that immediately leaps to mind is: how do you determine which formulations of morality best respect the personhood and rights of others, and best fulfill your duty as a moral agent?

Replies from: Alicorn
comment by Alicorn · 2010-02-03T17:28:53.138Z · LW(p) · GW(p)

My theory isn't finished, so I can't present you with a list, or anything, but I've just summarized what I've got so far here.

comment by Jack · 2010-02-03T17:06:03.324Z · LW(p) · GW(p)

Have you given a description of your own ethical philosophy anywhere? If not, could you summarize your intuitions/trajectory? Doesn't need to be a complete theory or anything, I'm just informally polling the non-utilitarians here.

(Any other non-utilitarians who see this feel free to respond as well)

Replies from: Alicorn
comment by Alicorn · 2010-02-03T17:28:13.348Z · LW(p) · GW(p)

I feel like I've summarized it somewhere, but can't find it, so here it is again (it is not finished, I know there are issues left to deal with):

Persons (which includes but may not be limited to paradigmatic adult humans) have rights, which it is wrong to violate. For example, one I'm pretty sure we've got is the right not to be killed. This means that any person who kills another person commits a wrong act, with the following exceptions: 1) a rights-holder may, at eir option, waive any and all rights ey has, so uncoerced suicide or assisted suicide is not wrong; 2) someone who has committed a contextually relevant wrong act, in so doing, forfeits eir contextually relevant rights. I don't yet have a full account of "contextual relevance", but basically what that's there for is to make sure that if somebody is trying to kill me, this might permit me to kill him, but would not grant me license to break into his house and steal his television.

However, even once a right has been waived or forfeited or (via non-personhood) not had in the first place, a secondary principle can kick in to offer some measure of moral protection. I'm calling it "the principle of needless destruction", but I'm probably going to re-name it later because "destruction" isn't quite what I'm trying to capture. Basically, it means you shouldn't go around "destroying" stuff without an adequate reason. Protecting a non-waived, non-forfeited right is always an adequate reason, but apart from that I don't have a full explanation; how good the reason has to be depends on how severe the act it justifies is. ("I was bored" might be an adequate reason to pluck and shred a blade of grass, but not to set a tree on fire, for instance.) This principle has the effect, among others, of ruling out revenge/retribution/punishment for their own sakes, although deterrence and preventing recurrence of wrong acts are still valid reasons to punish or exact revenge/retribution.

In cases where rights conflict, and there's no alternative that doesn't violate at least one, I privilege the null action. (I considered denying ought-implies-can, instead, but decided that committed me to the existence of moral luck and wasn't okay.) "The null action" is the one where you don't do anything. This is because I uphold the doing-allowing distinction very firmly. Letting something happen might be bad, but it is never as bad as doing the same something, and is virtually never as bad as performing even a much more minor (but still bad) act.

I hold agents responsible for their culpable ignorance and anything they should have known not to do, as though they knew they shouldn't have done it. Non-culpable ignorance and its results is exculpatory. Culpability of ignorance is determined by the exercise of epistemic virtues like being attentive to evidence etc. (Epistemologically, I'm an externalist; this is just for ethical purposes.) Ignorance of any kind that prevents something bad from happening is not exculpatory - this is the case of the would-be murderer who doesn't know his gun is unloaded. No out for him. I've been saying "acts", but in point of fact, I hold agents responsible for intentions, not completed acts per se. This lets my morality work even if solipsism is true, or we are brains in vats, or an agent fails to do bad things through sheer incompetence, or what have you.

Replies from: Jack, lessdazed, AngryParsley, AndyWood, blacktrance, wedrifid
comment by Jack · 2010-02-03T17:57:50.909Z · LW(p) · GW(p)

Upvoted for spelling out so much, though I disagree with the whole approach (though I think I disagree with the approach of everyone else here too). This reads like pretty run-of-the-mill deontology -- but since I don't know the field that well, is there anywhere you differ from most other deontologists?

Also, are rights axiomatic or is there a justification embedded in your concept of personhood (or somewhere else)?

Replies from: Alicorn
comment by Alicorn · 2010-02-03T18:03:12.204Z · LW(p) · GW(p)

The quintessential deontologist is Kant. I haven't paid too much attention to his primary sources because he's miserable to read, but what Kant scholars say about him doesn't sound like what goes through my head. One place I can think of where we'd diverge is that Kant doesn't forbid cruelty to animals except inasmuch as it can deaden humane intuitions; my principle of needless destruction forbids it on its own demerits. The other publicly deontic philosopher I know of is Ross, but I know him only via a two-minute unsympathetic summary which - intentionally or no - made his theory sound very slapdash, like he has sympathies to the "it's sleek and pretty" defense of utilitarianism but couldn't bear to actually throw in his lot with it.

The justification is indeed embedded in my concept of personhood. Welcome to personhood, here's your rights and responsibilities! They're part of the package.

Replies from: simplicio, Jack
comment by simplicio · 2010-05-21T19:54:40.712Z · LW(p) · GW(p)

Ross is an interesting case. Basically, he defines what I would call moral intuitions as "prima facie duties." (I am not sure what ontological standing he thinks these duties have.) He then lists six important ones: beneficence, honour, non-maleficence, justice, self-improvement and... goodness, I forget the 6th. But essentially, all of these duties are important, and one determines the rightness of an act by reflection - the most stringent duty wins and becomes the actual moral duty.

E.g., you promised a friend that you would meet them, but on the way you come upon the scene of a car crash. A person is injured, and you have first aid training. So basically Ross says we have a prima facie duty to keep the promise (honour), but also to help the motorist (beneficence), and the more stringent one (beneficence) wins.

I like about it that: it adds up to normality, without weird consequences like act utilitarianism (harvest the traveler's organs) or Kantianism (don't lie to the murderer).

I don't like about it that: it adds up to normality, i.e., it doesn't ever tell me anything I don't want to hear! Since my moral intuitions are what decides the question, the whole thing functions as a big rubber stamp on What I Already Thought. I can probably find some knuckle-dragging bigot within a 1-km radius who has a moral intuition that fags must die. He reads Ross & says: "Yeah, this guy agrees with me!" So there is a wrong moral intuition. On the other hand, before reading Peter Singer (a consequentialist), I didn't think it was obligatory to give aid to foreign victims of starvation & preventable disease; now I think it is as obligatory as, in his gedanken experiment, pulling a kid out of a pond right beside you (even though you'll ruin your running shoes). Ross would not have made me think of that; whatever "seemed" right to me, would be right.

I am also really, really suspicious of the a priori and the prima facie. It seems very handwavy to jump straight to these "duties" when the whole point is to arrive at them from something that is not morality - either consequences or through some sort of symmetry.

Replies from: AlexanderRM
comment by AlexanderRM · 2015-03-27T19:02:31.482Z · LW(p) · GW(p)

"The whole thing functions as a big rubber stamp on What I Already Thought"

Speaking as a (probably biased) consequentialist, I generally got the impression that this was pretty much the whole point of Deontology.

However, the example of Kant being against lying seems to go against my impression. Kantian deontology is based on reasoning things about your rules, so it seems to be consistent in that case.

Still, it seems to me that more mainstream Deontology allows you to simply make up new categories of acts (ex. lying is wrong, but lying to murderers is OK) in order to justify your intuitive response to a thought experiment. How common is it for Deontologists to go "yeah, this action has utterly horrific consequences, but that's fine because it's the correct action", the way it is for Consequentialists to do the reverse? (again noting that I've now heard about the example of Kant, I might be confusing Deontology with "intuitive morality" or "the noncentral fallacy".)

comment by Jack · 2010-02-04T05:55:42.460Z · LW(p) · GW(p)

So I think I have pretty good access to the concept of personhood but the existence of rights isn't obvious to me from that concept. Is there a particular feature of personhood that generates these rights?

Replies from: Alicorn
comment by Alicorn · 2010-02-04T14:09:28.208Z · LW(p) · GW(p)

That's one of my not-finished things: spelling out exactly why I think you get there from here.

comment by lessdazed · 2011-07-24T01:13:22.623Z · LW(p) · GW(p)

Rather than take the "horrible consequences" tack, I'll go in the other direction. How possible is it that something can be deontologically right or wrong if that something is something no being cares about, nor do they care about any of its consequences, by any extrapolation of their wants, likes, conscious values, etc., nor should they think others care? Is it logically possible?

a rights-holder may, at eir option, waive any and all rights ey has, so uncoerced suicide or assisted suicide is not wrong...the would-be murderer who doesn't know his gun is unloaded.

Replies from: Alicorn, Peterdjones
comment by Alicorn · 2011-07-24T02:38:16.157Z · LW(p) · GW(p)

You seem to answer your own question in the quote you chose, even though it seems like you chose it to critique my inconsistent pronoun use. If no being cares about something, nor wants others to care about it, then they're not likely to want to retain their rights over it, are they?

The sentences in which I chose "ey" are generic. The sentences in which I used "he" are about a single sample person.

Replies from: lessdazed
comment by lessdazed · 2011-07-24T03:24:23.620Z · LW(p) · GW(p)

So if they want to retain without interruption their right to, say, not have a symmetrical spherical stone at the edge of their lawn rotated without permission, they perforce care whether or not it is rotated? They can't merely want a right? Or if they want a right, and have a right, and they don't care to exercise the right, but want to retain the right, they can't? What if the only reason they care to prohibit stone turning is to retain the right? Does that work? Is there a special rule saying it doesn't?

As part of testing theories to see when they fail rather than succeed, my first move is usually to try recursion.

not likely to want to retain their rights over it

Least convenient possible world, please.

Regardless, you seem to believe that some other forms of deontology are wrong but not illogical, and believe consequentialist theories wrong or illogical. For example, a deontology otherwise like yours, but that valued attentiveness to evidence more, you would label wrong but not illogical. I ask whether you would consider a deontological theory invalid if it ignored the wants, cares, etc. of beings, not whether or not that is part of your theory.

If it's not illogical and merely wrong, then is that to say you count that among the theories that may be true, if you are mistaken about facts, but not mistaken about what is illogical and not?

I think such a deontology would be illogical, but am to various degrees unsure about other theories: about which are right and which wrong, and about the severity and number of wounds in the wrong ones. Because this deontology seems illogical, it makes me suspicious of its cousin theories, as it might be a salient case exhibiting a common flaw.

I think it is more intellectually troubling than the hypothetical of committing a small badness to prevent a larger one, but as it is rarely raised presumably others disagree or have different intuitions.

I don't see the point of mucking with the English language and causing confusion for the sake of feminism if the end result is that singular sample murderers are gendered. It seems like the worst of both worlds.

Replies from: Alicorn
comment by Alicorn · 2011-07-24T03:53:23.840Z · LW(p) · GW(p)

I don't think people have the silly right you have described.

I don't think your attempt at "recursion" is useful unless you are interested in rigorously defining "want" and "care" and any other words you are tempted to employ in that capacity.

I don't think I have drawn on an especially convenient possible world.

I don't think you're reading me charitably, or accurately.

I don't think you're predicting my dispositions correctly.

I don't think you're using the words "invalid" or "illogical" to refer to anything I'm accustomed to using the words for.

I don't think you make very much sense.

I don't think I consulted you, or solicited your opinion about, my use of pronouns.

I don't think you're initiating this conversation in good faith.

Replies from: lessdazed
comment by lessdazed · 2011-07-24T04:16:50.514Z · LW(p) · GW(p)

I'm sorry you feel that way. I tried to be upfront about my positions that you would disfavor: a form of feminism and also deontology. Perhaps you interpreted as egregious malicious emphasis on differences what I intended as the opposite.

Also, I think what you're interpreting as predicting dispositions wrongly is what I see as trying to spell out all possible objections as a way to have a conversation with you, rather than a debate in which the truth falls out of an argument. That means I raise objections that we might anticipate someone with a different system would raise, rather than setting up to clash with you.

I think that when you say I am not reading you charitably or accurately, you have taken what was a very reasonable misreading of my first comment and failed to update based on my second. I'm not talking about your theory. I'm trying to ask how fundamental the problems are in a somewhat related theory. Whether your theory escapes its gravity well of wrongness depends on both the distance from the mass of doom and its size. I hope that analogy was clear, as apparently other stuff hasn't been. So you can probably imagine what I think, as it somewhat mirrors what you seem to think: you're not reading me charitably, accurately, etc. I know you're not innately evil, of course, that's obvious and foundational to communication.

Replies from: Alicorn
comment by Alicorn · 2011-07-24T04:28:54.616Z · LW(p) · GW(p)

I am exiting this conversation now. I believe it will net no good.

comment by Peterdjones · 2013-01-18T18:21:49.457Z · LW(p) · GW(p)

Kant's answer, greatly simplified, is that rational agents will care about following moral rules, because that is part of rationality.

comment by AngryParsley · 2010-02-03T19:14:52.631Z · LW(p) · GW(p)

Why those particular rights? It seems rather convenient that they mostly arrive at beneficial consequences and jibe with human intuitions. Kind of like how biblical apologists have explanations that just happen to coincide with our current understanding of history and physics.

If you lived in a world where your system of rights didn't typically lead to beneficial consequences, would you still believe them to be correct?

Replies from: Alicorn, Jack, wedrifid
comment by Alicorn · 2010-02-03T19:22:12.776Z · LW(p) · GW(p)

Why those particular rights?

What do you mean, "these particular rights"? I haven't presented a list. I mentioned one right that I think we probably have.

It seems rather convenient that they mostly arrive at beneficial consequences and jibe with human intuitions. Kind of like how biblical apologists have explanations that just happen to coincide with our current understanding of history and physics.

Oh, now, that was low.

If you lived in a world where your system of rights didn't typically lead to beneficial consequences, would you still believe them to be correct?

Do you mean: does Alicorn's nearest counterpart who grew up in such a world share her opinions? Or do you mean: if the Alicorn from this world were transported to a world like this, would she modify her ethics to suit the new context? They're different questions.

Replies from: AngryParsley
comment by AngryParsley · 2010-02-03T19:34:53.008Z · LW(p) · GW(p)

I haven't presented a list.

Yeah, but most people don't come up with a moral system that arrives at undesirable consequences in typical circumstances. Ditto for going against human intuitions/culture.

They're different questions.

Now I'm curious. Is your answer to them different? Could you please answer both of those hypotheticals?

ETA: If your answer is different, then isn't your morality in fact based solely on the consequences and not some innate thing that comes along with personhood?

Replies from: Alicorn
comment by Alicorn · 2010-02-03T20:13:13.562Z · LW(p) · GW(p)

does Alicorn's nearest counterpart who grew up in such a world share her opinions?

Almost certainly, she does not. Otherworldly-Alicorn-Counterpart (OAC) has a very different causal history from me. I would not be surprised to find any two opinions differ between me and OAC, including ethical opinions. She probably doesn't even like chocolate chip cookie dough ice cream.

if the Alicorn from this world were transported to a world like this, would she modify her ethics to suit the new context?

No. However: after an adjustment period in which I became accustomed to the new world, my epistemic state about the likely consequences of various actions would change, and that epistemic state has moral force in my system as it stands. The system doesn't have to change at all for a change in circumstance and accompanying new consequential regularities to motivate changes in my behavior, as long as I have my eyes open. This doesn't make my morality "based on consequences"; it just means that my intentions are informed by my expectations which are influenced by inductive reasoning from the past.

Replies from: AngryParsley
comment by AngryParsley · 2010-02-03T23:20:35.447Z · LW(p) · GW(p)

I guess the question I meant to ask was: In a world where your deontology would lead to horrible consequences, do you think it is likely for someone to come up with a totally different deontology that just happens to have good consequences most of the time in that world?

A ridiculous example: If an orphanage exploded every time someone did nothing in a moral dilemma, wouldn't OAC be likely to invent a moral system saying inaction is more bad than action? Wouldn't OAC also likely believe that inaction is inherently bad? I doubt OAC would say, "I privilege the null action, but since orphanages explode every time we do nothing, we have to weigh those consequences against that (lack of) action."

Your right not to be killed has a list of exceptions. To me this indicates a layer of simpler rules underneath. Your preference for inaction has exceptions for suitably bad consequences. To me this seems like you're peeking at consequentialism whenever the consequences of your deontology are bad enough to go against your intuitions.

Replies from: Alicorn
comment by Alicorn · 2010-02-03T23:39:21.824Z · LW(p) · GW(p)

I guess the question I meant to ask was: In a world where your deontology would lead to horrible consequences, do you think it is likely for someone to come up with a totally different deontology that just happens to have good consequences most of the time in that world?

It seems likely indeed that someone would do that.

If an orphanage exploded every time someone did nothing in a moral dilemma

I think that in this case, one ought to go about getting the orphans into foster homes as quickly as possible.

One thing that's very complicated and not fully fleshed out that I didn't mention is that, in certain cases, one might be obliged to waive one's own rights, such that failing to do so is a contextually relevant wrong act and forfeits the rights anyway. It seems plausible that this could apply to cases where failing to waive some right will lead to an orphanage exploding.

comment by Jack · 2010-02-04T06:35:09.425Z · LW(p) · GW(p)

It seems rather convenient that they mostly arrive at beneficial consequences and jibe with human intuitions.

Agreed. It is also rather convenient that maximizing preference satisfaction rarely involves violating anyone's rights and mostly jibes with human intuitions.

And that's because normative ethics is just about trying to come up with nice-sounding theories to explain our ethical intuitions.

Replies from: AngryParsley
comment by AngryParsley · 2010-02-04T12:54:49.965Z · LW(p) · GW(p)

Umm... torture vs. dust specks is counterintuitive and violates rights. Utilitarian consequentialists also flip the switch in the trolley problem, again violating rights.

It doesn't sound nice or explain our intuitions. Instead, the goal is the most good for the most people.

Replies from: Jack
comment by Jack · 2010-02-04T19:39:28.757Z · LW(p) · GW(p)

I said:

maximizing preference satisfaction rarely involves violating anyone's rights and mostly jibes with human intuitions.

Those two examples are contrived to demonstrate the differences between utilitarianism and other theories. They hardly represent typical moral judgments.

comment by wedrifid · 2010-02-03T19:36:36.161Z · LW(p) · GW(p)

Why those particular rights?

Because she says so. Which is a good reason. Much as I have preferences for possible worlds because I say so.

comment by AndyWood · 2010-02-03T18:41:58.180Z · LW(p) · GW(p)

Thanks for writing this out. I think you'll be unsurprised to learn that this substantially matches my own "moral code", even though I am (if I understand the terminology correctly) a utilitarian.

I'm beginning to suspect that the distinction between these two approaches comes down to differences in background and pre-existing mental concepts. Perhaps it is easier, more natural, or more satisfying for certain people to think in these (to me) very high abstractions. For me, it is easier, more natural, and more satisfying to break down all of those lofty concepts and dynamics again, and again, until I've arrived (at least in my head) at the physical evolution of the world into successive states that have ranked value for us.

EDIT: FWIW, you have actually changed my understanding of deontology. Instead of necessarily involving unthinking adherence to rules handed down from on-high/outside, I can now see it as proceeding from more basic moral concepts.

comment by blacktrance · 2014-02-11T20:53:29.415Z · LW(p) · GW(p)

I find myself largely in agreement with most of this, despite being a consequentialist (and an egoist!).

Replies from: Alicorn
comment by Alicorn · 2014-02-11T21:00:24.215Z · LW(p) · GW(p)

Where's the point of disagreement that makes you a consequentialist, then?

Replies from: blacktrance
comment by blacktrance · 2014-02-11T21:08:52.981Z · LW(p) · GW(p)

Because while I agree that people have rights and that it's wrong to violate them, rights are themselves derived from consequences and preferences (via contractarian bargaining); also, "rights" refers to what governments ought to protect, not necessarily what individuals should respect (though most of the time, individuals should respect rights). For example, though in normal life, justice requires you* to not murder a shopkeeper and steal his wares, murder would be justified in a more extreme case, such as to push a fat man in front of a trolley, because in that case you're saving more lives, which is more important.

My main disagreement, though, is that deontology (and traditional utilitarianism, and all agent-neutral ethical theories in general) fails to give a sufficient explanation of why we should be moral.

* By which I mean something like "in order to derive the benefits of possessing the virtue of justice". I'm also a virtue ethicist.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-02-11T21:21:53.071Z · LW(p) · GW(p)

Consequentialism can override rules just where consequences can be calculated...which is very rarely.

comment by wedrifid · 2010-02-03T18:04:14.127Z · LW(p) · GW(p)

Wow. You would try to stop me from saving the world. You are evil. How curious.

Replies from: Alicorn
comment by Alicorn · 2010-02-03T18:06:40.572Z · LW(p) · GW(p)

Why, what wrong acts do you plan to commit in attempting to save the world?

Do you believe that the world's inhabitants have a right to your protection? Because if they do, that'll excuse some things.

Replies from: wedrifid
comment by wedrifid · 2010-02-03T18:49:26.530Z · LW(p) · GW(p)

Why, what wrong acts do you plan to commit in attempting to save the world?

Evil and cunning. No! I shall not be revealing my secret anti-diabolical plans. Now is the time for me to assert with the utmost sincerity my devotion to a compatible deontological system of rights (and then go ahead and act like a consequentialist anyway).

Do you believe that the world's inhabitants have a right to your protection? Because if they do, that'll excuse some things.

Absolutely!

Ok, give me some perspective here. Just how many babies worth of excuse? Consider this counterfactual:

Robin has been working in secret with a crack team of biomedical scientists in his basement. He has fully functioning brain uploading and emulating technology at his fingertips. He believes wholeheartedly that releasing em technology into the world will bring about some kind of economist utopia, a 'subsistence paradise'. The only chance I have to prevent the release is to beat him to death with a cute little puppy. Would that be wrong?

Perhaps a more interesting question is would it be wrong for you not to intervene and stop me from beating Robin to death with a puppy?

Does it matter whether you have been warned of my intent? Assume that all you knew was that I assign a low utility to the future Robin seeks, Robin has a puppy weakness and I have just discovered that Robin has completed his research. Would you be morally obliged to intervene?

Now, Robin is standing with his hand poised over the button, about to turn the future of our species into a hardscrapple dystopia. I'm standing right behind him wielding a puppy in a two-handed grip and you are right there with me. Would you kill the puppy to save Robin?

Replies from: Alicorn
comment by Alicorn · 2010-02-03T18:58:30.385Z · LW(p) · GW(p)

Evil and cunning.

Aw, thanks...?

If there is in fact something morally wrong about releasing the tech (your summary doesn't indicate it clearly, but I'd expect it from most drastic actions Robin seems like he would be disposed to take), you can prevent it by, if necessary, murderously wielding a puppy, since attempting to release the tech would be a contextually relevant wrong act. Even if I thought it was obligatory to stop you, I might not do it. I'm imperfect.

Replies from: wedrifid, wedrifid
comment by wedrifid · 2010-02-03T19:25:47.856Z · LW(p) · GW(p)

Even if I thought it was obligatory to stop you, I might not do it. I'm imperfect.

That is promising. Would you let me kill Dave too?

Replies from: Alicorn
comment by Alicorn · 2010-02-03T20:06:02.857Z · LW(p) · GW(p)

If you're in the room with Dave, why wouldn't you just push the AI's reset button yourself?

Replies from: wedrifid
comment by wedrifid · 2010-02-04T02:22:36.578Z · LW(p) · GW(p)

See link. Depends on how I think he would update. I would kill him too if necessary.

comment by wedrifid · 2010-02-03T19:11:37.536Z · LW(p) · GW(p)

If there is in fact something morally wrong about releasing the tech

I don't know about morals, but I hope it was clear that the consequences were assigned a low expected utility. The potential concern would be that your morals interfered with me seeking desirable future outcomes for the planet.

comment by sark · 2010-02-03T16:19:11.304Z · LW(p) · GW(p)

So morality is one shard of desire from godshatter which deontologists think matters a lot?

Replies from: Alicorn, wedrifid
comment by Alicorn · 2010-02-03T16:20:05.335Z · LW(p) · GW(p)

I really don't think desire enters into it, except in certain very indirect ways.

comment by wedrifid · 2010-02-03T16:48:44.649Z · LW(p) · GW(p)

So morality is one shard of desire from godshatter which deontologists think matters a lot?

Yes, although a deontologist will likely not want to describe it that way. It makes their entire ethical framework look like a kludgy hack.

comment by Jack · 2010-02-03T17:03:29.068Z · LW(p) · GW(p)

I'm a consequentialist, and treating morality as terminal seems to me like missing the point of morality entirely. I'm glad I got it right that deontologists think that way. But I can't understand why you would consider treating morality as terminal correct.

I can't understand how anyone could think there was a fact of the matter about this. How possibly could I decide which was a better way to treat morality?

Replies from: AndyWood
comment by AndyWood · 2010-02-03T17:18:03.165Z · LW(p) · GW(p)

I don't think there has to be an objective fact of the matter about it, in order for a person to find one way better. I don't endorse the use of the word correct here, but I would say that consequentialism seems to be more fundamental. Deontology seems to stop too soon. Because I prefer reductionist explanations in general, it's easy for me to decide that some form of consequentialism is "better", given my preference. I'm interested to learn the reasons why others find deontology more appealing.

Replies from: Jack
comment by Jack · 2010-02-03T17:25:03.709Z · LW(p) · GW(p)

Got it.

Fwiw, it seems to me that reduction is for natural objects not social constructions. You shouldn't try to apply reductionism to the rules of a sporting event, for example. (I'm not a deontologist but I don't think it is any less appealing than consequentialism.)

Replies from: AndyWood
comment by AndyWood · 2010-02-03T17:39:08.261Z · LW(p) · GW(p)

It may be a different flavor of reductionism from finding out how a clock works, but I still apply a kind of reductionism to social constructions, and well, pretty much everything. Social constructions, for example, have histories - origins and evolution - that I greatly enjoy digging into. You can read about how basketball originated, and what its creator was thinking when he selected the rules.

Replies from: Jack
comment by Jack · 2010-02-03T18:01:28.198Z · LW(p) · GW(p)

Sure, I see what you mean. But you can't just change the rules of basketball because you don't think they fit what the creator was trying to do. Similarly, the relevant reduction for ethics is cultural evolution and evolutionary psychology, but those fields of study won't tell you how to act.

ETA: Can't, not can. Oops.

comment by DanielLC · 2010-05-20T06:20:14.488Z · LW(p) · GW(p)

I, a consequentialist, consider morality terminal. I take actions that result in moral consequences.

Replies from: sark
comment by sark · 2010-05-21T11:17:23.614Z · LW(p) · GW(p)

OK, but what does this morality you consider terminal consist of? And why do you take it to be the way you take it to be?

comment by Kaj_Sotala · 2010-01-31T08:58:10.590Z · LW(p) · GW(p)

You can then extensionally define "renate" as "has a spinal column"

But what "renate" means intensionally has to do with kidneys, not spines.

I don't think this has been covered here yet, so for those not familiar with these two terms: inferring something extensionally means you infer something based on the set an object belongs to. Inferring something intensionally means you infer something based on the actual properties of the object.

Wikipedia formulates these as

An extensional definition of a concept or term formulates its meaning by specifying its extension, that is, every object that falls under the definition of the concept or term in question.

For example, an extensional definition of the term "nation of the world" might be given by listing all of the nations of the world, or by giving some other means of recognizing the members of the corresponding class.

and

an intensional definition gives the meaning of a term by specifying all the properties required to come to that definition, that is, the necessary and sufficient conditions for belonging to the set being defined.

For example, an intensional definition of "bachelor" is "unmarried man." Being an unmarried man is an essential property of something referred to as a bachelor. It is a necessary condition: one cannot be a bachelor without being an unmarried man. It is also a sufficient condition: any unmarried man is a bachelor.

Rule of thumb in case you forget which is which: EXTEnsion refers to "external" properties, like the group you happen to belong to, while INTEnsion refers to internal properties.
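A minimal sketch of the distinction in code (my own hypothetical illustration, not from Wikipedia or the post; the Person class and the names in it are made up):

    from dataclasses import dataclass

    @dataclass
    class Person:
        name: str
        is_man: bool
        is_married: bool

    people = [Person("Alice", False, False),
              Person("Bob", True, False),
              Person("Carl", True, True)]

    # Extensional definition: pick out the members by listing them.
    bachelors_extensional = {"Bob"}

    # Intensional definition: state the necessary and sufficient conditions for membership.
    def is_bachelor(p: Person) -> bool:
        return p.is_man and not p.is_married

    bachelors_intensional = {p.name for p in people if is_bachelor(p)}

    # The two definitions agree on this population, but only the intensional one
    # says why each member belongs.
    assert bachelors_extensional == bachelors_intensional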

Replies from: Vladimir_Nesov, arbimote
comment by Vladimir_Nesov · 2010-01-31T11:35:44.421Z · LW(p) · GW(p)

It was covered in Extensions and Intensions.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-01-31T15:21:19.755Z · LW(p) · GW(p)

You're right, I'd forgotten about that.

comment by arbimote · 2010-02-01T07:06:42.710Z · LW(p) · GW(p)

Are extensional and intensional definitions related to outside views and inside views? I suppose extensional definitions and the outside view are about drawing conclusions from a class of things, while the intensional definition and the inside view use specific details more unique to the thing in question.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2010-02-03T16:46:49.123Z · LW(p) · GW(p)

It seems to me that they are at least somewhat related. Recently, I've been wondering to which degree extensional/intensional definitions, the outside/inside view and the near/far view might be different ways of looking at the same two modes of reasoning.

(I had a longer post of it in mind, and thought I had come up with something important, but now I've forgotten what the important part of it was. :-( )

comment by Paul Crowley (ciphergoth) · 2010-01-30T18:46:49.208Z · LW(p) · GW(p)

It seems to me that this addresses two very different purposes for moral judgments in one breath.

When trying to draw a moral judgment on the act of another, what they knew at the time and their intentions will play a big role. But this is because I'm generally building a predictive model of whether or not they're going to do good in the future. By contrast, when I'm trying to assess my own future actions, I don't see what need concern me except whether act A or act B brings about more good.

Replies from: Nisan, RobinZ
comment by Nisan · 2010-01-31T08:46:12.721Z · LW(p) · GW(p)

It seems to me that this addresses two very different purposes for moral judgments in one breath.

A possible role for deontic morality is assigning blame or approbation. For this it is necessary to take the agent's intent and level of knowledge into consideration. Blaming or approving people for their actions is part of the enforcement process in society.

I'm not trying to justify deontology; I'm observing that this feature and others (like the use of reference classes and the bit about keeping promises) make deontology well-suited to (or a natural feature of) prerational societies.

Replies from: Alicorn
comment by Alicorn · 2010-01-31T15:09:58.396Z · LW(p) · GW(p)

It's usually a province of consequentialism to separate rightness/wrongness from praiseworthiness/blameworthiness. They come together in other accounts. Appropriating deontic rules for only the latter purpose isn't using deontic morality proper, it's using a deontic-esque structure for blame and praise alone.

Replies from: AndyWood
comment by AndyWood · 2010-02-03T17:57:47.597Z · LW(p) · GW(p)

This point seems very important to me. I wonder how much disagreement is due to this, which I see as conflation.

How should I act? and How should I assign blame/praise? are very different questions. For one thing, when asking how to assign blame/praise, the framework for deciding blame/praiseworthiness is obviously key. However, when asking how oneself should act, the agent will have any number of considerations, and how praise or blame will be assigned by others may be a small or non-existent factor, depending on the situation.

In general, it seems like praisers and blamers will tend to be in a position of advocating for society, and actors will tend to be in a position of advocating for their individual interests.

Is there some motivation for wanting to unify these differing angles under one framework?

Replies from: Alicorn
comment by Alicorn · 2010-02-03T18:05:18.587Z · LW(p) · GW(p)

Perhaps germane to the distinction, at least for me, is that I find myself more interested in how to avoid blame and seek praise, so conflating them lets me figure out how to do that and also how to do what I should do simultaneously.

How one should assign blame/praise and how those things will in fact be assigned, however, are almost completely unrelated. One could be both blamed and unblameworthy, or praised and unpraiseworthy.

comment by RobinZ · 2010-01-30T18:49:02.099Z · LW(p) · GW(p)

By contrast, when I'm trying to assess my own future actions, I don't see what need concern me except whether act A or act B brings about more good.

You are a consequentialist. Your reply is precisely accurate, complete, and well-reasoned from a consequentialist perspective, but misses the essential difference between consequentialism and deontology.

Edit: Quoting the OP:

If a deontologist says "lying is wrong", and you mentally add something that sounds like "because my utility function has a term in it for the people around believing accurate things. Lying tends to decrease the extent to which they do so, but if I knew that somebody would believe the opposite of whatever I said, then to maximize the extent to which they believed true things, I would have to lie to them. And I would also have to lie if some other, greater term in my utility function were at stake and I could only salvage it with a lie. But in practice the best I can do is to maximize my expected utility, and as a matter of fact I will never be as sure that lying is right as I'd need to be for it to be a good bet."... you, my friend, have missed the point.

comment by Wei Dai (Wei_Dai) · 2010-02-01T13:28:55.006Z · LW(p) · GW(p)

For those curious about what kind of case can be made for deontology vs. consequentialism:

Replies from: drnickbone
comment by drnickbone · 2012-04-30T19:15:47.451Z · LW(p) · GW(p)

A big issue I have with act utilitarianism is the way it self-destructs pragmatically.

It looks like better consequences will arise if we teach a form of deontology, reward or punish people who (respectively) follow or break the moral rules, call it "right" to follow the rules and "wrong" to break them etc. So a true act consequentialist will encourage everyone to become a deontologist (and to the extent others copy him, will act like a deontologist). "Rule utilitarianism" seems immune from this problem, though arguably rule utilitarianism is a form of deontology; it just has an underlying rationale for selecting a particular set of rules (i.e. the optimal moral code).

A different objection is that it is simply too demanding: the best way for me to maximize utility is to give nearly all my money to humanitarian charities, so why aren't I doing that? (Answer, because my personal utility function has very weak correlation with a global additive or average utility function; though it does seem to have a strongly weighted component towards me personally following deontological rules. Funny that.)

comment by Stuart_Armstrong · 2010-02-01T11:55:31.313Z · LW(p) · GW(p)

Deontological arguments (apart from helping with "running on corrupted hardware") are useful for the compression of moral values. It's much easier to check your one-line deontology, than to run a complicated utility function through your best estimate of the future world.

A simple "do not murder" works better, for most people, than a complex utilitarian balancing of consequences and outcomes. And most deontological aguments are not rigid; they shade towards consequentialism when the consequences get too huge:

  • Freedom of speech is absolute - but don't shout fire in a crowded theatre.
  • Don't murder - except in war, or in self-defence.
  • Do not lie - but tell the Nazis your cellars are empty of Jews.

Where deontology breaks down is when the situation is unusual: where simply adding extra patches doesn't work. When dealing with large quantities of odd minds in odd universes, or with the human race over the span of eons, it's time to break out the consequentialism, and accept that our moral intuitions can no longer be deontologically compressed.
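To make the compression point concrete, here is a tiny hypothetical sketch (my own illustration, not Stuart_Armstrong's; all names and the act list are made up): the deontological check is a constant-time rule lookup, while the consequentialist check needs a caller-supplied world model and utility function evaluated over many sampled futures.

    FORBIDDEN_ACTS = {"murder", "lying", "theft"}

    def deontological_check(act: str) -> bool:
        """One-line rule: an act is permitted unless its type is on the list."""
        return act not in FORBIDDEN_ACTS

    def expected_utility(act, simulate_future, utility, n_samples=1000):
        """Consequentialist check: average utility over sampled futures of the act."""
        return sum(utility(simulate_future(act)) for _ in range(n_samples)) / n_samples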

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-02-01T12:38:49.380Z · LW(p) · GW(p)

I think what you're talking about isn't deontology, but rule utilitarianism.

Replies from: thomblake, Stuart_Armstrong
comment by thomblake · 2010-02-01T14:27:20.892Z · LW(p) · GW(p)

"rule utilitarianism" collapses into deontology or regular utilitarianism when pushed; otherwise, it's inconsistent. Though it is generally accepted by utilitarians that acting according to general rules will in practice generate more utility than trying to reason about every situation anew.

comment by Stuart_Armstrong · 2010-02-01T12:58:40.860Z · LW(p) · GW(p)

Possibly; are they distinguished in practice?

comment by RichardChappell · 2010-01-31T05:23:47.925Z · LW(p) · GW(p)

extensional definitions are terribly unsatisfactory

True enough, but it's worth noting that what we have here (between a deontological theory and its 'consequentialized' doppelganger) is necessary co-extension. Less chordates and renates, more triangularity and trilaterality. And it's philosophically controversial whether there can be distinct but necessarily co-extensive properties. (I think there can be; but I just thought this was worth flagging.)

Replies from: Jack
comment by Jack · 2010-01-31T05:36:33.717Z · LW(p) · GW(p)

Good point. If we do another survey (and it is about time) I'd like to know how people here stand on the existence of abstract objects (universals, types, etc.)

Replies from: CannibalSmith
comment by CannibalSmith · 2010-01-31T10:02:37.021Z · LW(p) · GW(p)

Abstract objects exist in my mind. The end.

Replies from: wedrifid
comment by wedrifid · 2010-01-31T10:22:23.260Z · LW(p) · GW(p)

No they don't. The end.

comment by CannibalSmith · 2010-01-31T09:56:20.064Z · LW(p) · GW(p)

What was the difference between hedonism and preferentialism again?

Replies from: wedrifid
comment by wedrifid · 2010-01-31T10:26:47.087Z · LW(p) · GW(p)

Give people pleasure vs. give people what they really want.

comment by Morendil · 2010-01-30T18:05:41.403Z · LW(p) · GW(p)

Typo: "prise apart" not "prize apart".

EDIT: another typo: "tl;dr" at the start of the post. Please consider getting rid of this habit. Your writing is, as a rule, improved by moving your main point to the top, and this reader appreciates your doing that; the cutesy Internetism is an needless distraction.

Replies from: wedrifid, Alicorn, arbimote, Alicorn, MrHen, CannibalSmith
comment by wedrifid · 2010-01-31T10:14:48.006Z · LW(p) · GW(p)

Typo: "a" not "an".

I too find "tl;dr" irritating. It is entirely unintuitive and looks like a rendering error. Too Obfuscated; Don't Decode.

ETA Typo: missing 'is' ;)

comment by Alicorn · 2010-01-30T18:23:32.756Z · LW(p) · GW(p)

Is there a suitable substitute for tl;dr that you would find less distracting? I do want to signal "this is an ultra-short summary" to avoid people interpreting it as part of the "flow" of the whole article.

Replies from: Morendil, Vladimir_Golovin, HughRistik, komponisto, k3nt
comment by Morendil · 2010-01-30T18:31:34.615Z · LW(p) · GW(p)

Signaling might not be necessary, as your summary normally serves as a "hook" to draw readers into the body of the article.

That said, you could italicize or bold (my preference) the summary, or set it off from the body with a horizontal rule.

Replies from: Alicorn
comment by Alicorn · 2010-01-30T18:35:20.356Z · LW(p) · GW(p)

Italicized. Thanks for your input.

comment by Vladimir_Golovin · 2010-01-31T13:17:58.804Z · LW(p) · GW(p)

Is there a suitable substitute for tl;dr that you would find less distracting?

I recently had a need for such a substitute to summarize a long email to an extremely busy non-chatty high-status person. I went with "In a nutshell", and it worked -- I got a nice reply.

(TL;DR is perfectly fine with me, but I don't think it's appropriate when addressing people who are unlikely to keep up with the latest Internet slang.)

comment by HughRistik · 2010-01-31T05:08:54.102Z · LW(p) · GW(p)

The more academic substitute is "abstract."

Not that I have anything against good ol' TL;DR.

comment by komponisto · 2010-01-30T18:38:51.396Z · LW(p) · GW(p)

How about "(Ultra-Short) Summary:..."?

comment by k3nt · 2010-02-02T02:50:40.877Z · LW(p) · GW(p)

tl;dr to me indicates something you say about somebody else's post (which you didn't bother to read because you found it too long). Used w/r/t one's own post it's very confusing.

I use "Shorter me:"

for what that's worth.

comment by arbimote · 2010-02-01T07:13:26.829Z · LW(p) · GW(p)

I personally don't mind "tl;dr", but I agree that where practical it is best to use language that will be understood by as wide an audience as possible. (Start using "tl;dr" again when it becomes mainstream :) )

Replies from: wedrifid
comment by wedrifid · 2010-02-01T07:24:47.032Z · LW(p) · GW(p)

Start using "tl;dr" again when it becomes mainstream :)

Please don't. I need to budget my downvotes!

comment by Alicorn · 2010-01-30T18:08:14.152Z · LW(p) · GW(p)

Thanks, I'll fix it ^^

comment by MrHen · 2010-01-31T17:24:39.426Z · LW(p) · GW(p)

"Prise" is a variant spelling of "prize" in my dictionary. Are we looking for the word "pry"?

Replies from: Morendil
comment by Morendil · 2010-01-31T17:44:55.947Z · LW(p) · GW(p)

Dunno - I didn't actually check the dictionary, just a Google search for relative frequency of "prize apart" which I found jarring, vs. "prise apart" which sounded no alarm. The first mostly appears with "prize" being a noun not a verb, so I supposed my gut feel was correct. Call that the Language Log method. ;)

The dictionary method does suggest "prize apart" is also correct, if less common. Looks like I made a wrong call.

Replies from: MrHen
comment by MrHen · 2010-01-31T17:59:07.783Z · LW(p) · GW(p)

I looked at it in more detail and it appears that "prise" is a valid variant of "prize" only when using it as a synonym of "pry." So... that is a little confusing but now I know something new. :)

Dictionary.com

comment by CannibalSmith · 2010-01-31T10:00:16.289Z · LW(p) · GW(p)

This is the Internet. Nobody says "abstract" or "summary" on the Internet.

comment by Jake_Nebel · 2010-02-01T23:07:46.504Z · LW(p) · GW(p)

I think your definition of consequentialism (and deontology) is too broad because it makes some contractarian theories consequentialist. In "Equality," Nagel argues that the rightness of an act is determined by the acceptability of its consequences for those to whom they are most unacceptable. This is similar to Rawls's view that inequalities are morally permissible if they result in a net-benefit to the most disadvantaged members of society. These views are definitely deontological (and self-labeled as such), and since consequentialism and deontology are mutually exclusive and exhaustive, they are non-consequentialist. But since they determine the rightness of an act by its consequences, they would be considered forms of consequentialism under your definition; thus, your definition cannot be correct.

My counter-interpretation of consequentialism would be this: An act is morally permissible if it (or if its rule, under RC) yields results that are maximally good. (This comes from Vallentyne, "Consequentialism.") Deontology is the belief that something can be morally permissible even if it does not promote the best consequences. This may be due to options, special relationships, constraints, agent-relative values, or something altogether different. This definition makes the two theories mutually exclusive and exhaustive, and it puts contractarian views like Rawls's and Nagel's where they belong.

Replies from: Alicorn
comment by Alicorn · 2010-02-01T23:27:06.334Z · LW(p) · GW(p)

Rawls's view that inequalities are morally permissible if they result in a net-benefit to the most disadvantaged members of society.

I haven't made a close study of Rawls, but what I know inclines me towards an interpretation under which the difference principle is a prediction about what agents would agree to behind the veil of ignorance, and only via their agreeing upon it does it gain moral force.

consequentialism and deontology are mutually exclusive and exhaustive

I don't think they are necessarily either of these things. You can have considerable overlap - even doppelgangers blur the lines - and you're neglecting virtue ethics, which doesn't have a clear allegiance with either.

An act is morally permissible if it (or if its rule, under RC) yields results that are maximally good.

This neglects satisficing theories, and (depending on how strict you mean this to be) theories that talk about things other than acts or rules.

Deontology is the belief that something can be morally permissible even if it does not promote the best consequences.

Defining deontology in terms of consequentialism is something I'd like to avoid.

comment by loqi · 2010-01-30T22:05:34.533Z · LW(p) · GW(p)

If I understand you, you're claiming that the "justification" for a deontological principle need not be phrased in terms of consequences, and consequentialists fail to acknowledge this. But can't it always be re-phrased this way?

I prefer to inhabit worlds where I don't lie [deontological]. Telling a lie causes the world to contain a lying version of myself [definition of "cause"]. Therefore, lying is wrong [consequentialist interpretation of preference violation].

This transformation throws away the original justification, but from a consequentialist perspective that justification is only as relevant as an evolutionary explanation for current human preferences - the preference is what matters, its origin is incidental.

If you've ever run across the concept of a meta-circular interpreter, this seems akin to "bootstrapping" a new language using an existing one. The first interpreter you write is a complete throw-away, as its only purpose is to boost you up to another, self-sustaining level of abstraction.

Replies from: Alicorn
comment by Alicorn · 2010-01-30T22:15:18.359Z · LW(p) · GW(p)

Yes, you can doppelganger any deontic theory you want. And from the perspective of a consequentialist who doesn't care about annoying eir deontologist friends, the doppelganger is just as good, probably better. But it misses the deontologist's point.

Replies from: Wei_Dai, loqi
comment by Wei Dai (Wei_Dai) · 2010-01-31T03:05:07.087Z · LW(p) · GW(p)

And from the perspective of a consequentialist who doesn't care about annoying eir deontologist friends, the doppelganger is just as good, probably better.

As someone who has no deontologist friends, should I bother reading this post?

Replies from: Tyrrell_McAllister, wedrifid
comment by Tyrrell_McAllister · 2010-01-31T05:50:37.077Z · LW(p) · GW(p)

Deontologists are common. Someday, you may need to convince a deontologist on some matter where their deontology affects their thinking. If you are ignorant about an important factor in how their mind works, you will be less able to bring their mind to a state that you desire.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-01-31T07:07:09.098Z · LW(p) · GW(p)

I find this answer strange. There are lots of Christians, but we don't do posts on Christian theology in case we might find it useful to understand the mind of a Christian in order to convince them to do something.

Come on, why did Alicorn write a post on deontology without giving any explanation why we should learn about it? What am I missing here? If she (or anyone else) thinks that we should put some weight into deontology in our moral beliefs, why not just come out and say that?

Replies from: Alicorn, Tyrrell_McAllister, Jack
comment by Alicorn · 2010-01-31T15:14:31.732Z · LW(p) · GW(p)

Well, apart from the fact that it looked like people wanted me to write it, I'm personally irritated by the background assumption of consequentialism on this site, especially since it usually seems to come from incomprehension more than anything else. People phrasing things more neutrally, or at least knowing exactly what it is they're discarding, would be nice for me.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-01-31T19:55:51.079Z · LW(p) · GW(p)

Thanks. I suggest that you write a bit about the context and motivation for a post in the post itself. I skipped most of the cryonics threads and never saw the parts where people talked about deontology, so your post was pretty bewildering to me (and to many others, judging from the upvotes my questions got).

comment by Tyrrell_McAllister · 2010-01-31T08:11:02.753Z · LW(p) · GW(p)

I find this answer strange. There are lots of Christians, but we don't do posts on Christian theology in case we might find it useful to understand the mind of a Christian in order to convince them to do something.

How often do you need to convince a Christian to do something where their Christianity in particular is important? That is, how often does it matter that their worldview is Christian specifically, rather than some other mysticism? The more often you need to do that, the more helpful it is to understand the Christian mindset specifically. But I can easily imagine that you will never need to do that.

In contrast, it seems much more likely that you will someday need to convince a deontologist to do something that they perceive as somehow involving duty. You will be better able to do that if you better understand how their concept of duty works.

The purpose of this site is to refine the art of human rationality. That requires knowing how humans are, and many humans are deontologists.

If there were something specific to Christianity that made certain techniques of rationality work on it and only it, then time spent understanding Christianity would be time well-spent. It seems to me, though, that general remedies, such as avoiding mysterious answers to mysterious questions, do as well as anything targeted specifically at Christianity. So it happens that there is little to be gained from discussing the particulars of the Christian worldview.

Deontology, however, seems more like the illusion of free will* than like Christianity. Deontology has something to do with how a large number of people conceive of human action at a very basic level. Part of refining human rationality is improving how humans conceive of their actions. Since so many of them conceive of their action deontologically, we should understand how deontology works.


*. . . the illusion of the illusory kind of free will, that is.

comment by Jack · 2010-01-31T08:13:40.834Z · LW(p) · GW(p)
  1. I'm pretty sure I remember a couple of comments suggesting this topic.

  2. I can't speak for alicorn but I'll come out and say that I think the metaethics sequence is the weakest of the sequences and the widespread preference utilitarianism here has not been well justified. I'm not a deontologist but I think understanding the deontologist perspective will probably lead to less wrong thinking about ethics.

Replies from: Alicorn
comment by Alicorn · 2010-01-31T15:12:51.390Z · LW(p) · GW(p)

Yes, there was some enthusiasm about the topic here.

comment by wedrifid · 2010-01-31T07:39:22.577Z · LW(p) · GW(p)

As someone who has no deontologist friends, should I bother reading this post?

Yes. If (for example) some well-meaning fool makes a utilitarian-friendly AI, then there will be a super-intelligence at large that will be maximizing "aggregative total equal-consideration preference consequentialism" across all living humans. Being able to understand how deontologists think will better enable you to predict how their deontological beliefs will be resolved to preferences by the utilitarian AI. It may be that the best preference translation of a typical deontologist belief system turns out to be something that gives rise in aggregate to a dystopia. If that is the case, you should engage in the mass murder of deontologists before the run button is pressed on the AI.

I also note that as I wrote "you should engage in mass murder" I felt bad. This is despite the fact that the act has extremely good expected consequences in the hypothetical situation. Part of the 'bad feeling' I get for saying that is due to inbuilt deontological tendencies and part is because my intuitions anticipate negative social consequences for making such a statement due to the deontological ethical beliefs being more socially rewarded. Both of these are also reasons that reading the post and understanding the reasoning that deontologists use may turn out to be useful.

comment by loqi · 2010-01-30T23:04:51.943Z · LW(p) · GW(p)

I didn't think this was the sort of doppelgangering you were talking about. I'm not trying to ascribe additional consequentialist justifications, I'm just jettisoning the entire justification and calling a preference a spade. If the deontologist's point is that (some of) their preferences somehow possess extra justification, then they've already succeeded in annoying me with their meaningless moral grandstanding.

If Anton Chigurh delivers an eloquent defense of his personal philosophy, it won't change my opinion of his moral status. This doesn't seem related to my consequentialist outlook - if your position is that "murder is always wrong, all of the time", I would expect a similar reaction.

I feel like I'm still missing whatever it is that your post is trying to convey about the "deontologist's point". What is the point of deontological justification? The vertebrate/renate example doesn't do it for me, because there's a clear way to distinguish between the intensional and extensional definitions: postulate a creature with a spine and no kidneys. Such an organism seems at least conceivable. But I don't see what analogous recourse a deontologist has when attempting to make this distinction. It all just reduces to a chain of "because if"s that terminates with preferences. Even in the case of "X is only wrong if the agent performing X is aware it leads to outcome Y", a preference over the rituals of cognition employed by another agent is still a preference. It just seems like an awfully weird one.

Replies from: Alicorn, bogus
comment by Alicorn · 2010-01-30T23:14:25.551Z · LW(p) · GW(p)

I find your complaints a bit slippery to get ahold of, so I'm going to say some things that floated into my brain while I read your comment and see if that helps.

A preference is one sort of thing that a deontic theory can take into account when evaluating an action. For instance, one could hold that a moral right can be waived by its holder at eir option: this takes into account someone's preference. But it is only one type of thing that could be included.

There is no special reason to privilege preferences as an excellent place to stop when justifying a moral theory. They're unusually actionable, which makes theories that stop there more usable than theories that stop in some other places, but they are not magic. The fact that stopping in the places deontologists like to stop (I'm fond of "personhood", myself) does not come naturally to you does not make deontology an inherently bizarre system in comparison to consequentialism.

Replies from: loqi
comment by loqi · 2010-01-30T23:34:34.828Z · LW(p) · GW(p)

There is no special reason to privilege preferences as an excellent place to stop when justifying a moral theory.

But I don't see preference as justifying a moral theory, I see it as explaining a moral theory. I don't see how a moral theory could possibly be justified, the concept appears nonsensical to me. About the closest thing I can make sense of would be soundly demonstrating that one's theory doesn't contradict itself.

Put another way, I can imagine invalidating a moral theory by demonstrating the lack of a necessary condition (like consistency), but I can't imagine validating the theory by demonstrating the presence of a "sufficient" condition.

Replies from: Alicorn
comment by Alicorn · 2010-01-30T23:41:47.357Z · LW(p) · GW(p)

Perhaps you can tell me a little about your ethical beliefs so I know where to start when trying to explain?

Replies from: loqi
comment by loqi · 2010-01-31T07:53:17.000Z · LW(p) · GW(p)

No real framework to speak of. Hanson's efficiency criterion appeals to me as a sort of baseline morality. It's hard to imagine a better first-order attack on the problem than "everyone should get as much of what they want as possible", but of course one can imagine an endless stream of counter-examples and refinements. I presumably have most standard human "pull the child off the tracks" sorts of preferences.

I'm not sure I know what you're looking for. Unusual moral beliefs or ethical injunctions? I think lying is simultaneously

  • Despicable by default
  • Easily justified in the right context
  • Usually unpleasant to perform even when feeling justified in doing so, but occasionally quite enjoyable

if that helps.

Replies from: Alicorn
comment by Alicorn · 2010-01-31T14:52:42.939Z · LW(p) · GW(p)

I'm not sure what to do with that as stated at all, I'm afraid. But "as possible" seems like a load-bearing phrase in the sentence "everyone should get as much of what they want as possible", because this isn't literally possible for everyone simultaneously (two people could simultaneously desire the same thing, such that only one of them can get it), and you have to have some kind of mechanism to balance contradictory desires. What mechanism looks right to you?

Replies from: loqi
comment by loqi · 2010-02-01T07:18:52.076Z · LW(p) · GW(p)

Agreed, "as possible" is quite heavy, as is "everyone". But it at least slightly refines the question "what's right?" to "what's fair?". Which is still a huge question.

The quasi-literal answer to your question is: a Voronoi diagram. It looks right - I don't quite know what it means in practice, though.

In general, the further a situation is from my baseline intuitions concerning fairness and respect for apparent volition, the weaker my moral apprehension of it is. Life is full of trade-offs of wildly varying importance and difficulty. I'd be suspicious of any short account of them.

comment by bogus · 2010-01-31T14:15:49.056Z · LW(p) · GW(p)

I'm just jettisoning the entire justification and calling a preference a spade.

Good point. There is a lot of fuzziness around "preferences", "ethics", "aesthetics", "virtues" etc. Ultimately all of these seem to involve some axiological notion of "good", or "the good life", or "good character" or even "goods and services".

For instance, what should we make of the so-called "grim aesthetic"? Is grimness a virtue? Should it count as an ethic? If not, why not?

Replies from: loqi
comment by loqi · 2010-02-01T07:38:08.014Z · LW(p) · GW(p)

The second virtue is relinquishment:

Relinquish the emotion which rests upon a mistaken belief, and seek to feel fully that emotion which fits the facts.

I think the necessary and sufficient conditions for "grimness" are found there.

comment by k3nt · 2010-02-02T02:55:25.857Z · LW(p) · GW(p)

I very much appreciated reading this article.

As a general comment, I think that this forum falls a bit too much into groupthink. Certain things are assumed to be correct that have not been well argued. A presumption that utilitarianism of some sort or another is the only even vaguely rational ethical stance is definitely one of them.

Not that groupthink is unusual on the internet, or worse here than elsewhere! Au contraire. But it's always great to see less of it, and to see it challenged where it shows up.

Thanks again for this, Mr. Corn.

Replies from: Jack, mattnewport
comment by Jack · 2010-02-02T02:59:02.058Z · LW(p) · GW(p)

That's Ms. Corn.

Replies from: Alicorn
comment by Alicorn · 2010-02-02T03:02:28.765Z · LW(p) · GW(p)

Please, call me Ali. Ms. Corn is my mother.

...No, seriously, folks, it's a word, abbreviating it doesn't make sense. "Alicorn".

Replies from: k3nt, None, MrHen, MrHen
comment by k3nt · 2010-02-02T03:09:55.398Z · LW(p) · GW(p)

I was making a silly foolish joke and didn't even think about how obviously I would be opening myself up to charges (by myself if not others) of implicit sexism. Sigh. I'm so busted.

comment by [deleted] · 2013-02-09T19:37:46.446Z · LW(p) · GW(p)

No, seriously, folks, it's a word, abbreviating it doesn't make sense.

What else does one abbreviate?

Replies from: Alicorn
comment by Alicorn · 2013-02-09T19:41:01.647Z · LW(p) · GW(p)

Phrases. Names. Long words.

comment by MrHen · 2010-02-02T04:43:00.327Z · LW(p) · GW(p)

I have a similar reaction when people call me Mr. Hen. My name isn't actually Hen. I just thought it was a funny oxymoron and periods aren't normally exacted in usernames.

And I meant "accepted," not "exacted." I think I need some sleep.

comment by MrHen · 2010-02-02T03:38:29.164Z · LW(p) · GW(p)

Shucks.

(Get it? Eh? Eh? Aw... nevermind. I sorry.)

comment by mattnewport · 2010-02-02T02:57:05.267Z · LW(p) · GW(p)

We're not all utilitarians. It does seem to be a bafflingly popular view here, but there are dissenting voices.

Replies from: Jack
comment by Jack · 2010-02-02T03:01:51.995Z · LW(p) · GW(p)

Where do you stand? (If you can explain without a full-page essay.) I'm something of a utilitarian skeptic as well... I'd like to see if the rest of us have overlapping views.

Replies from: Morendil, mattnewport
comment by Morendil · 2010-02-02T08:43:08.253Z · LW(p) · GW(p)

My own ethical position is easy to state: confused. ;)

There should be a post, intended for people about to embark on the topic of ethics and metaethics, providing guidance on figuring out what their current intuitions are and where those intuitions position them on the map of the standard debates.

My (post-school) readings on the topic have included Singer's Practical Ethics and Rawls's A Theory of Justice. I was definitely more impressed and influenced by the latter. If pressed, I would call myself a contractarian. (Being French, I had early encounters with Rousseau, but I don't remember those with any precision.)

I'm skeptical of the way "utility function" is often used, as a lily-gilding equivalent of "what I want". I'm skeptical that interpersonal comparisons of utility have any value, such that "my utility function", assuming there is such a thing, can be meaningfully aggregated with "your utility function". Thus I'm skeptical that utility provides a useful guide to moral decisions.
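A minimal toy illustration of the aggregation worry (an assumed framing, just to make the point concrete): utility functions are only defined up to positive rescaling, so rescaling one person's numbers can flip the "sum-maximizing" choice without changing anyone's actual preferences.

```python
# Sketch: why naively summing utilities across people is shaky.
# Assumed toy setup: two options, two people, made-up utility numbers.
options = ["x", "y"]
u_me  = {"x": 1.0, "y": 0.0}   # I prefer x to y
u_you = {"x": 0.0, "y": 0.6}   # you prefer y to x

def aggregate_best(scale_you):
    # Rescaling u_you leaves your preference ordering untouched,
    # but changes which option maximizes the naive sum.
    total = {o: u_me[o] + scale_you * u_you[o] for o in options}
    return max(total, key=total.get)

print(aggregate_best(1.0))    # 'x'
print(aggregate_best(10.0))   # 'y' -- same preferences, different "sum-best" option
```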

comment by mattnewport · 2010-02-03T19:44:28.472Z · LW(p) · GW(p)

I'll try to summarize, but my position isn't fully worked out, so this is just a rough outline.

I think it's important to distinguish the descriptive and prescriptive/normative elements of any moral or ethical theory. That distinction sometimes seems to get lost in discussions here.

Descriptively, I think that human morality actually is a system of biologically and culturally evolved rules, principles, and dispositions that have tended to lead to reproductive success. The details of those rules are largely an empirical question, but most people have some reasonable insight into them based on being human and living in human society.

There is a naive view of evolution that fails to understand how behaviour that we would generally call altruistic, or that is not obviously self-interested at first glance, can be explained in such a framework. Hopefully most people here don't explicitly hold such a view, but it seems that remnants of it still infect the moral thinking of people who 'know better'. I think that if you look at human society from a game-theoretic / evolutionarily stable strategy perspective, it becomes fairly clear that most of what we call altruism or non-self-interested behaviour makes good sense for self-interested agents. I don't believe there is any mystery about such behaviour that needs to be explained by invoking some view of morality or ethics that is not ultimately rooted in evolutionary success.

Prescriptively, I think that people should behave so as to maximize their own self-interest, where self-interest is understood in the broad sense that can account for altruism, self-sacrifice, and other elements that a naive interpretation of self-interest would miss. In this sense I am something of an ethical egoist. That's a simple basis for morality, but it does not necessarily produce a simple answer to any particular moral question.

On top of this basic principle of morality, I have an ethical framework that I believe would tend to produce good results for everyone if we could all agree to adopt it. This is partly an empirical question, and I am open to revising it in light of new evidence. I'm basically a libertarian and agree with most of the reasoning of natural-rights libertarians, but largely because I think the non-aggression principle is the simplest self-consistent basis for a workable Nash equilibrium {1} for organizing human society, and not because I think natural rights are some kind of inviolable moral fact. I do have a personal, somewhat-deontological moral preference for freedom that arguably goes beyond what I can justify by an argument based on game theory. I have seen some research indicating that people's political and ethical beliefs correlate with personality types, and that placing a high value on personal freedom is connected with 'openness to experience', which may help explain my own personal ethical leanings.

To the extent that I view ethics as an empirical issue, I could also be called a consequentialist libertarian. I think the evidence shows that societies that follow more libertarian principles tend to be more prosperous, and I think prosperity is a very good thing (it seems odd to have to qualify that, but some people appear not to think so).

My biggest issues with utilitarianism, as I generally understand it, are the emphasis on some kind of global utility function that defines what is moral (incompatible with my belief that people should act in their own best interests, and, I believe, unrealistic and undesirable in practice) and the general lack of recognition of the computational intractability of the problem. Together these objections mean I think that utilitarianism is neither desirable nor possible, which is a somewhat damning indictment, I suppose...

{1} I use Nash equilibrium in a non-technical, allegorical sense - proving a true Nash equilibrium for human society is almost certainly a computationally intractable problem.

comment by pjeby · 2010-01-31T06:35:19.201Z · LW(p) · GW(p)

And if you take one of these deontic reasons, and mess with it a bit, you can be wrong in a new and exciting way: "because the voices in my head told me not to, and if I disobey the voices, they will blow up Santa's workshop, which would be bad" has crossed into consequentialist territory. (Nota bene: Adding another bit - say, "and I promised the reindeer I wouldn't do anything that would get them blown up" - can push this flight of fancy back into deontology again. And then you can put it back under consequentialism again: "and if I break my promise, the vengeful spirits of the reindeer will haunt me, and that would make me miserable.")

So, if I understand this correctly, a deontologist is someone who insists that their cached applause/boo lights have some mysteriously-derived meaning, independent of how their brains physically learned or otherwise came to produce those judgments. This seems hopelessly deluded to me, though, as it requires one to assume that non-physical things exist.

Physical human brains, after all, are inherently consequentialist. They cache the goodness or badness of our acquired (as opposed to inbuilt/derived) "moral" rules. At the bottom of our moral reasoning, there is always an element of feeling, and feelings are inherently consequentialist predictions.

Because, except for our inbuilt bits of moral reasoning, we learn those feelings by experiencing consequences, like "Breaking promises is bad because mommy promised to take me to Disneyland and it made me sad when we didn't go," or "Thinking of my own desires is bad because when I asked daddy for something he yelled at me for being selfish." (These are approximate examples of stuff I've helped people dig up and discard from their brains, btw.)

So, we can build all sorts of sophisticated reasoning on top of these basic feelings, but they're still at the physical root of the process. Or, to put it another way, every deontological claim (AFAICT) ultimately reduces to "I would feel bad if I broke this rule."

So, I don't see how anyone can claim to be a deontologist and a rationalist at the same time, unless they are merely claiming to be ignorant of the source and contents of their brain's cached consequence information.

That, of course, is the normal state of most people; it takes a fair amount of practice and patience to dig up the information behind the cached feelings, and hardly anyone ever bothers.

One reason being, of course, that if they did, they wouldn't be able to get the good feelings cached in association with the idea that you should do the right thing regardless of consequences... which is itself a moral rule that must be learned by experience!

(IOW, if you believe that mysterious = good or ignorance = innocence where moral rules are concerned, then it's very likely you will apply these cached feelings to deontology itself.)

Of course, to be fair, there are symmetrical moral rules that lead other people to feel that consequentialism is good, too!

My point, however, is not that either one is good or bad, just that only one of them is actually implemented in human brains. The other (AFAICT) can only be pretended to via ignorance.

comment by gmcrews · 2010-01-31T13:44:16.002Z · LW(p) · GW(p)

How about a post on understanding consequentialism for us deontologists? :-)

Wikipedia defines deontological ethics as an "approach to ethics that judges the morality of an action based on the action's adherence to a rule or rules."

This definition implies that the scientific method is a deontological ethic. It's called the "scientific method," after all, not the "scientific result."

The scientific method is rule based. Therefore, if there is not a significant overlap between the consequentialist and deontologist approaches, then consequentialism must be non-scientific.

And if a consequentialist is non-scientific, then how can she reliably predict consequences and thus know what is the ethical or moral thing to do?

Who is the "real" doppelganger?

Replies from: wedrifid, RobinZ
comment by wedrifid · 2010-01-31T16:02:34.246Z · LW(p) · GW(p)

The scientific method is rule based. Therefore, if there is not a significant overlap between the consequentialist and deontologist approaches, then consequentialism must be non-scientific.

And if a consequentialist is non-scientific, then how can she reliably predict consequences and thus know what is the ethical or moral thing to do?

Before anyone replies to this, could you please confirm whether you are actually trying to make a serious point or whether you are just trying to be facetious? You are conflating issues all over the place in ways that don't really seem to make sense.

comment by RobinZ · 2010-01-31T14:25:27.417Z · LW(p) · GW(p)

How about a post on understanding consequentialism for us deontologists? :-)

Most of the vocal population here are consequentialists - if there proves to be widespread interest, such a post may appear at a later date.