Are Deontological Moral Judgments Rationalizations?

post by lukeprog · 2011-08-16T16:40:53.568Z · LW · GW · Legacy · 170 comments

Contents

  Utilitarian and Deontological Processes
  Cognition and Emotion
  Emotion and Deontological Judgments
  Summing Up
  Notes
  References

In 2007, Chris Matthews of Hardball interviewed David O'Steen, executive director of a pro-life organization. Matthews asked:

I have always wondered something about the pro-life movement. If you believe that killing [a fetus] is murder, why don't you bring murder charges or seek a murder penalty against a woman who has an abortion? Why do you let her off, if you really believe it's murder?1

O'Steen replied that "we have never sought criminal penalties against a woman," which isn't an answer but a restatement of the fact that prompted the question. When pressed, he added that we don't know "how she's been forced into this." When pressed again, O'Steen abandoned these responses and tried to give a consequentialist answer. He claimed that implementing "civil penalties" and taking away the "financial incentives" of abortion doctors would more successfully "protect unborn children."

But this still doesn't answer the question. If you believe that killing a fetus is murder, then a woman seeking an abortion pays a doctor to commit murder. Why don't abortion opponents want to change the laws so that abortion is considered murder and a woman who has an abortion can be charged with paying a doctor to commit murder? Psychologist Robert Kurzban cites this as a classic case of moral rationalization.2

Pro-life demonstrators in Illinois were asked a similar question: "If [abortion] was illegal, should there be a penalty for the women who get abortions illegally?" None of them (on the video) thought that women who had illegal abortions should be punished as murderers, an ample demonstration of moral rationalization. And I'm sure we can all think of examples where it looks like someone has settled on an intuitive moral judgment and then invented rationalizations later.3

More controversially, some have suggested that rule-based deontological moral judgments generally tend to be rationalizations. Perhaps we can even dissolve the debate between deontological intuitions and utilitarian intuitions if we can map the cognitive algorithms that produce them.

Long-time deontologists and utilitarians may already be up in arms to fight another war between Blues and Greens, but these are empirical questions. What do the scientific studies suggest?

 

Utilitarian and Deontological Processes

A runaway trolley is about to run over and kill five people, but you can save them by hitting a switch that will put the trolley on a side track where it will only kill one person. Do you throw the switch? When confronted with this switch dilemma, most people say it is morally good to divert the trolley,4 thereby achieving the utilitarian 'greater good'.

Now, consider the footbridge dilemma. Again, a runaway trolley threatens five people, and the only way to save them is to push a large person off a footbridge onto the tracks, which will stop the trolley but kill the person you push. (Your body is too small to stop the trolley.) Do you push the large person off the bridge? Here, most people say it's wrong to trade one life for five, allowing a deontological commitment to individual rights to trump utilitarian considerations of the greater good.

Researchers presented subjects with a variety of 'impersonal' dilemmas (including the switch dilemma) and 'up-close-and-personal' dilemmas (including the footbridge dilemma). Personal dilemmas preferentially engaged brain areas associated with emotion. Impersonal dilemmas preferentially engaged the regions of the brain associated with working memory and cognitive control.5

This suggested a dual-process theory of moral judgment, according to which the footbridge dilemma elicits a conflict between emotional intuition ("you must not push people off bridges!") and utilitarian calculation ("pushing the person off the bridge will result in the fewest deaths"). In the footbridge case, emotional intuition wins out in most people.

But now, consider the crying baby dilemma from the final episode of M*A*S*H:

It's wartime. You and your fellow villagers are hiding from nearby enemy soldiers in a basement. Your baby starts to cry, and you cover your baby's mouth to block the sound. If you remove your hand, your baby will cry loudly, and the soldiers will hear. They will find you... and they will kill all of you. If you do not remove your hand, your baby will smother to death. Is it morally acceptable to smother your baby to death in order to save yourself and the other villagers?6

Here, people take a long time to answer, and they show no consensus in their answers. If the dual-process theory of moral judgment is correct, then people considering the crying baby dilemma should exhibit increased activity in the ACC (a region associated with response conflict), and in regions associated with cognitive control (for overriding a potent emotional response with utilitarian calculation). Also, those who eventually choose the characteristically utilitarian answer (save the most lives) over the characteristically deontological answer (don't kill the baby) should exhibit comparatively more activity in brain regions associated with working memory and cognitive control. All three predictions turn out to be true.7

Moreover, patients with frontotemporal dementia or prefrontal lesions, conditions that cause "emotional blunting," are disproportionately likely to approve of utilitarian action in the footbridge dilemma,8 and cognitive load manipulations that keep working memory occupied slow down utilitarian judgments but not deontological judgments.9

Studies of individual differences also seem to support the dual-process theory. Individuals who (1) are high in "need for cognition" and low in "faith in intuition", (2) score well on the Cognitive Reflection Test, or (3) have unusually high working memory capacity all give more utilitarian judgments.10

This leads us to Joshua Greene's bold claim:

...deontological judgments tend to be driven by emotional responses, and... deontological philosophy, rather than being grounded in moral reasoning, is to a large extent an exercise in moral rationalization. This is in contrast to consequentialism, which, I will argue, arises from rather different psychological processes, ones that are more 'cognitive,' and more likely to involve genuine moral reasoning...

[Psychologically,] deontological moral philosophy really is... an attempt to produce rational justifications for emotionally driven moral judgments, and not an attempt to reach moral conclusions on the basis of moral reasoning.11

 

Cognition and Emotion

Greene explains the difference between 'cognitive' and 'emotional' processes in the brain (though both involve information processing, and so are 'cognitive' in a broader sense):

...'cognitive' processes are especially important for reasoning, planning, manipulating information in working memory, controlling impulses, and 'higher executive functions' more generally. Moreover, these functions tend to be associated with certain parts of the brain, primarily the dorsolateral surfaces of the prefrontal cortex and parietal lobes... Emotion, in contrast, tends to be associated with other parts of the brain, such as the amygdala and the medial surfaces of the frontal and parietal lobes... And while the term 'emotion' can refer to stable states such as moods, here we will primarily be concerned with emotions subserved by processes that in addition to being valenced, are quick and automatic, though not necessarily conscious.

Since we are concerned with two kinds of moral judgment (deontological and consequentialist) and two kinds of neurological process (cognitive and emotional), we have four empirical possibilities:

First, it could be that both kinds of moral judgment are generally 'cognitive', as Kohlberg's theories suggest (Kohlberg, 1971). At the other extreme, it could be that both kinds of moral judgment are primarily emotional, as Haidt's view suggests (Haidt, 2001). Then there is the historical stereotype, according to which consequentialism is more emotional (emerging from the 'sentimentalist' tradition of David Hume (1740) and Adam Smith (1759)) while deontology is more 'cognitive' [including the Kantian 'rationalist' tradition: see Kant (1785)]. Finally, there is the view for which I will argue, that deontology is more emotionally driven while consequentialism is more 'cognitive.'

We have already seen the neuroscientific evidence in favor of Greene's view. Now, let us turn to further evidence from the work of Jon Haidt.

 

Emotion and Deontological Judgments

Haidt & colleagues (1993) presented subjects with a sequence of harmless actions, for example:

  1. A son promises his dying mother that he will visit her grave every day after she has died, but then doesn’t because he is busy.
  2. A woman uses an old American flag to clean the bathroom.
  3. A family eats its dog after it has been killed accidentally by a car.
  4. A brother and sister kiss on the lips.
  5. A man masturbates using a dead chicken before cooking and eating it.

For each action, subjects were asked questions like: Is this action wrong? Why? Does it hurt anyone? If someone did this, would it bother you? Greene summarizes the results:

When people say that such actions are wrong, why do they say so? One hypothesis is that these actions are perceived as harmful, whether or not they really are... Kissing siblings could cause themselves psychological damage. Masturbating with a chicken could spread disease, etc. If this hypothesis is correct, then we would expect people’s answers to the question "Does this action hurt anyone?" to correlate with their degree of moral condemnation... Alternatively, if emotions drive moral condemnation in these cases, then we would expect people’s answers to the question "If you saw this, would it bother you?" to better predict their answers to the moral questions posed.

If you're following along, it may not surprise you that emotions seemed to be driving the deontological condemnation of harmless actions. Moreover, both education and adulthood were correlated with more consequentialist judgments. (Cognitive control of basic emotional reactions is something that develops during adolescence.12) Greene reminds us:

These... findings make sense in light of the model of moral judgment we have been developing, according to which intuitive emotional responses drive prepotent moral intuitions while 'cognitive' control processes sometimes rein them in.

But there is more direct evidence of the link between emotion and the deontological condemnation of harmless actions.

Wheatley & Haidt (2005) gathered hypnotizable subjects and gave some of them a hypnotic suggestion to feel disgust upon reading the word 'often', while giving others a hypnotic suggestion to feel disgust upon reading the word 'take'. The researchers then showed these subjects a variety of scenarios, some of them involving no harm. (For example, two second cousins have a relationship in which they "take weekend trips to romantic hotels" or else "often go on weekend trips to romantic hotels".) As expected, subjects who received the wordings they had been primed to feel disgust toward judged the couple's actions as more morally condemnable than other subjects did.

In a second experiment, Wheatley and Haidt used the same technique and had subjects respond to a scenario in which a person did nothing remotely wrong: a student "often picks" or "tries to take up" broad topics of discussion at meetings. Still, many subjects who were given the matching hypnotic suggestion rated the student's actions as morally wrong. When asked why, they invented rationalizations like "It just seems like he’s up to something" or "It just seems so weird and disgusting" or "I don’t know [why it’s wrong], it just is."

In other studies, researchers implemented a disgust condition by placing some subjects at a dirty desk or in the presence of fart spray. As before, those in the disgust condition were more likely to rate harmless actions as morally wrong than other subjects were.13

Finally, consider that the dual-process theory of moral judgment predicts that deontological judgments will be quicker than utilitarian ones, because deontological judgments use emotional and largely unconscious brain modules while utilitarian judgments require slow, conscious calculation. Suter & Hertwig (2011) presented subjects with a variety of moral dilemmas and prodded some to give their judgments quickly while letting others take their time to deliberate thoroughly. As predicted, faster responses were associated with more deontological judgments.

 

Summing Up

We are a species prone to emotional moral judgment, and to rationalization ('confabulation'). And, Greene writes,

What should we expect from creatures who exhibit social and moral behavior that is driven largely by intuitive emotional responses and who are prone to rationalization of their behaviors? The answer, I believe, is deontological moral philosophy...

Whether or not we can ultimately justify pushing the man off the footbridge, it will always feel wrong. And what better way to express that feeling of non-negotiable absolute wrongness than via the most central of deontological concepts, the concept of a right: You can’t push him to his death because that would be a violation of his rights.

Deontology, then, is a kind of moral confabulation. We have strong feelings that tell us in clear and uncertain terms that some things simply cannot be done and that other things simply must be done. But it is not obvious how to make sense of these feelings, and so we, with the help of some especially creative philosophers, make up a rationally appealing story: There are these things called 'rights' which people have, and when someone has a right you can’t do anything that would take it away. It doesn’t matter if the guy on the footbridge is toward the end of his natural life, or if there are seven people on the tracks below instead of five. If the man has a right, then the man has a right. As John Rawls... famously said, "Each person possesses an inviolability founded on justice that even the welfare of society as a whole cannot override"... These are applause lines because they make emotional sense.

Of course, utilitarian moral judgment is not emotionless. Emotion is probably what leads us to label harm as a 'bad' thing, for example. But utilitarian moral judgment is, as we've seen, particularly demanding of 'cognitive' processes: calculation, the weighing of competing concerns, the adding and averaging of value, and so on. Utilitarian moral judgment uses the same meso-limbic regions that track a stimulus' reward magnitude, reward probability, and expected value.14

This does not prove the case that deontological moral judgments are usually rationalizations. But many lines of converging evidence make this a decent hypothesis. And now we can draw our neural map:15

[Figure omitted: neural map of moral judgment, reproduced from Greene (2009).]

And up until March 18th of this year, Greene had a pretty compelling case for his position that deontological judgments are generally just rationalizations.

And then, Guy Kahane et al. (2011) threw Greene's theory into doubt by testing separately for the content (deontological vs. utilitarian) and the intuitiveness (intuitive vs. not-intuitive) of moral judgments. The authors summarize their results:

Previous neuroimaging studies reported that utilitarian judgments in dilemmas involving extreme harm were associated with activation in the DLPFC and parietal lobe (Greene et al., 2004). This finding has been taken as evidence that utilitarian judgment is generally driven by controlled processing (Greene, 2008). The behavioural and neural data we obtained suggest instead that differences between utilitarian and deontological judgments in dilemmas involving extreme harm largely reflect differences in intuitiveness rather than in content.

...When we controlled for content, these analyses showed considerable overlap for intuitiveness. In contrast, when we controlled for intuitiveness, only little, if any, overlap was found for content. Our results thus speak against the influential interpretation of previous neuroimaging studies as supporting a general association between deontological judgment and automatic processing, and between utilitarian judgment and controlled processing.

[This evidence suggests...] that behavioural and neural differences in responses to such dilemmas are largely due to differences in intuitiveness, not to general differences between utilitarian and deontological judgment.

So we'll have to wait for more studies to unravel the mystery of whether deontological moral judgments are generally rationalizations.

By email, Greene told me he suspected Kahane's 'alternative theory' wasn't much of an alternative to what he (Greene) was proposing in the first place. In his paper, Greene discussed the passage where Kant says it's wrong to lie to prevent a madman from killing someone, and cited this as an example of a case in which a deontological judgment might be more controlled, while the utilitarian judgment is more automatic. Greene's central claim is that when there's a conflict between rights and duties on the one hand, and promoting the greater good on the other, it's typically controlled cognition on the utilitarian side and emotional intuition on the deontological side.

Update: Greene's full reply to Kahane et al. is now available.

But even if Greene's theory is right, humans may still need to use deontological rules because we run on corrupted hardware.

 
Notes

1 Hardball for November 13, 2007. Here is the transcript.

2 Kurzban (2011), p. 193.

3 Also see Jon Haidt's unpublished manuscript on moral dumbfounding, and Hirstein (2005).

4 Petrinovich et al. (1993); Petrinovich & O’Neill (1996).

5 Greene et al. (2001, 2004).

6 Greene (2009).

7 Greene et al. (2004).

8 Mendez et al. (2005); Koenigs et al. (2007); Ciaramelli et al. (2007).

9 Greene et al. (2008).

10 Bartels (2008); Hardman (2008); Moore et al. (2008).

11 The rest of the Joshua Greene quotes from this article are from Greene (2007).

12 Anderson et al. (2001); Paus et al. (1999); Steinberg & Scott (2003).

13 Schnall et al. (2004); Baron & Thomley (1994).

14 See Cushman et al. (2010).

15 From Greene (2009).

 

References

Anderson, Anderson, Northam, Jacobs, & Catroppa (2001). Development of executive functions through late childhood and adolescence in an Australian sample. Developmental Neuropsychology, 20: 385-406.

Baron & Thomley (1994). A Whiff of Reality: Positive Affect as a Potential Mediator of the Effects of Pleasant Fragrances on Task Performance and Helping. Environment and Behavior, 26: 766-784.

Bartels (2008). Principled moral sentiment and the flexibility of moral judgment and decision making. Cognition, 108: 381-417.

Ciaramelli, Muccioli, Ladavas, & di Pellegrino (2007). Selective deficit in personal moral judgment following damage to ventromedial prefrontal cortex. Social Cognitive and Affective Neuroscience, 2: 84-92.

Cushman, Young, & Greene (2010). Multi-system moral psychology. In Doris (ed.), The Moral Psychology Handbook (pp. 47-71). Oxford University Press.

Greene, Sommerville, Nystrom, Darley, & Cohen (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293: 2105-2108.

Greene, Nystrom, Engell, Darley, & Cohen (2004). The neural bases of cognitive conflict and control in moral judgment. Neuron, 44: 389-400.

Greene (2007). The secret joke of Kant's soul. In Sinnott-Armstrong (ed.), Moral Psychology Vol. 3: The Neuroscience of Morality (pp. 35-79). MIT Press.

Greene, Morelli, Lowenberg, Nystrom, & Cohen (2008). Cognitive load selectively interferes with utilitarian moral judgment. Cognition, 107: 1144-1154.

Greene (2009). The cognitive neuroscience of moral judgment. In Gazzaniga (ed.), The Cognitive Neurosciences, Fourth Edition (pp. 987–999). MIT Press.

Haidt, Koller, & Dias (1993). Affect, culture, and morality, or is it wrong to eat your dog? Journal of Personality and Social Psychology, 65: 613-628.

Haidt (2001). The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108: 814-834.

Hardman (2008). Moral dilemmas: Who makes utilitarian choices? Unpublished manuscript.

Hirstein (2005). Brain Fiction: Self-Deception and the Riddle of Confabulation. MIT Press.

Hume (1740). A Treatise of Human Nature.

Kahane, Wiech, Shackel, Farias, Savulescu, & Tracey (2011). The neural basis of intuitive and counterintuitive moral judgment. Social Cognitive & Affective Neuroscience.

Kant (1785). Groundwork of the Metaphysics of Morals.

Koenigs, Young, Cushman, Adolphs, Tranel, Damasio, & Hauser (2007). Damage to the prefrontal cortex increases utilitarian moral judgements. Nature, 446: 908–911.

Kohlberg (1971). From is to ought: How to commit the naturalistic fallacy and get away with it in the study of moral development. In Mischel (ed.), Cognitive development and epistemology (pp. 151–235). Academic Press.

Kurzban (2011). Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind. Princeton University Press.

Mendez, Anderson, & Shapira (2005). An investigation of moral judgment in fronto-temporal dementia. Cognitive and Behavioral Neurology, 18: 193–197.

Moore, Clark, & Kane (2008). Who shalt not kill?: Individual differences in working memory capacity, executive control, and moral judgment. Psychological Science, 19: 549-557.

Paus, Zijdenbos, Worsley, Collins, Blumenthal, Giedd, Rapoport, & Evans (1999). Structural maturation of neural pathways in children and adolescents: In vivo study. Science, 283: 1908-1911.

Petrinovich, O'Neill, & Jorgensen (1993). An empirical study of moral intuitions: Toward an evolutionary ethics. Journal of Personality and Social Psychology, 64: 467-478.

Petrinovich & O’Neill (1996). Influence of wording and framing effects on moral intuitions. Ethology and Sociobiology, 17: 145-171.

Schnall, Haidt, & Clore (2004). Irrelevant disgust makes moral judgment more severe, for those who listen to their bodies. Unpublished manuscript.

Smith (1759). The Theory of Moral Sentiments.

Steinberg & Scott (2003). Less guilty by reason of adolescence: Developmental immaturity, diminished responsibility, and the juvenile death penalty. American Psychologist, 58: 1009-1018.

Suter & Hertwig (2011). Time and moral judgment. Cognition, 119: 454-458.

Valdesolo & DeSteno (2006). Manipulations of emotional context shape moral judgment. Psychological Science, 17: 476-477.

Wheatley & Haidt (2005). Hypnotically induced disgust makes moral judgments more severe. Psychological Science, 16: 780-784.

170 comments

Comments sorted by top scores.

comment by [deleted] · 2011-08-17T01:14:46.964Z · LW(p) · GW(p)

I used to think I was a very firm deontologist, but that was mainly because I didn't want ethical rules to be bent willy-nilly to maximize something simple like "number of lives saved." I didn't, for example, want torture to be legal. I wanted to live in a world with "rights" -- that is, ethical rules that ought not to be broken even when the circumstances change, for all possible circumstances with non-negligible probability. You don't want to live in a world where people are constantly reconsidering "Hm, is it worth it at this moment to not steal Sarah's property?" You want to live in a world where people understand that stealing is wrong and that's that. You want some rigidity.

I think a lot of self-identified deontologists think along these lines. They associate utilitarianism with "the greatest good for the greatest number," and then imagine things like "it is for the good of this great Nation that you be drafted to dig ditches this year" and they shudder.

That shudder isn't necessarily a "confabulation." The reason you shudder at the thought of a moral rule to "maximize utility" is that there is no definition of utility or "human value" simple enough to state in one sentence that wouldn't result in a hell-world if you systematically maximized it. Human value is complicated, as this site has been at pains to tell us. Pick something (like "number of lives saved") and optimize for that, and you won't like the results.

People come up with deontological constraints, I think, to deal with the fact that "maximizing utility," when you visualize it, looks very, very bad. Modeling utilitarianism to low precision looks bad. Adding more subtlety to the model might not be so bad: adding in terms like sympathy, respect for life, and so on as positive goods, so that throwing someone off a trolley is not a clear win. Or you could model human value by appealing to rights. Either way you haven't really put your finger on what you mean by "moral." If we could define morality rigorously, life would be easy, and it isn't.

Replies from: MBlume, fubarobfusco, Wei_Dai, lionhearted, torekp, Dreaded_Anomaly
comment by MBlume · 2011-08-17T06:07:11.248Z · LW(p) · GW(p)

This sounds like two-tier consequentialism -- "as it happens, when you take second-, third-, and fourth-order consequences into account, the utility-maximizing course looks a hell of a lot like respecting some set of inherent rights of individuals"

comment by fubarobfusco · 2011-08-17T06:00:53.477Z · LW(p) · GW(p)

I've sometimes thought of deontological rules as something like a sanity check on utilitarian reasoning.

If, as you are reasoning your way to maximum utility, you come up with a result that ends, "... therefore, I should kill a lot of innocent people," or for that matter "... therefore, I'm justified in scamming people out of their life savings to get the resources I need," the role of deontological rules against murder or cheating is to make you at least stop and think about it really hard. And, almost certainly, find a hole in your reasoning.

It is imaginable — I wouldn't say likely — that there are "universal moral laws" for human beings, which take the following form: "If you come to the conclusion 'Utility is maximized if I murder these innocent people', then it is more likely that your human brain has glitched and failed to reason correctly, than that your conclusion is correct." In other words, the probability of a positive-utility outcome from murder is less than the probability of erroneous reasoning leading to the belief in that outcome.

A consequence of this is that the better predictor you are, the more things can be moral for you to do if you conclude they maximize utility. It is imaginable that no human can with <50% probability of error arrive at the conclusion "I should push that fat guy in front of the trolley", but that some superhuman predictor could.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-08-17T07:03:59.782Z · LW(p) · GW(p)

It is imaginable — I wouldn't say likely — that there are "universal moral laws" for human beings, which take the following form: "If you come to the conclusion 'Utility is maximized if I murder these innocent people', then it is more likely that your human brain has glitched and failed to reason correctly, than that your conclusion is correct." In other words, the probability of a positive-utility outcome from murder is less than the probability of erroneous reasoning leading to the belief in that outcome.

Obligatory link to relevant sequence.

comment by Wei Dai (Wei_Dai) · 2011-08-20T07:16:25.695Z · LW(p) · GW(p)

They associate utilitarianism with "the greatest good for the greatest number," and then imagine things like "it is for the good of this great Nation that you be drafted to dig ditches this year" and they shudder.

That shudder isn't necessarily a "confabulation."

I don't think Luke or Greene is saying that the shudder is confabulation. The shudder is the intuitive emotional response. What they're calling "confabulation" is making up a deontological rule, such as "everyone has a right not to be drafted for anything except defense", or something like that, to explain/justify the shudder.

Replies from: Eugine_Nier, lukeprog
comment by Eugine_Nier · 2011-08-21T22:45:43.412Z · LW(p) · GW(p)

What they're calling "confabulation" is making up a deontological rule, such as "everyone has a right not to be drafted for anything except defense", or something like that, to explain/justify the shudder.

If you don't make a deontological rule and insist that it have no exceptions, in any particular case you will be tempted to find an excuse why it doesn't apply. As Eliezer said in his post The Ends Don't Justify the Means:

And so we have the bizarre-seeming rule: "For the good of the tribe, do not cheat to seize power even when it would provide a net benefit to the tribe."

Indeed it may be wiser to phrase it this way: If you just say, "when it seems like it would provide a net benefit to the tribe", then you get people who say, "But it doesn't just seem that way - it would provide a net benefit to the tribe if I were in charge."

comment by lukeprog · 2011-08-20T17:21:08.919Z · LW(p) · GW(p)

Correct.

comment by lionhearted (Sebastian Marshall) (lionhearted) · 2011-08-18T14:52:41.231Z · LW(p) · GW(p)

Very good reply here. I used to firmly identify as a deontologist for that reason - I actually wrote a post rejecting the trolley game for ignoring secondary effects. It got a very mixed response, but I stand strongly by one of the points on there -

... everything creates secondary effects. If putting people involuntarily in harm's way to save others was an acceptable result, suddenly we'd all have to be really careful in any emergency. Imagine living in a world where anyone would be comfortable ending your life to save other people nearby - you'd have to not only be constantly checking your surroundings, but also constantly on guard against do-gooders willing to push you onto the tracks.

So I used to think I was a deontologist - "no, I wouldn't push someone onto the tracks to save others, because it's not a good idea to live in a world where people are comfortable ending each other's lives when they deem it for the greater good."

However, after a conversation with a very intelligent person with lots of training in philosophy, I was convinced I'm actually a "rules-based consequentialist" - that I want rules and protocols that produce a general set of consistently good effects rather than running the math every time a trolley is out of control (or a plane is going to crash, or a suspect you're really darn sure did it is in custody but you've got flimsy evidence...)

comment by torekp · 2011-08-21T00:15:40.594Z · LW(p) · GW(p)

I like this reply, but I feel it doesn't take the next logical step. What kind of considerations could make utilitarianism correct, given that, as you suggest, a good society needs some firmer rules?

So, just suppose for a moment, a bunch of rational human beings (rational, as human beings go) get together and agree to live by some rather rigid rules. They do so with the best available evidence in front of them, after thinking things through as well as could possibly be expected. They tell each other that you shouldn't push the fat man in front of the trolley, and act accordingly.

What possible sense can it make to say that nevertheless, really and truly, the morally right thing to do is push the fat man? What would "morally right" mean in that sentence, when we have already stipulated that pro-social codes of conduct and recognized virtues, rationally agreed to in open honest well-informed discussion, recommend something else? There is nothing else for morality to "really and truly" be about.

Replies from: Nick_Tarleton, lessdazed
comment by Nick_Tarleton · 2011-08-22T07:01:09.567Z · LW(p) · GW(p)

What possible sense can it make to say that nevertheless, really and truly, the morally right thing to do is push the fat man?

There is a (maybe not totally coherent, but mostly coherent and natural) way of construing the situation in a vacuum such that pushing the fat man is the right decision.

Alternately, the correct method of analysis that, when carried through to nth order, outputs morality, when carried through to zeroth order recommends pushing the fat man.

comment by lessdazed · 2011-08-21T00:41:05.303Z · LW(p) · GW(p)

What possible sense can it make to say that nevertheless, really and truly, the morally right thing to do is push the fat man?

In the same sense that I can say it is morally-human wrong, or morally-dog+ wrong for a dog to eat a homeless guy who smells like bacon.

It isn't morally-dog wrong - because it's just a dog. And I agree it does make little sense to talk about whether a dog's actions are morally-human wrong. But if the dog knew of a way to have a better morality according to morality-dog standards, and it was moral-dog to improve one's self in that way when there was no or little cost, then the failure to improve to morality-dog+ is a failure according to morality-dog, and so in an important sense are the dog's actions that violate morality-dog+, though they do not violate morality-dog directly.

Replies from: torekp
comment by torekp · 2011-08-21T15:22:38.613Z · LW(p) · GW(p)

This looks like a sketch toward an argument that utilitarianism could be right for humans+. Traditionally, utilitarianism was supposed to be about what is right for us to do.

Replies from: lessdazed
comment by lessdazed · 2011-08-21T20:45:35.631Z · LW(p) · GW(p)

I can't speak to what is traditional and I don't mind declaring all historical utilitarians wrong in all their debates with non-utilitarians, though I wouldn't mind saying the opposite, either.

Human morality demands a certain amount of thought, and many actions demand moral consideration or their being "good" is no more than fate, and their being bad is negligence.

Upon thinking about it, one realizes that those who think about it should (should-those-who-think-about-it) push the fat man. Those who don't think about it shouldn't (shouldn't-those-who-don't-think-about-it) push the fat man, but should (should-those-who-don't-think-about-it) think about it.

To ask about an unclarified "should" is like asking about an unclarified "sound".

It is important to bear in mind that blame is something humans spray paint onto the unalterable causality of the world, and not to think that either the paint is unalterable because causality is, or that causality is alterable because the paint is.

We can blame humans fully, partially, or not at all for the consequences when they are unthinking, they do what unthinking people should do, there are negative consequences, thinking people should have done a different thing, and those humans should have been thinking people but weren't.

Everything has been explained. There is nothing left in asking if a person really should have done what a thinking person should have done had he or she been thinking, when the person should have been thinking, and unthinking people were not obligated to do the thing.

Replies from: torekp, Eugine_Nier
comment by torekp · 2011-08-23T02:48:27.515Z · LW(p) · GW(p)

Upon thinking about it, one realizes that those who think about it should (should-those-who-think-about-it) push the fat man.

One of us hasn't thought enough about it, because I think it takes more than thinking about it. One would also have to know oneself to be largely immune to various biases, which make most humans more prone to rationalize false conclusions about the need to kill someone for the greater good, than to correctly grasp a true utilitarian Trolley Problem. One would have to be human+, if not human(+N). (I think one would also have to live in a human+ or human(+N) community, but never mind about that.)

Note that Greene and other cognitive scientists rarely if ever spell out an airtight case, where the actions save either one life or five lives and magically have no further consequences, and where the utilitarian calculus is therefore clear. Greene simply describes the case more or less as Luke does above, and then leaves the subjects to infer or not infer whatever consequences they might.

comment by Eugine_Nier · 2011-08-21T22:50:57.004Z · LW(p) · GW(p)

The point is that blame can itself have the effect of decreasing the frequency of the behavior that is receiving the blame. So the right question to ask is: would having been more exposed to the idea that one should be blamed for doing X have prevented the person from doing X?

Replies from: lessdazed
comment by lessdazed · 2011-08-21T23:08:11.761Z · LW(p) · GW(p)

So the right question to ask is: would having been more exposed to the idea that one should be blamed for doing X have prevented the person from doing X?

My point is that sometimes the answer to "would having been more exposed to the idea that one should be blamed for doing X have prevented the person from doing X?" is no, the answer to "would having been more exposed to the idea that one should be blamed for not trying to improve morally have prevented the person from not improving morally?" is yes, and the answer to "would having been more exposed to the idea that one should be blamed for doing X have prevented the person they would be had they tried to improve morally from doing X?" is yes.

Thank you for succinctly stating a good question to ask. The answer to that question may be "no" while the answers to two similar questions are both "yes". Yet by "morally right" many people seem to mean not just situations where the answer to the question you put is "yes", but those in which the answer to the first question is "no" but the answers to the two related questions are both "yes". Others mean only cases in which the answer to the first question is "yes", period.

I think I know what people mean by phrases such as "What possible sense can it make to say that nevertheless, really and truly, the morally right thing to do is push the fat man?" or "It is morally right to push the fat man", I think I know why other people are confused, and I do not feel confused by the question, but rather an impulse to unpack it and explain why I think it is confusing.

Person 1: Is it morally right to hit random people you encounter in the street?

Person 2: No. We blame everyone who does that, and consequently people don't do that, even when they want to.

Person 3: I agree.

P1: To kidnap and eat people?

P2: Same answer as to the first question.

P3: Likewise.

P1: For a tiger to kidnap and eat people?

P2: It's not "immoral" because neither the tiger nor anyone else is affected by blame. We guard against tigers, and defend ourselves, and even seek out and kill tigers that have acquired a taste for humans, but we do not castigate tigers.

P3: I agree.

P1: An alien spacecraft has begun abducting and experimenting on people. Who the aliens abduct seems random: sometimes they go to great lengths to reach an individual, but there is no pattern at all among abductees. Every abducted person has had part of their brain removed, and has an apparently irresistible desire to eat people. It seems all abductees must be monitored or restrained for the rest of their natural lives. Is it wrong for them to eat people?

P2: No, they are like the tigers.

P3: I agree.

P1: But previously you both said it was morally wrong to eat people!

P2: It depends on the effects of blame. There is no effect of blaming the abducted cannibals. It's not even like with people who have hidden brain based biological disorders, when failing to blame them weakens the social condemnation for everyone, and there is a weighing to do. The alien case is so one-sided and distinguishable that we can easily tell that the right thing to do is to not blame the abductees, but to blame conventional cannibals.

P3: I agree

P1: Actually, I left something out when describing the aliens. They have kidnapped millions of people, but never anyone wearing anything red, or who had red tattoos, or was in a red car, or was within a few feet of anything non-biological that was red. The aliens said as much when they arrived, beaming this information directly into everyone's skulls in their native language several times a day. Are people truly morally responsible for kidnapping and eating people?

P2: Obviously not if they have been abducted and altered. Blame has no effect at all on the behavior of abductees, so they are not "morally responsible". That's the true meaning of "morally responsible", just check the dictionary!

P3: I disagree. Blame may not affect abductees, but it does affect whether or not people get abducted, because the safety measure is so low-cost and easy to implement. Abductees are to blame for eating people, and are truly "morally responsible".

Person 4: If you want to define the term "morally responsible" so that it's simply the answer to one hypothetical question, I don't blame you and I'm willing to play that game. If you think the term naturally covers blaming the abductees for eating people, that's fine too. But Person 1: don't get confused and lose sight of the relationship between blame and people's actions, don't think there is only a small link or no link at all between blame and the number of cannibals just because they are not "truly morally responsible" as you define it. And Person 2: don't get confused by your having a single term such that you might think that if only we blame abductees in the right way, they will stop eating people whenever they can.

comment by Dreaded_Anomaly · 2011-08-20T04:05:06.668Z · LW(p) · GW(p)

This post sums up my own position much more eloquently than I have so far been able to phrase it mentally. Thanks.

comment by Vladimir_M · 2011-08-17T00:33:26.542Z · LW(p) · GW(p)

I think this whole "utilitarian vs. deontological" setup is a misleading false dichotomy. In reality, the way people make moral judgments -- and I'd also say, any moral system that is really usable in practice -- is best modeled neither by utilitarianism nor by deontology, but by virtue ethics.

All of the puzzles listed in this article are clarified once we realize that when people judge whether an act is moral, they ask primarily what sort of person would act that way, and consequently, whether they want to be (or be seen as) this sort of person and how people of this sort should be dealt with. Of course, this judgment is only partly (and sometimes not at all) in the form of conscious deliberation, but from an evolutionary and game-theoretical perspective, it's clear why the unconscious processes would have evolved to judge things from that viewpoint. (And also why their judgment is often covered in additional rationalizations at the conscious level.)

The "fat man" variant of the trolley problem is a good illustration. Try to imagine someone who actually acts that way in practice, i.e. who really goes ahead and kills in cold blood when convinced by utilitarian arithmetic that it's right to do so. Would you be comfortable working or socializing with this person, or even just being in their company? Of course, being scared and creeped out by such a person is perfectly rational: among the actually existing decision algorithms implemented by human brains, there are none (or at least very few) that would make the utilitarian decision in the fat man-trolley problem and otherwise produce reasonably predictable, cooperative, and non-threatening behavior.

It's similar with the less dramatic examples discussed by Haidt. In all of these, the negative judgment, even if not explicitly expressed that way, is ultimately about judging what kind of person would act like that. (And again, except perhaps for the ideologically polarized flag example, it is true that such behaviors signal that the person in question is likely to be otherwise weird, unpredictable, and threatening.)

I'd also add that when it comes to rationalizations, utilitarians should be the last ones to throw stones. In practice, utilitarianism has never been much more than a sophisticated framework for constructing rationalizations for ideological positions on questions where correct utilitarian answers are at worst just undefined, and at best wildly intractable to calculate. (As is the case for pretty much all questions of practical interest.)

Replies from: multifoliaterose, Bongo, shokwave, DanielLC
comment by multifoliaterose · 2011-08-17T18:38:52.841Z · LW(p) · GW(p)

I'd also add that when it comes to rationalizations, utilitarians should be the last ones to throw stones. In practice, utilitarianism has never been much more than a sophisticated framework for constructing rationalizations for ideological positions on questions where correct utilitarian answers are at worst just undefined, and at best wildly intractable to calculate. (As is the case for pretty much all questions of practical interest.)

The phenomenon of utilitarianism serving as a sophisticated framework for constructing rationalizations for ideological positions exists and is perhaps generic. But there's an analogous phenomenon of virtue ethics being used rhetorically (think about both sides of the abortion debate). I strongly disagree that utilitarianism is ethically useless in practice. Do you disagree that VillageReach's activity has higher utilitarian expected value per dollar than that of the Make A Wish Foundation?

Yes, there are plenty of situations where game theoretic dynamics and coordination problems make utilitarian style analysis useless, but your claim seems overly broad and sweeping.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-08-17T21:48:22.740Z · LW(p) · GW(p)

I agree that I have indulged in a bit of a rhetorical excess above. What I had in mind is primarily welfare economics -- as I indicated in another comment, I think it's quite evident that this particular kind of formalized utilitarianism is regularly used to construct arguments for various ideological positions that are seemingly rigorous but in fact clearly rationalizations.

I also agree that non-utilitarian theories of ethics are fertile grounds for rationalizations too. I merely wanted to emphasize that given all the utilitarian rationalizations being thrown around, the idea of utilitarian thinking being somehow generally less prone to rationalizations is a non-starter, under any reasonable definitions of these terms.

As for the issues of charity, I think they are also more complicated than they seem, but this is a quite complex topic in its own right, which unfortunately I don't have the time to address right now. I do agree that this area can be seen as a partial counterexample to my general thesis about uselessness of utilitarianism. (But less so than the strong proponents of utilitarian charity commonly claim.)

comment by Bongo · 2011-08-17T08:30:49.760Z · LW(p) · GW(p)

So I guess the takeaway is that if you care more about your status as a predictable, cooperative, and non-threatening person than about four innocent lives, don't push the fat man.

Replies from: Nick_Tarleton, Vladimir_M, Nick_Tarleton
comment by Nick_Tarleton · 2011-08-17T19:38:22.157Z · LW(p) · GW(p)

http://lesswrong.com/lw/v2/prices_or_bindings/

(Also, please try to avoid sentences like "if you care about X more than innocent lives" — that comes across to me as sarcastic moral condemnation and probably tends to emotionally trigger people.)

comment by Vladimir_M · 2011-08-17T18:21:14.202Z · LW(p) · GW(p)

It's not just about what status you have, but what you actually are. You can view it as analogous to the Newcomb problem, where the predictor/Omega is able to model you accurately enough to predict if you're going to take one or two boxes, and there's no way to fool him into believing you'll take one and then take both. Similarly, your behavior in one situation makes it possible to predict your behavior in other situations, at least with high statistical accuracy, and humans actually have some Omega-like abilities in this regard. If you kill the fat man, this predicts with high probability that you will be non-cooperative and threatening in other situations. This is maybe not necessarily true in the space of all possible minds, but it is true in the space of human minds -- and it's this constraint that gives humans these limited Omega-like abilities for predicting each other's behavior.

(Of course, in real life this is further complicated by all sorts of higher-order strategies that humans employ to outsmart each other, both consciously and unconsciously. But when it comes to the fundamental issues like the conditions under which deadly violence is expected, things are usually simple and clear.)

And while these constraints may seem like evolutionary baggage that we'd best get rid of somehow, it must be recognized that they are essential for human cooperation. When dealing with a typical person, you can be confident that they'll be cooperative and non-threatening only because you know that their mind is somewhere within the human mind-space, which means that as long as there are no red flags, cooperative and non-threatening behavior according to the usual folk-ethics is highly probable. All human social organization rests on this ability, and if humans are to self-modify into something very different, like utility-maximizers of some sort, this is a fundamental problem that must be addressed first.

Replies from: nerzhin, lessdazed
comment by nerzhin · 2011-08-17T19:51:15.004Z · LW(p) · GW(p)

Another way of saying this (I think - Vladimir_M can correct me):

You only have two choices. You can be the kind of person who kills the fat man in order to save four other lives and kills the fat man in order to get a million dollars for yourself. Or you can be the kind of person who refuses to kill the fat man in both situations. Because of human hardware, those are your only choices.

Replies from: Vladimir_M, lessdazed, Bongo
comment by Vladimir_M · 2011-08-17T20:18:12.115Z · LW(p) · GW(p)

I don't mean to imply that the kind of person who would kill the fat man would also kill for profit. The only observation that's necessary for my argument is that killing the fat man -- by which I mean actually doing so, not merely saying you'd do so -- indicates that the decision algorithms in your brain are sufficiently remote from the human standard that you can no longer be trusted to behave in normal, cooperative, and non-dangerous ways. (Which is then correctly perceived by others when they consider you scary.)

Now, to be more precise, there are actually two different issues there. The first is whether pushing the fat man is compatible with otherwise cooperative and benevolent behavior within the human mind-space. (I'd say even if it is, the latter is highly improbable given the former.) The second one is whether minds that implement some such utilitarian (or otherwise non-human) ethic could cooperate with each other the way humans are able to thanks to the mutual predictability of our constrained minds. That's an extremely deep and complicated problem of game and decision theory, which is absolutely crucial for the future problems of artificial minds and human self-modification, but has little bearing on the contemporary problems of ideology, ethics, etc.

Replies from: atucker, lessdazed
comment by atucker · 2011-08-23T13:35:34.043Z · LW(p) · GW(p)

It seems like you can make similar arguments for virtue ethics and acausal trade.

If another agent is able to simulate you well, then it helps them to coordinate with you by knowing what you will do without communicating. When you're not able to have a good prediction of what other people will do, it takes waaay more computation to figure out how to get what you want, and if its compatible with them getting what they want.

By making yourself easily simulated, you open yourself up to ambient control, and by not being easily simulated you're difficult to trust. Lawful Stupid seems to happen when you have too many rules enforced too inflexibly, and often (in literature) other characters can take advantage of that really easily.

comment by lessdazed · 2011-08-18T22:58:27.711Z · LW(p) · GW(p)

The second one is whether minds that implement some such utilitarian (or otherwise non-human) ethic could cooperate with each other the way humans are able to thanks to the mutual predictability of our constrained minds.

But we normally seem to see "one death as a tragedy, a million as a statistic" due to scope insensitivity, availability bias etc.

Why not trust that people only directly dealing with numbers are normal when they implement cold-blooded utilitarianism? Why not have many important decisions made abstractly by such people? Is wanting to make decisions this way, remote from the consequences and up a few meta-levels, a barbaric thing to advocate?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-08-19T05:12:13.922Z · LW(p) · GW(p)

Why not trust that people only directly dealing with numbers are normal when they implement cold-blooded utilitarianism? Why not have many important decisions made abstractly by such people? Is wanting to make decisions this way, remote from the consequences and up a few meta-levels, a barbaric thing to advocate?

During the 20th century, some societies attempted to implement more or less that policy. The results certainly justify the adjective barbaric.

Replies from: lessdazed
comment by lessdazed · 2011-08-19T11:17:01.135Z · LW(p) · GW(p)

But most of the people remained relatively normal throughout. So virtue ethics needs a huge patch to approximate consequentialism.

You are providing a consequentialist argument for a base of virtue ethics plus making sure no one makes abstract decisions, but I don't see how preventing people from making abstract decisions emerges naturally from virtue ethics at all.

I agree with your comment in one sense and was trying to imply it, as the bad results are not prevented by virtue ethics alone. On the other hand, you have provided a consequentialist argument that I think valid and was hinting towards.

comment by lessdazed · 2011-08-18T22:52:39.009Z · LW(p) · GW(p)

Spreading this meme, even by a believing virtue ethicist, would seem to reduce the lifespan of fat men with bounties on their heads much faster than it would spare the crowds tied to the train tracks.

U: "Ooo look, a way to rationalize killing for profit!"

VE: "No no no, the message is that you shouldn't kill the fat man in either ca-"

U: "Shush you!"

Of course, one may want to simply be the sort who tells the truth, consequences to fat men be damned.

comment by Bongo · 2011-08-18T21:17:57.926Z · LW(p) · GW(p)

You only have two choices. You can be the kind of person who kills the fat man in order to save four other lives and kills the fat man in order to get a million dollars for yourself. Or you can be the kind of person who refuses to kill the fat man in both situations. Because of human hardware, those are your only choices.

This seems obviously false.

comment by lessdazed · 2011-08-18T06:54:37.817Z · LW(p) · GW(p)

It's not just about what status you have, but what you actually are.

Is "what you actually are" equivalent to status of yourself, to yourself?

Replies from: TheOtherDave
comment by TheOtherDave · 2011-12-08T18:54:11.658Z · LW(p) · GW(p)

No, I don't think so. "What I actually am", if I'm understanding Vladimir correctly, refers to the actual actions I take under various situations.

For example, if I believe I'm the sort of person who would throw the fat man under the train, but in fact I would not throw the fat man under the train, then I've successfully signaled to myself my status as a fat-man-under-train-thrower (I wonder if that's an allowed construction in German), but I am not actually a fat-man-under-train-thrower.

comment by Nick_Tarleton · 2011-08-17T19:36:02.539Z · LW(p) · GW(p)

http://lesswrong.com/lw/v2/prices_or_bindings/

(Also, your comment reads to me — deliberately or not — as sarcastic moral opprobrium directed at Vladimir's position. Please try to avoid that.)

comment by shokwave · 2011-08-17T06:57:46.688Z · LW(p) · GW(p)

I am torn on virtue ethics.

On one level it's almost akin to what a Bayesian calculation (taking "weird but harmless behaviour" as positive evidence of "weird and harmful") would feel like from the inside, and in that respect I can see the value in virtue ethics (even though it strikes me as a mind projection issue of creating a person's ethical 'character' when all you need is the likelihood of them performing this act or that).
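To make that concrete, here is a toy version of the update (all numbers invented for illustration): suppose 1% of people are "weird and harmful", that such people produce a given weird-but-harmless act with probability 0.5, and that everyone else produces it with probability 0.05. Bayes' theorem then gives

$$P(\text{harmful} \mid \text{weird act}) = \frac{0.5 \times 0.01}{0.5 \times 0.01 + 0.05 \times 0.99} \approx 0.09,$$

so a single harmless-but-weird act raises the estimated probability of "weird and harmful" from 1% to roughly 9%. Virtue ethics, on this reading, is what that update feels like from the inside.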

But on another level, I can see it as a description of a sort of hard-coded irrationality that we have evolution to thank for. All things being equal, we prefer to associate with people who will never murder us, rather than people who will only murder us when it would be good to do so - because we personally calculate good with a term for our existence. People with an irrational, compelling commitment are more trustworthy than people compelled by rational or utilitarian concerns (Schelling's Strategy of Conflict), because we are aware that there exist situations where the best outcome overall is not the best outcome personally.
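As a minimal expected-value sketch of that Schelling point (symbols invented here): let $v > 0$ be the value to you of staying alive, and $q$ the probability of ending up in a situation where killing you is best overall. Then

$$\mathbb{E}[\text{your payoff} \mid \text{conditional killer}] = (1-q)\,v \;<\; v = \mathbb{E}[\text{your payoff} \mid \text{committed non-killer}],$$

so for any $q > 0$ you prefer to associate with the person bound by the irrational-looking commitment, even when total welfare would sometimes be higher without it.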

So I am torn between lumping virtue ethics in with deontological ethics as "descriptions of human moral behaviour" and repairing it into a usable set of prescriptions for human moral behaviour.

Replies from: Nick_Tarleton, Eugine_Nier, None
comment by Nick_Tarleton · 2011-08-17T20:01:14.233Z · LW(p) · GW(p)

(even though it strikes me as a mind projection issue of creating a person's ethical 'character' when all you need is the likelihood of them performing this act or that).

Character just is a compressed representation of patterns of likely behavior and the algorithms generating them.

All things being equal, we prefer to associate with people who will never murder us, rather than people who will only murder us when it would be good to do so - because we personally calculate good with a term for our existence. People with an irrational, compelling commitment are more trustworthy than people compelled by rational or utilitarian concerns (Schelling's Strategy of Conflict), because we are aware that there exist situations where the best outcome overall is not the best outcome personally.

This connotes that wanting others to self-bind comes from unvirtuous selfishness, which seems like the wrong connotation to apply to a phenomenon that enables very general and large Pareto improvements (yay!).

In particular (not maximally relevant in this conversation, but particularly important for LW), among fallible (including selfishly biased in their beliefs) agents that wish to pursue common non-indexical values, self-binding to cooperate in the epistemic prisoner's dilemma enables greater group success than a war of all against all who disagree, or mere refusal to cooperate given strategic disagreement.

As to "irrational" (and, come to think of it, also cooperation), see Bayesians vs. Barbarians.

So I am torn between lumping virtue ethics in with deontological ethics as "descriptions of human moral behaviour" and repairing it into a usable set of prescriptions for human moral behaviour.

Why not do both? Treat naive virtue ethics as a description of human moralizing verbal behavior, and treat the virtue-ethical things people do as human game-theoretic behavior, and, because behaviors tend to have interesting and not completely insane causes, look for any good reasons for these behaviors that you aren't already aware of and craft a set of prescriptions from them.

comment by Eugine_Nier · 2011-08-17T07:26:33.220Z · LW(p) · GW(p)

(even though it strikes me as a mind projection issue of creating a person's ethical 'character' when all you need is the likelihood of them performing this act or that).

It's not a fallacy if the thing you're projecting onto is an actual human with an actual human mind. Another way to see this is as using the priors on how humans tend to behave that evolution has provided you.

But on another level, I can see it as a description of a sort of hard-coded irrationality that we have evolution to thank for. All things being equal, we prefer to associate with people who will never murder us, rather than people who will only murder us when it would be good to do so - because we personally calculate good with a term for our existence. People with an irrational, compelling commitment are more trustworthy than people compelled by rational or utilitarian concerns (Schelling's Strategy of Conflict), because we are aware that there exist situations where the best outcome overall is not the best outcome personally.

The definition of "rational" you're using in that paragraph has the problem that it will cause you to regret your rationality. If having an "irrational" commitment helps you be more trusted and thus achieve your goals, it's not irrational. See the articles about decision theory for more details on this.

Replies from: handoflixue
comment by handoflixue · 2011-08-17T18:41:52.689Z · LW(p) · GW(p)

It's not a fallacy if the thing you're projecting onto is an actual human with an actual human mind. Another way to see this is as using the priors on how humans tend to behave that evolution has provided you.

That only works if you're (a) not running into cultural differences and (b) not dealing with someone who has major neurological differences. Using your default priors on "how humans work" to handle an autistic or a schizophrenic is probably going to produce sub-par results. Same if you assume that "homosexuality is wrong" or "steak is delicious" is culturally universal.

It's unlikely that you'll run into someone who prioritizes prime-sized stacks of pebbles, but it's entirely likely you'll run into people who think eating meat is wrong, or that gay marriage ought to be legalized :)

Replies from: Eugine_Nier, Kaj_Sotala
comment by Eugine_Nier · 2011-08-18T02:59:48.073Z · LW(p) · GW(p)

Using your default priors on "how humans work" to handle an autistic or a schizophrenic is probably going to produce sub-par results.

They're going to produce the result that this human's brain is wired strangely and thus he's liable to exhibit other strange and likely negative behaviors. Which is more-or-less accurate.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-08-18T06:08:41.647Z · LW(p) · GW(p)

Why on Earth is this comment getting downvoted?

Replies from: handoflixue, lessdazed
comment by handoflixue · 2011-08-18T18:52:59.152Z · LW(p) · GW(p)

Because his comment is evidence for the hypothesis that he has a divergent neurology from mine, and is therefore liable to exhibit negative behaviors :P

comment by lessdazed · 2011-08-18T07:09:38.767Z · LW(p) · GW(p)

My guess is it's in response to the phrase "negative behaviors" describing a non-neurotypical person's behavior.

comment by Kaj_Sotala · 2011-08-18T14:16:39.177Z · LW(p) · GW(p)

(a) not running into cultural differences and

Indeed, and it probably needs to be emphasized that nations are not monocultures. Americans reading mainly utilitarian blogs and Americans reading mainly deontologist blogs live in different cultures, for instance. (To say nothing about Americans reading atheist blogs and Americans reading fundamentalist blogs, let alone Americans reading any kinds of blogs and Americans who don't read, period.)

comment by [deleted] · 2011-08-18T01:50:48.144Z · LW(p) · GW(p)

(even though it strikes me as a mind projection issue of creating a person's ethical 'character' when all you need is the likelihood of them performing this act or that).

This may be part of the reason many virtue ethical theories are prescriptions on what one should do oneself, and usually disapprove of trying to apply them to other humans. On this level, it's poor for predicting, but wonderful for meaningful signalling of cooperative intent. I tend to consider virtue ethics as my low-level compressed version of consequentialist morality; it gives me the ability to develop actions for snap situations that I'd want to take for consequentialist reasons.

comment by DanielLC · 2011-09-03T18:29:07.185Z · LW(p) · GW(p)

As is the case for pretty much all questions of practical interest.

Is it a good idea to spend money on yourself (rather than donating it)?

I don't see how you could possibly rationalize that, and the inconvenience of it would seem to outweigh any benefit it gives to rationalizing other things.

comment by utilitymonster · 2011-08-16T20:05:05.716Z · LW(p) · GW(p)

A recent study by folks at the Oxford Centre for Neuroethics suggests that Greene et al.'s results are better explained by appeal to differences in how intuitive/counterintuitive a moral judgment is, rather than differences in how utilitarian/deontological it is. I had a look at the study, and it seems reasonably legit, but I don't have any expertise in neuroscience. As I understand it, their findings suggest that the "more cognitive" part of the brain gets recruited more when making a counterintuitive moral judgment, whether utilitarian or deontological.

Also, it is worth noting that attempts to replicate the differences in response times have failed (this was the result with the Oxford Centre for Neuroethics study as well).

Here is an abstract:

Neuroimaging studies on moral decision-making have thus far largely focused on differences between moral judgments with opposing utilitarian (well-being maximizing) and deontological (duty-based) content. However, these studies have investigated moral dilemmas involving extreme situations, and did not control for two distinct dimensions of moral judgment: whether or not it is intuitive (immediately compelling to most people) and whether it is utilitarian or deontological in content. By contrasting dilemmas where utilitarian judgments are counterintuitive with dilemmas in which they are intuitive, we were able to use functional magnetic resonance imaging to identify the neural correlates of intuitive and counterintuitive judgments across a range of moral situations. Irrespective of content (utilitarian/deontological), counterintuitive moral judgments were associated with greater difficulty and with activation in the rostral anterior cingulate cortex, suggesting that such judgments may involve emotional conflict; intuitive judgments were linked to activation in the visual and premotor cortex. In addition, we obtained evidence that neural differences in moral judgment in such dilemmas are largely due to whether they are intuitive and not, as previously assumed, to differences between utilitarian and deontological judgments. Our findings therefore do not support theories that have generally associated utilitarian and deontological judgments with distinct neural systems.

An important quote from the study:

To further investigate whether neural differences were due to intuitiveness rather than content of the judgment [utilitarian vs. deontological], we performed the additional analyses.... When we controlled for content, these analyses showed considerable overlap for intuitiveness. In contrast, when we controlled for intuitiveness, only little--if any--overlap was found for content. Our results thus speak against the influential interpretation of previous neuroimaging studies as supporting a general association between deontological judgment and automatic processing, and between utilitarian judgment and controlled processing. (p. 7, my version)
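To make the contrast logic in that quote concrete, here is a toy sketch with fabricated activation values (purely illustrative, not the study's data) of how a 2x2 design can separate intuitiveness from content:

```python
# Toy 2x2 illustration of the analysis logic (all numbers fabricated).
# Cells: (content, intuitiveness) -> mean activation in some brain region.
activation = {
    ("utilitarian", "intuitive"): 1.0,
    ("utilitarian", "counterintuitive"): 2.1,
    ("deontological", "intuitive"): 1.1,
    ("deontological", "counterintuitive"): 2.0,
}

def mean(values):
    values = list(values)
    return sum(values) / len(values)

# Controlling for content: contrast intuitive vs. counterintuitive judgments.
intuitive = mean(v for (c, i), v in activation.items() if i == "intuitive")
counter = mean(v for (c, i), v in activation.items() if i == "counterintuitive")
print("intuitiveness effect:", counter - intuitive)  # large (1.0)

# Controlling for intuitiveness: contrast utilitarian vs. deontological content.
util = mean(v for (c, i), v in activation.items() if c == "utilitarian")
deon = mean(v for (c, i), v in activation.items() if c == "deontological")
print("content effect:", util - deon)  # near zero (0.0)
```

With numbers like these, nearly all the variation tracks intuitiveness rather than content, which is the pattern the quoted passage reports.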

Where to find the study (subscription only):

Kahane, G., K. Wiech, N. Shackel, M. Farias, J. Savulescu and I. Tracey, ‘The Neural Basis of Intuitive and Counterintuitive Moral Judgement’, forthcoming in Social Cognitive and Affective Neuroscience.

Link on Guy Kahane's website: http://www.philosophy.ox.ac.uk/members/research_staff/guy_kahane

Replies from: lukeprog, lukeprog, lukeprog, lukeprog
comment by lukeprog · 2011-08-16T20:49:46.138Z · LW(p) · GW(p)

PDF of paper. Well done! This is a much better counter-argument to Greene's position than the ones presented in 2007. I shall update the original post accordingly.

comment by lukeprog · 2011-08-16T22:25:09.593Z · LW(p) · GW(p)

Is this page broken for anyone else? When trying to load it, I just get a "Less Wrong broke!" message. I can still see the preview of it here, and I can even hit the 'edit' button from there and successfully update the post, and I can post new comments by replying to comments, but I can't actually load the page that contains this post! Is that happening for anyone else? It's been like this for me for more than an hour now.

Replies from: GreenRoot, Vladimir_Nesov
comment by GreenRoot · 2011-08-16T22:43:52.139Z · LW(p) · GW(p)

It's broken for me too, in exactly the way you describe. One of the variants on the error page invites me to buy a reddit t-shirt.

comment by Vladimir_Nesov · 2011-08-16T22:32:57.587Z · LW(p) · GW(p)

Broken for me too, including when logged out. So it's probably broken for everyone.

(I hope trike has automatic notification of these page-generation-crashed events, so that there is no point in contacting them manually. A message to this effect (or to the contrary) on the "page crashed" page would be nice.)

Replies from: lukeprog
comment by lukeprog · 2011-08-16T23:29:40.587Z · LW(p) · GW(p)

Huh. It's back!

Replies from: whpearson
comment by whpearson · 2011-08-16T23:47:18.661Z · LW(p) · GW(p)

But it has been purged of the letter c for some reason.

Replies from: lukeprog
comment by lukeprog · 2011-08-16T23:48:55.474Z · LW(p) · GW(p)

And of blockquotes. Anybody getting this phenomenon on other pages, too?

comment by lukeprog · 2013-12-15T19:21:20.983Z · LW(p) · GW(p)

Update: Greene's reply to Kahane et al. is here.

Replies from: joaolkf
comment by joaolkf · 2013-12-21T05:41:02.438Z · LW(p) · GW(p)

"Kahane et al. (2012) claim to have constructed cases that reverse this pattern, UI dilemmas in which the utilitarian response is more intuitive and the deontological response is more counter-intuitive. We have raised doubts about the behavioral and fMRI evidence presented in support of this claim. More importantly, we have provided positive evidence against it." Hehe, things seems to be heating up in the field of moral psychology.

comment by lukeprog · 2011-08-18T01:36:45.635Z · LW(p) · GW(p)

I've now added a paragraph at the end after discussing the Kahane paper with Greene.

Replies from: utilitymonster
comment by utilitymonster · 2011-08-18T02:47:26.861Z · LW(p) · GW(p)

Cool. Glad this turned out to be helpful.

comment by JackEmpty · 2011-08-17T14:24:10.009Z · LW(p) · GW(p)

A small nitpick, and without having read the other comments, so please excuse me if this has been mentioned before.

The 5 actions listed under the heading "Emotion and Deontological Judgments" squick me. But they don't disgust me.

From Urban Dictionary:

The concept of the "squick" differs from the concept of "disgust" in that "squick" refers purely to the physical sensation of repulsion, and does not imply a moral component.

Stating that something is "disgusting" implies a judgement that it is bad or wrong. Stating that something "squicks you" is merely an observation of your reaction to it, but does not imply a judgement that such a thing is universally wrong.

It may be useful to add this to our collective vocabulary. Some might argue it's adding unnecessary labels to too-similar a concept, but I think the distinction is useful.

Please let me know if something like this has been explored already.

Replies from: shokwave, Desrtopa, lukeprog, None
comment by shokwave · 2011-08-18T13:19:04.471Z · LW(p) · GW(p)

Wow. I have the practice (common to sci-fi readers, I have heard) of taking unfamiliar words in my stride, attempting to figure them out in context, and taking it on faith that if I can't figure it out now, more context will soon be given. So that is how I approach new words on the internet (like 'squick'). This is only important because my internal definition for squick had developed into something very much like saying "eww" or the word disgust. It didn't have that crucial 'no moral component' tag for me. Interesting!

Replies from: JackEmpty, Kaj_Sotala
comment by JackEmpty · 2011-08-18T13:39:50.857Z · LW(p) · GW(p)

Likewise, but I think I have a bit of an obsession with learning obscure jargon... to the point of reading through the provided dictionaries in SF&F books a half dozen times, then referring to it when the words come up. And reading through online lists of terminology for fictional universes and technical activities.

But yes, searching for "squick" on here, I have seen it used as "eww", but I'm not quite sure from the brief glance if it had that particular tag, at least not explicitly.

comment by Kaj_Sotala · 2011-08-18T14:18:56.937Z · LW(p) · GW(p)

because my internal definition for squick had developed into something very much like saying "eww" or the word disgust.

Same here.

comment by Desrtopa · 2011-08-22T03:17:25.397Z · LW(p) · GW(p)

I think this is more of a prescriptive than descriptive definition of squick. In my experience, people who use the term do not necessarily mean that they make no moral judgment, and in fact, many people, including those who use the term, do not seem to acknowledge a difference between "this gives me a physical sensation of repulsion" and "this is morally wrong."

comment by lukeprog · 2011-08-17T18:08:12.794Z · LW(p) · GW(p)

Cool word!

comment by [deleted] · 2011-08-18T14:21:10.819Z · LW(p) · GW(p)

That Urban Dictionary definition entails that "disgust" does imply a moral component or a judgement that something is universally wrong. However, in my experience, it does not. I can easily imagine a little kid, or a grown adult, declaring a given food or smell or sight "disgusting" without having any objection to its existence. (I can, of course, also imagine a news article in which people interviewed describe someone's immoral behavior as disgusting.) The OED Online describes the word mainly as a visceral reaction and only in passing says it may be brought about by a "disagreeable action".

Instead of creating a new word for what "disgust" currently means and making "disgust" mean something else, perhaps we should leave "disgust" as it is and come up with a word for "moral revulsion". Something like "consternation" or "appallment".

Replies from: JackEmpty, soreff
comment by JackEmpty · 2011-08-18T15:14:00.573Z · LW(p) · GW(p)

Yeah, it does seem to be phrased so as to imply that.

I can easily imagine a little kid, or a grown adult, declaring a given food or smell or sight "disgusting" without having any objection to its existence. (I can, of course, also imagine a news article in which people interviewed describe someone's immoral behavior as disgusting.)

So the denotative meaning only very mildly indicates a potential for moral revulsion. But used in certain contexts, it does have heavy (heavier) connotations of moral revulsion. I think it's useful to have words for both the physical reaction side and for the moral reaction side, but I disagree with the UD definition in that "disgust" can be more of a generic umbrella term.

So... in other words, use "disgusted" when it's clear, or you mean both. Use "squicked" when it's unclear, and you want to only imply a physical reaction. And use "appalled" when you want to heavily imply moral reaction.

This is all just speculation and suggestion, but I do still hold that the word is useful.

Replies from: None
comment by [deleted] · 2011-08-18T16:39:24.905Z · LW(p) · GW(p)

So... in other words, use "disgusted" when it's clear, or you mean both. Use "squicked" when it's unclear, and you want to only imply a physical reaction. And use "appalled" when you want to heavily imply moral reaction.

Yes, I think I agree completely.

comment by soreff · 2011-08-18T15:12:26.078Z · LW(p) · GW(p)

I'd guess that there is at least one more variation: Sufficiently bad programming practices (e.g. hard coding "magic numbers" all over the source code) tend to inspire a feeling with a component of disgust in whoever has to maintain the code... Does this generalize? E.g. does discovering that part of the structure of a car is dependent on duct tape lead to similar reactions?
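For readers unfamiliar with the term, here is a tiny before-and-after sketch; the constants and names are invented purely for illustration:

```python
# "Magic number" style: the reader must guess what 0.08 and 52 mean.
def weekly_tax_magic(amount):
    return amount * 0.08 / 52

# The same computation with the constants named -- far less maintainer disgust.
SALES_TAX_RATE = 0.08   # assumed rate, purely for illustration
WEEKS_PER_YEAR = 52

def weekly_tax(amount):
    return amount * SALES_TAX_RATE / WEEKS_PER_YEAR
```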

comment by Spurlock · 2011-08-16T18:45:18.976Z · LW(p) · GW(p)

As expected, subjects who received the wordings they had been primed to feel disgust toward judged the couple's actions as more morally condemnable than other subjects did.

I would just like to point out that this seems like fantastic training material for Rationalist Boot Camp and related projects.

Is your studied, practiced, meticulously crafted rationality enough to overcome these really dumb post-hypnotic suggestions? Surely if you can't convince yourself that your moral disgust is irrational in clear-cut situations like these, your chances of tackling your own biases in more complex and emotionally charged issues are pretty slim.

Obviously there's some disclaimer to be attached when talking about hypnosis, but still it seems like a hell of a starting point.

comment by Wei Dai (Wei_Dai) · 2011-08-16T21:46:47.102Z · LW(p) · GW(p)

Indeed, it may turn out to be the case that we can dissolve the debate between deontological intuitions and utilitarian intuitions if we can map the cognitive algorithms that produce them.

Suppose it's an empirical fact that when people engage in consequentialist-type cognition, they typically use a model of the world that is ontologically crazy (for example, one with irreducible mental entities). Would that be an argument against consequentialism in general? In one sense it is, since it means that we can't straightforwardly translate naive consequentialism into a correct moral philosophy, so the consequentialist approach to moral philosophy is at least more difficult than it might first appear. But surely this empirical fact would not "dissolve" the debate with the conclusion that no form of consequentialism can be right, and therefore the whole approach should be abandoned.

Similarly, I suggest that empirical facts about how people typically form deontological moral judgements can't dissolve the debate between consequentialism vs deontology. A deontologist could still claim, for example, that while the typical deontological rules people naively come up with to explain their intuitive emotional judgements are not very good, intuitive emotional judgements are still the only source of "morality" that we have, and some set of more sophisticated deontological rules fits those intuitions better than any other moral philosophy, including any form of consequentialism.

ETA: Perhaps it would make more sense to say that the specific deontological intuitions that people tend to naively hold (such as "lying is always wrong") can be dissolved with more scientific self-knowledge. On the other hand, it seems plausible that many of our consequentialist-type "values" can also be dissolved the same way. (See my previous comment.)

Replies from: lukeprog
comment by lukeprog · 2011-08-16T23:46:26.483Z · LW(p) · GW(p)

Like Eliezer, I see solving the question (or proving that it's a bad question) as a separate project from 'dissolving the question' by uncovering the cognitive algorithms that generate the question in the first place.

surely this empirical fact would not "dissolve" the debate with the conclusion that no form of consequentialism can be right, and therefore the whole approach should be abandoned.

No, but one shouldn't expect to arrive at correct results by engaging in human consequentialist reasoning. Perhaps we'd need to use therapy or drugs or neuroscientific tools to fix our brains so they can do consequentialist thinking without craziness, or else we'd have computers do our consequentialist thinking for us.

A deontologist could still claim, for example, that while the typical deontological rules people naively come up with to explain their intuitive emotional judgements are not very good, intuitive emotional judgements are still the only source of "morality" that we have, and some set of more sophisticated deontological rules fits those intuitions better than any other moral philosophy, including any form of consequentialism.

Yes. If deontology is to be fully killed off, one must pair a 'refutation of a mistake' with a 'dissolution to algorithm' that explains how we could have made the mistake in the first place. The present post only suggests the second part.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-08-17T00:30:16.674Z · LW(p) · GW(p)

Like Eliezer, I see solving the question (or proving that it's a bad question) as a separate project from 'dissolving the question' by uncovering the cognitive algorithms that generate the question in the first place.

I thought "dissolving the question" meant:

At the end, I hope, there was no question left - not even the feeling of a question.

Semantics aside, would you say that we can, now or in the foreseeable future, kill off deontology so completely that there is "no question left" (even if that's not the goal of this post)?

Replies from: lukeprog
comment by lukeprog · 2011-08-17T00:33:31.248Z · LW(p) · GW(p)

Hmmm. I'm not sure. It may depend on how our cognitive algorithms work, and I haven't decoded them yet. Do you have an intuition on the matter?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-08-17T07:27:28.656Z · LW(p) · GW(p)

Hmmm. I'm not sure. It may depend on how our cognitive algorithms work, and I haven't decoded them yet.

Do you expect they can ever be "decoded"? After all, we can only form high-level understanding of what's going on, while what's really going on includes all the unsummarizable details that no human can comprehend. There are no simple laws underlying all of human moral cognition, the way it actually works.

comment by Alicorn · 2011-08-16T16:57:28.573Z · LW(p) · GW(p)

If you believe that killing a fetus is murder, then a woman seeking an abortion pays a doctor to commit murder. Why shouldn't she be convicted of that?

Suppose I believe that soldiers killing others in wartime is murder. Can you think of a reason why I wouldn't press criminal charges? I can. Because it's not illegal. Criminal charges aren't how we punish each other for moral infractions if they don't happen to also be against the law.

the deontological answer (don't kill the baby)

The situation as you presented it admits of nuances that do not make this "the deontological answer". I'm personally inclined to declare babies non-persons anyway, and your scenario even paints the baby as unsalvageable (if your group is found, it will die with the rest of you). This does not make me a non-deontologist. If you made it a screaming eight-year-old too hysterical to shut up; and specified that he or she alone would be safe from the enemy should we be found; and for some reason blocked the option of just knocking the kid out or gagging him or her; then I would be in more of a pickle - but please do note that there is some subtlety here. Deontologists do not just prohibit things for the hell of it.

I have superpowers of applied luminosity and will refrain from taking this article personally, but I would have expected more charity from you, and I'm disappointed.

applause lines

Think you mean "lights".

Replies from: lukeprog
comment by lukeprog · 2011-08-16T17:12:08.991Z · LW(p) · GW(p)

Think you mean "lights".

No, Greene uses the phrase "applause lines."

The situation as you presented it admits of nuances that do not make this "the deontological answer"... I would have expected more charity from you, and I'm disappointed.

The point you raise about 'the deontological answer' is discussed by Greene in his article. I had to do a lot of cutting to keep this article as short as it is, and it's still pretty long. I can't pre-respond to every possible objection. Perhaps you could raise the issue and allow me to respond instead of assuming I haven't considered the points you raise and am therefore worthy of your disappointment?

Greene and I are just trying to capture 'characteristic' deontological moral judgments, treating deontology and consequentialism as psychological kinds, so that we have an empirical construct to test. Here's what Greene writes:

Deontology is defined by its emphasis on moral rules, most often articulated in terms of rights and duties. Consequentialism, in contrast, is the view that the moral value of an action is in one way or another a function of its consequences alone. Consequentialists maintain that moral decision makers should always aim to produce the best overall consequences for all concerned, if not directly then indirectly. Both consequentialists and deontologists think that consequences are important, but consequentialists believe that consequences are the only things that ultimately matter, while deontologists believe that morality both requires and allows us to do things that do not produce the best possible consequences. For example, a deontologist might say that killing one person in order to save several others is wrong, even if doing so would maximize good consequences (S. Kagan, 1997).

This is a standard explanation of what deontology and consequentialism are and how they differ. In light of this explanation, it might seem that my thesis is false by definition. Deontology is rule-based morality, usually focused on rights and duties. A deontological judgment, then, is a judgment made out of respect for certain types of moral rules. From this it follows that a moral judgment that is made on the basis of an emotional response simply cannot be a deontological judgment, although it may appear to be one from the outside. Kant himself was adamant about this, at least with respect to his own brand of deontology. He notoriously claimed that an action performed merely out of sympathy and not out of an appreciation of one’s duty lacks moral worth (Kant, 1785/1959, chap. 1; Korsgaard, 1996a, chap. 2).

The assumption behind this objection—and as far as I know it has never been questioned previously—is that consequentialism and deontology are, first and foremost, moral philosophies. It is assumed that philosophers know exactly what deontology and consequentialism are because these terms and concepts were defined by philosophers. Despite this, I believe it is possible that philosophers do not necessarily know what consequentialism and deontology really are.

How could this be? The answer, I propose, is that the terms “deontology” and “consequentialism” refer to psychological natural kinds. I believe that consequentialist and deontological views of philosophy are not so much philosophical inventions as they are philosophical manifestations of two dissociable psychological patterns, two different ways of moral thinking, that have been part of the human repertoire for thousands of years. According to this view, the moral philosophies of Kant, Mill, and others are just the explicit tips of large, mostly implicit, psychological icebergs. If that is correct, then philosophers may not really know what they’re dealing with when they trade in consequentialist and deontological moral theories, and we may have to do some science to find out.

Because I am interested in exploring the possibility that deontology and consequentialism are psychological natural kinds, I will put aside their conventional philosophical definitions and focus instead on their relevant functional roles. As noted earlier, consequentialists and deontologists have some characteristic practical disagreements. For example, consequentialists typically say that killing one person in order to save several others may be the right thing to do, depending on the situation. Deontologists, in contrast, typically say that it’s wrong to kill one person for the benefit of others, that the “ends don’t justify the means.” Because consequentialists and deontologists have these sorts of practical disagreements, we can use these disagreements to define consequentialist and deontological judgments functionally. For the purposes of this discussion, we’ll say that consequentialist judgments are judgments in favor of characteristically consequentialist conclusions (e.g., “Better to save more lives”) and that deontological judgments are judgments in favor of characteristically deontological conclusions (e.g., “It’s wrong despite the benefits”).

Still, in this case it may be that I want to lengthen my post even more and put this clarification back into the original post. I'll add the word 'characteristically' for now.

Finally, your other point:

Suppose I believe that soldiers killing others in wartime is murder. Can you think of a reason why I wouldn't press criminal charges? I can. Because it's not illegal. Criminal charges aren't how we punish each other for moral infractions if they don't happen to also be against the law.

I think the point of Matthews' question is clear: Why not seek the murder charges in court so that abortions come to be considered murder? Why not seek to change the laws so that women who commit abortions will be convicted of murder, or at least of paying a doctor to commit murder?

Also, I'm curious: Do you disagree with the dual-process theory of moral judgment proposed by Greene? If so, why?

Replies from: Alicorn
comment by Alicorn · 2011-08-16T17:41:16.180Z · LW(p) · GW(p)

The point you raise about 'the deontological answer' is discussed by Greene in his article. I had to do a lot of cutting to keep this article as short as it is, and it's still pretty long. I can't pre-respond to every possible objection. Perhaps you could raise the issue and allow me to respond instead of assuming I haven't considered the points you raise and am therefore worthy of your disappointment?

I am aware that it's from a quote. It's from a quote you chose, inserted into your article, and moved on from without caveat in a way characteristic of authors who wish to borrow positions in others' words (as opposed to more critical uses of quotations). Yes, you identify the quotes as not belonging to you, but your article is structured in such a way as to claim them; I have just gone over it again and can't find any place where you disclaim more than to the extent that you admit you didn't write those bits.

Adding "characteristically" helps, but I have to wonder what your target audience is here. You could have written about what parts of the brain light up when people make deontological or consequentialistic judgments in a far more neutral style with less sniping if you're only here to inform us about an interesting subsection of neuroscience. You've certainly failed to present a compelling and charitable enough case that a (representative?) deontologist in the audience is swayed. The consequentialists will be predisposed to believe the nice things you have to say about them; perhaps we have an especially good quality consequentialist here and they won't be subject to that bias, but regardless you're not going to change their views. What is your point?

Am I mistaken in thinking that anti-abortion activists do seek to make abortion illegal, and just don't do it by charging women with crimes the scope of which does not legally apply? Usually? (I seem to remember a news story about some bill in the works that would make killing a pregnant woman a double homicide. That's a reasonable step towards making abortion illegal as a form of murder where it currently is not.) Is there actually a precedent of charging people with nearby crimes when morally offended? Am I legally permitted to charge people with, say, littering, if they get near me while smelly (it's like litter, in that it makes the environment unpleasant)? This would surprise me but if you have non-abortion examples of this happening I'll buy it that anti-abortionists are behaving oddly by eschewing the tactic.

Replies from: lessdazed, lukeprog
comment by lessdazed · 2011-08-16T18:03:39.750Z · LW(p) · GW(p)

I am aware that it's from a quote. It's from a quote you chose

I enthusiastically agree that there should generally be much higher responsibility for content placed upon quoters.

deontologist...consequentialists...What is your point?

The unpersuaded middle? Those who had never considered the question? Error theorists?

Am I mistaken in thinking that anti-abortion activists do seek to make abortion illegal, and just don't do it by charging women with crimes the scope of which does not legally apply? Usually?

IIRC I once saw a youtube video in which a journalist or filmmaker or whoever interviewed people picketing against a clinic of some kind. Many had never even considered what punishments there should be for doctors or women. One woman's gut response to the question was to propose making abortion a special illegal category of murder without imposing any legal penalties, others had different initial responses but a great many hadn't considered the question at all.

Less shocking was that those who had at least considered it had very superficial responses, not at a very deep level of thought by even their standards.

please do note that there is some subtlety here.

I do not think it matters how well one draws a boundary if one ultimately has to bite the bullet (?) and say that some things adjacent in idea space are categorically different from each other - which is a very important way for things to be different - while other things very distant from each other in idea space do not differ categorically at all.

Replies from: MixedNuts, lukeprog, SilasBarta
comment by MixedNuts · 2011-08-17T16:07:35.132Z · LW(p) · GW(p)

They weren't lawyers or sociologists or anything. Of course they're much better at figuring out what is wrong than how society should react to wrong things. They want fewer abortions to happen, and it's completely legitimate that they'd hand over the problem to whoever can optimise for that (the state is a possibility, but so are doctors and pregnant people). They're only working on setting it as a goal.

Replies from: lessdazed
comment by lessdazed · 2011-08-17T16:29:40.506Z · LW(p) · GW(p)

I would give a response similar to those of most of the people in the video if an important question had never occurred to me. If it had occurred to me and I was open to whatever answer was optimal, I would have "I don't know" available. Possibly the guy at 2:20 is in the latter category, but it's not clear.

Because of the reaction to evidence such as findings that sex education would reduce abortions, I'm disinclined to think that the anti-abortion movement's actions resemble a coordinated effort to reach a least-bad end, and am more inclined to think of it as a collection of local responses against anything they see as at all bad.

In other words, if the following is true:

They want fewer abortions to happen, and it's completely legitimate that they'd hand over the problem to whoever can optimise for that

...why do so many oppose sex education and safe sex?

comment by lukeprog · 2011-08-16T18:06:54.165Z · LW(p) · GW(p)

IIRC I once saw a youtube video in which a journalist or filmmaker or whoever interviewed people picketing against a clinic of some kind. Many had never even considered what punishments there should be for doctors or women. One woman's gut response to the question was to propose making abortion a special illegal category of murder without imposing any legal penalties, others had different initial responses but a great many hadn't considered the question at all.

Video.

Also added this to the original post; thanks for reminding me of it! Obviously highly relevant.

comment by SilasBarta · 2011-08-24T17:32:36.351Z · LW(p) · GW(p)

IIRC I once saw a youtube video in which a journalist or filmmaker or whoever interviewed people picketing against a clinic of some kind. Many had never even considered what punishments there should be for doctors or women. One woman's gut response to the question was to propose making abortion a special illegal category of murder without imposing any legal penalties, others had different initial responses but a great many hadn't considered the question at all.

Interesting -- I just watched the series Battlestar Galactica, which deals with the issue of abortion, and it does the same kind of sidestep. The president decides to ban it on the (questionable[1]) grounds of the need to repopulate the fleet, but in her speech announcing this she only says that the mother or medical practitioner would be subject to "criminal penalties" and nothing more specific.

Edit: I know, I know, "fictional evidence". But it's interesting in that it seems the writers must have had a hard time thinking up what penalties the president would find appropriate.

[1] I say "questionable" because can I think of about a thousand better policies to promote population growth than using "the stick" against women who don't want the child they're pregnant with.

comment by lukeprog · 2011-08-16T17:52:25.566Z · LW(p) · GW(p)

I am aware that it's from a quote. It's from a quote you chose, inserted into your article, and moved on from without caveat in a way characteristic of authors who wish to borrow positions in others' words (as opposed to more critical uses of quotations). Yes, you identify the quotes as not belonging to you, but your article is structured in such a way as to claim them; I have just gone over it again and can't find any place where you disclaim more than to the extent that you admit you didn't write those bits.

That's not the issue for me. I do, basically, claim the same argument that Greene makes. I was only trying to say that I can't add every qualification and clarification without the post ballooning to something like 40 pages. But I'm happy to respond to individual questions and objections outside the main body of the post, as I did above.

I have to wonder what your target audience is here. You could have written about [things] in a far more neutral style...

So now your objection is to my tone? That's only DH2 on the disagreement hierarchy. I'll take another look at my tone, but it's not much of a disagreement if we're disagreeing about tone.

Am I mistaken in thinking that anti-abortion activists do seek to make abortion illegal, and just don't do it by charging women with crimes the scope of which does not legally apply? Usually?

I already responded to this in my last comment. The point isn't that consistent pro-lifers would charge abortionists with murder even though the current laws don't consider abortion to be murder. The point is that consistent pro-lifers who think abortion is murder would seek to change the laws so that abortion would legally be considered murder and abortionists could legitimately be charged with committing murder (or with paying a doctor to commit murder).

Edit: I did notice some confusing language in my fourth paragraph, which I've updated thanks to your comments.

Replies from: jsalvatier, Alicorn
comment by jsalvatier · 2011-08-16T18:08:21.532Z · LW(p) · GW(p)

Luke, I think you often come across as defensive. I think it is difficult to avoid since you write a lot and thus put yourself out there for people to criticize and people do often comment in an aggressive fashion, but I think you should be aware of it anyway. I think avoiding seeming defensive would be useful to you because seeming defensive seems to make discussions more adversarial.

The phrase that gives me that impression here is

So now your objection is to my tone? That's only DH2 on the disagreement hierarchy. I'll take another look at my tone, but it's not much of a disagreement if we're disagreeing about tone.

I am a neutral observer of this conversation; I've only read the last two comments.

Replies from: lukeprog
comment by lukeprog · 2011-08-16T18:22:18.958Z · LW(p) · GW(p)

Thanks for your feedback. For whatever reason, this turned out to be one of the most impactful comments I've received this year.

Replies from: jsalvatier
comment by jsalvatier · 2011-08-23T16:52:41.353Z · LW(p) · GW(p)

Glad to be of service :)

comment by Alicorn · 2011-08-16T18:05:25.592Z · LW(p) · GW(p)

I have established to my satisfaction that you will not engage with the criticisms I intended to present without at least one of us putting in more effort than we want to. Good day.

Replies from: Kaj_Sotala, lukeprog, komponisto
comment by Kaj_Sotala · 2011-08-16T23:21:14.464Z · LW(p) · GW(p)

I think the downvotes to this comment are coming from people who are interpreting it to say "this discussion has convinced me that you are low status, and not worth engaging with anymore". I believe the intended meaning is (Alicorn correct me if I'm wrong) closer to "we are not communicating effectively, and synchronizing our methods of communication would take too much effort given the low importance of the disagreement", with much less (if any) implication of blame.

Replies from: Alicorn
comment by Alicorn · 2011-08-16T23:43:47.557Z · LW(p) · GW(p)

Your interpretation is correct.

comment by lukeprog · 2011-08-16T18:18:20.366Z · LW(p) · GW(p)

Huh? Each time, I quoted each of your objections separately and responded to them directly. I also updated my post twice in response to your comments.

comment by komponisto · 2011-08-16T18:15:02.642Z · LW(p) · GW(p)

criticisms I intended to present

Were there additional criticisms that you did not state already above? If so, I would appreciate at least knowing what they are (without necessarily engaging in a discussion about whether they are sound), so that I have an idea of where the strongest sources of doubt lie (in order to arrive at an appropriate level of confidence in the article's thesis, which I am predisposed to believe).

Replies from: Alicorn, lukeprog
comment by Alicorn · 2011-08-16T19:46:15.079Z · LW(p) · GW(p)

My objections, summarized from my comments and made more direct, are:

  • It is unfair to impugn someone's commitment to a moral position based on the fact that they do not attempt to prosecute violations as crimes, when the violations are not, legally, crimes.

  • There is not a simplistic "the deontological answer". To suggest that there is (or to quote someone who does and move on without disclaimer) is to be uncharitable to deontologists.

Luke's responses, summarized as I understand them:

  • His article was too long anyway and I should read the rest of the quoted author, rather than expecting that Luke would mention it or abridge his selected quotes differently if he was aware of the issues therewith and wished to disclaim them.

  • The mentioned simplistic deontology is "characteristic" of deontology, and it is reasonable to draw conclusions about it as a category therefrom.

  • Why aren't anti-abortionists trying to press murder charges against abortionists and the women who hire them?

Me:

  • The way Luke presents quotes implies that he endorses them as they stand, and it is cheating to defer responsibility for their content when they were originally presented so plainly.

  • The article snipes at deontologists, and this appears to serve no purpose (does not present neuroscience more interestingly; does not convince deontologists; does not convince consequentialists more).

  • Aren't anti-abortionists trying to make abortion illegal, just not through the silly method Luke proposes? Are there other cases of people using said silly method to make other things illegal?

Luke:

  • Yes, he does endorse his quotes, but they are complicated and he didn't want his article to be too long, so he's only responding to specific complaints about them rather than quoting more cautiously in the first place.

  • It is wrong of me to complain about his tone. (Introduction of the word "tone" is his.)

  • He "already responded to" the abortion thing. (No he didn't.)

Replies from: lukeprog
comment by lukeprog · 2011-08-16T20:12:18.597Z · LW(p) · GW(p)

Alicorn,

I'm pretty confused by how you're interpreting my words. Three examples:

ONE

What I said, direct quote:

The point you raise about 'the deontological answer' is discussed by Greene in his article. I had to do a lot of cutting to keep this article as short as it is, and it's still pretty long. I can't pre-respond to every possible objection. Perhaps you could raise the issue and allow me to respond instead of assuming I haven't considered the points you raise and am therefore worthy of your disappointment?

What you heard me say (disconnects from what I actually said, in italics):

His article was too long anyway and I should read the rest of the quoted author, rather than expecting that Luke would mention it or abridge his selected quotes differently if he was aware of the issues therewith and wished to disclaim them.

TWO

What I said, direct quote:

So now your objection is to my tone? That's only DH2 on the disagreement hierarchy. I'll take another look at my tone, but it's not much of a disagreement if we're disagreeing about tone.

What you heard me say (disconnects from what I actually said, in italics):

It is wrong of me to complain about his tone.

THREE

I'm similarly confused with the abortion thing. Here's the play-by-play as I see it above:

  1. You point out that one reason pro-lifers wouldn't press charges against a woman who has an abortion is that it's not illegal.

  2. I ask:

    Why not seek the murder charges in court so that abortions come to be considered murder? [...because of a court overruling the previous laws that make abortions not count as murder; was this the part that was unclear?] Why not seek to change the laws so that women who commit abortions will be convicted of murder, or at least of paying a doctor to commit murder?

  3. You repeat the previous point about not "charging women with crimes the scope of which does not legally apply", even though I had just moved the question to: "Why not change the laws so that the murder charge does apply to women who commit abortions (either via a court victory or new laws passing)?"

  4. I point out that I've already moved beyond the point that the 'murder' charge doesn't apply (because abortion isn't currently illegal or counted as murder) by asking instead why pro-lifers don't seek to make abortion illegal (and murder).

  5. You claim I still haven't responded to your point.

...Is one or more of us just too tired to follow a conversation or something? Outside help wanna chip in?

Replies from: komponisto, Alicorn
comment by komponisto · 2011-08-16T20:37:55.730Z · LW(p) · GW(p)

On the third matter, one problem is that you seem to display confusion about the structure of (U.S.) law:

Why not seek the murder charges in court so that abortions come to be considered murder? [...because of a court overruling the previous laws that make abortions not count as murder; was this the part that was unclear?] Why not seek to change the laws so that women who commit abortions will be convicted of murder, or at least of paying a doctor to commit murder?

Laws are not created in courts by means of attempted prosecutions being successful; they are created by a vote in a legislative body (and then sometimes struck down by courts as violating a meta-law). Currently, there is no law against abortion, so a prosecution could not be brought (not just that it wouldn't be successful). Furthermore, there is also currently a meta-law against making abortion illegal -- so it would be futile for pro-lifers to simply petition their legislatures to do so. What they would have to do is first attempt to get the meta-law reversed, and if you follow American politics at all, you will be aware that there has indeed been an active movement to do this for a number of decades now.

Having said all that, I consider your basic point to stand in light of this.

Replies from: lukeprog
comment by lukeprog · 2011-08-16T21:09:39.708Z · LW(p) · GW(p)

I'm thinking of an (overreaching) Supreme Court decision.

Replies from: komponisto
comment by komponisto · 2011-08-16T21:53:47.638Z · LW(p) · GW(p)

This response, while not uninformative about your thinking, suggests to me that you merely skimmed my comment rather than reading it carefully and integrating the detailed information it provided.

If I had to identify the source of this impression, it would be your apparent failure to recognize that the Supreme Court had been specifically referenced (albeit not by name) -- as was the fact (seemingly not as familiar to you as I would have expected) that pro-lifers have indeed been actively seeking a decision in their favor (this is the primary reason that judicial nominations are usually controversial in contemporary America).

I don't mean to be critical (I much appreciated the post), but I just hate it when people underestimate the information content of my words.

Replies from: lukeprog
comment by lukeprog · 2011-08-16T22:07:30.457Z · LW(p) · GW(p)

Having grown up a midwestern evangelical Christian, I assure you I'm familiar with the decades-long attempt to overturn Roe v. Wade, and indeed once signed a petition in support of such an overturn. What I'm saying is that overturning Roe v. Wade with a new Supreme Court decision wouldn't be the same as an even more overreaching Supreme Court decision that set a precedent for considering abortion to be murder, with those committing abortion being subject to the usual punishments for murder.

Replies from: komponisto
comment by komponisto · 2011-08-16T22:52:55.384Z · LW(p) · GW(p)

Having grown up a midwestern evangelical Christian, I assure you I'm familiar with the decades-long attempt to overturn Roe v. Wade,

That's what I would have thought! Thanks for the clarification. However, you did seem to be wondering why pro-lifers don't try to pursue their goals in court; and seeking to overturn Roe is the only way they can do that.

an even more overreaching Supreme Court decision that set a precedent for considering abortion to be murder

Well, the Supreme Court could use the "murder" rationale to reverse Roe, if it wanted to do so; and were the decision to be reconsidered in a new case, do you have any doubt that pro-life groups would file amicus briefs urging them to do just that?

Replies from: lukeprog
comment by lukeprog · 2011-08-16T23:05:01.618Z · LW(p) · GW(p)

the Supreme Court could use the "murder" rationale to reverse Roe, if it wanted to do so; and were the decision to be reconsidered in a new case, do you have any doubt that pro-life groups would file amicus briefs urging them to do just that?

Yes, I doubt they would do this, given the fact that I haven't found anyone yet who actually wants women who abort fetuses to be punished on a par with, shall we say, 'other kinds of murderers': multiple decades of imprisonment, or life imprisonment, or death.

Replies from: Davorak, komponisto
comment by Davorak · 2011-08-22T09:05:49.311Z · LW(p) · GW(p)

I have heard people talk of punishing abortion on par with other kinds of murder. This viewpoint has real potential to alienate people. It makes sense that people who hold that viewpoint and realize this are not shouting it to the world or filing court cases. Instead they judge that small changes are the best way to get what they want in the long term, and fight those intermediate battles instead of taking the issue straight on.

comment by komponisto · 2011-08-16T23:35:19.793Z · LW(p) · GW(p)

Interesting. You may be right, at least about the most mainstream groups (though the fringe would also participate, surely).

I won't trouble you further on this, since I have an attack to fend off over in Discussion. :-)

Thanks for replying to me and others.

comment by Alicorn · 2011-08-16T20:28:49.718Z · LW(p) · GW(p)

I've stated that I don't want to continue having this conversation with you. The summary in the grandparent was for komponisto.

Replies from: Davorak
comment by Davorak · 2011-08-22T08:57:46.829Z · LW(p) · GW(p)

For the people who would downvote this: would it be better if she had not responded to lukeprog's post at all? Acknowledging someone when they attempt to communicate with you is considered polite. It often serves the purpose of communicating a lack of spite and/or hard feelings even as you insist on ending the current conversation.

comment by lukeprog · 2011-08-16T18:17:41.462Z · LW(p) · GW(p)

Sure. See the two chapters - in MSv3 - immediately following Greene's article, and see Greene's response to them (in the next chapter after that). They are not easily summarized.

Replies from: Alicorn
comment by Alicorn · 2011-08-16T19:23:19.105Z · LW(p) · GW(p)

At no time was my primary point to critique your skills at summary.

comment by Arandur · 2011-08-17T12:20:35.880Z · LW(p) · GW(p)

Wow. I've been guilty of this for a while, and not realized it. That "is this action morally wrong" question really struck me.

Myself, I believe that there is an objective morality outside humanity, one that is, as Eliezer would deride the idea, "written on a stone tablet somewhere". This may be an unpopular hypothesis, but accepting it is not a prerequisite for my point. When asked about why certain actions were immoral, I, too, have reached for the "because it harms someone" explanation... an explanation which I just now see as the sin of Avoiding Your Belief's Real Weak Points.

What I really believe, upon much reflection, is that there are two overlapping, yet distinct, classes of "wrong" actions: one we might term "sins", and the other we might term "social transgressions". Social transgressions are that class of acts which are punishable by society, usually those that are harmful. Sins are that class of acts which go against this Immutable Moral Law. Examples are given below, being (in the spirit of full disclosure) the first examples I thought of, and neither the purest examples nor the most defensible, non-controversial ones.

  • Spitting on the floor of an office building is a social transgression, but not a sin.
  • Homosexuality is a sin, but not a social transgression (insofar as it is accepted by society, which is more and more every day).
  • Murder is both a sin and a social transgression.

I do not know if this is a defensible position, but I now recognize it as a clearer form of what I believe than what I had previously claimed to believe.

Replies from: MixedNuts, Vaniver, hairyfigment, nshepperd
comment by MixedNuts · 2011-08-17T15:53:46.044Z · LW(p) · GW(p)

Voted up for thinking about the problem, self-honesty, and more importantly for speaking up. (I don't quite understand whence the downvotes... just screaming "Boo!" at outgroup beliefs?) [Edit: at the time of this comment, the parent was at -5.]

It seems to me that by "sin" you just mean things that make you go "Squick!". Why do you expect that, if we found the relevant stone tablet, it wouldn't read "Spitting on the floor is wrong. Ew, tuberculosis.", nor "Maximise your score at Tetris.", but "Homosexuality is wrong."?

I'm really having trouble not snickering as I write this. I literally cannot empathise with "Homosexuality is wrong". I can sorta picture "Gay sex? Squick!", but the obvious followup is "Squick isn't a good criterion", not "Homosexuality is wrong". Also, pray tell, what (rather, whom) should genderqueers do?

Replies from: Eugine_Nier, handoflixue
comment by Eugine_Nier · 2011-08-18T03:13:40.200Z · LW(p) · GW(p)

I'm really having trouble not snickering as I write this. I literally cannot empathise with "Homosexuality is wrong".

If Arandur is correct, that makes you no different from the theist who literally can't imagine God not existing, or even anyone truly believing that God doesn't exist, and thus concludes that "atheists" are merely angry at God.

Replies from: MixedNuts, Manfred
comment by MixedNuts · 2011-08-18T12:47:20.704Z · LW(p) · GW(p)

I didn't say it was a good thing! But as Manfred points out, I can imagine it. More than just imagine it: I know that people hold such beliefs, are sincere about it, and act upon them in acceptably predictable ways. I can also imagine it being true (like, it causes strange psychological damage, or if you zoom out and look at the universe like a painting it's prettier when purely heterosexual, or unresolved sexual tension is a really important emotion, whatever) - but that doesn't put me in the same mental state as people who currently believe it; namely, it makes me fall over laughing at how deeply weird the universe is.

It does make me no different from the theist who, upon reading blog posts carefully explaining "No, we don't hate your god, we just think it's a silly idea like the tooth fairy", stammers "Buh... buh... WHY?", looks for arguments, finds they don't at all match eir arguments for theism, and walks away scratching eir head. The cure is more blog posts.

Replies from: lionhearted
comment by lionhearted (Sebastian Marshall) (lionhearted) · 2011-08-18T14:43:42.136Z · LW(p) · GW(p)

Take as a premise, "One of the key [insert suitable word choice something like: duties/responsibilities/purposes/nice-things-to-do] of being human is to carry on your ancestry and raise healthy children to serve as the next strong generation of humanity."

Or, as a less extreme version - "A mentally and physically healthy person having kids and raising them with more opportunities than they had is one of the easiest huge benefits for humanity. This is especially true if the person is particularly intelligent and thoughtful."

If you had one of those premises, you might come to the conclusion that homosexuality doesn't serve that goal.

Now me, I actually have the second ethic and do believe it, but I also have gay friends and couldn't care less who anyone is loving, fucking, cuddling with, consorting about with, or whatever. Though if I had a son who was intelligent, healthy, and gay, I'd strongly encourage him to look into other ways to reproduce and get both the joy of having children and serve humanity by creating the next line of a-bit-more-intelligent and a-bit-better-informed people. (I don't know what I'd do if I had a daughter who was gay - I'd have to do more research. I think I understand well enough how a gay man thinks sexually and in terms of family, but I don't personally know any lesbian women, so I will refrain from an opinion until knowing more.)

(Edit: I realize this isn't a mainstream view. I tend to believe people have base temperaments and pushing people against their base temperament is a bad idea, but I also think one of the chief forms of the world getting better is by healthy people having kids and raising them with better opportunities and teaching them more than they knew growing up. So I sat down and thought it through, and this is what I came up with. I doubt I'm the only person in the world that thinks this way, but I'm pretty sure I've never heard it put this way before.)

comment by Manfred · 2011-08-18T04:54:48.988Z · LW(p) · GW(p)

Not quite, since "empathize" is different from "imagine." Perhaps he even thought of this when making that word choice.

comment by handoflixue · 2011-08-17T18:22:00.503Z · LW(p) · GW(p)

laughs I have always wondered how to define gay/straight/queer from a gender-queer perspective. I tend to figure the opposite of gender-queer is gender-absent, although I could see an argument that anyone who is gender-stable qualifies as fair game. :)

Replies from: MixedNuts
comment by MixedNuts · 2011-08-17T20:30:44.947Z · LW(p) · GW(p)

Gender-stable is the opposite of gender-fluid. Genderqueer is more disputed but seems to be a big umbrella including non-gendered and null-gendered and bigender and in-between people and then some and wow this is complicated. I'd just say the opposite of genderqueer is binary gender - the set {man, woman}.

comment by Vaniver · 2011-08-17T13:53:04.437Z · LW(p) · GW(p)

Homosexuality is a sin

Any idea where this stone tablet is, so I can break it?

Replies from: Arandur
comment by Arandur · 2011-08-17T15:08:24.582Z · LW(p) · GW(p)

Reckon it's atop some mystical unassailable mountain on a windswept planet. That, or it doesn't exist. :P I'm well aware of the arguments against stone tablet morality. I had thought I'd made it clear above that this was an epiphany about my flawed mind-state, not about Actual Morality. Judging by the downvotes, I did not make this sufficiently clear.

Replies from: Vaniver
comment by Vaniver · 2011-08-17T16:06:21.561Z · LW(p) · GW(p)

What I really believe, upon much reflection, is that there are two overlapping, yet distinct, classes of "wrong" actions

the first examples I thought of

I now recognize it as a clearer form of what I believe than what I had previously claimed to believe.

I don't think this is a problem with clarity. Did you mean "believed" rather than "believe"? If you think this is a flawed mind-state rather than a defensible position, why not use "feel" instead of "belief"? In a similar vein, MixedNuts's suggestion to replace 'sin' with 'squick' seems like it might describe and communicate your mind-state more effectively.

comment by hairyfigment · 2011-08-17T17:59:45.673Z · LW(p) · GW(p)

I've been guilty of this for a while, and not realized it. That "is this action morally wrong" question really struck me.

What, specifically, were you guilty of? And how does your new formulation solve the problem? Re-reading the OP doesn't make this clear for me.

It seems like the talk of rationalization in the OP made you notice rationalization of a different kind in yourself. You would previously justify moral claims using reasoning that did not even appeal to your alleged premises. Now you associate this with Avoiding Your Belief's Real Weak Points, because until now you didn't notice the discrepancy. Postulating two different kinds of moral beliefs may solve this problem and certainly improves the situation in the sense that it allows you to give the real reasons behind some of your moral beliefs.

But it doesn't begin to address the OP.

You appear to have separated out the part of your moral approach that you don't understand yet, shoved it in a box and labelled it "sins". Now if you intend to figure out the contents of the box or prove that it has such a small effect you can ignore it, then this seems like a perfectly good method of inquiry (one that resembles Feynman's approach to quantum mechanics). But you appear to say that you still think the contents of the box come from a "Law" that exists "outside humanity", and as yet I've seen you give no reason for continuing to believe this.

comment by nshepperd · 2011-08-17T14:51:04.539Z · LW(p) · GW(p)

What if we actually found this stone tablet and it said "no, morality is maximising your score in tetris"?

Replies from: Arandur
comment by Arandur · 2011-08-17T15:11:09.706Z · LW(p) · GW(p)

Yes, I've read through Yudkowsky's post on metaethics. I'm sorry if I made the point of this post insufficiently clear; please see the... cousin... to this comment.

comment by lessdazed · 2011-08-16T21:12:46.588Z · LW(p) · GW(p)

But now, consider the crying baby dilemma from the final episode of M.A.S.H:

It happened in real life.

comment by Wei Dai (Wei_Dai) · 2011-08-16T19:58:25.534Z · LW(p) · GW(p)

I want to point out these concluding paragraphs from Greene's "The secret joke of Kant's soul":

Taking these arguments seriously, however, threatens to put us on a second slippery slope (in addition to the one leading to altruistic destitution): How far can the empirical debunking of human moral nature go? If science tells me that I love my children more than other children only because they share my genes (Hamilton, 1964), should I feel uneasy about loving them extra? If science tells me that I am nice to other people only because a disposition to be nice ultimately helped my ancestors spread their genes (Trivers, 1971), should I stop being nice to people? If I care about myself only because I am biologically programmed to carry my genes into the future, should I stop caring about myself? It seems that one who is unwilling to act on human tendencies that have amoral evolutionary causes is ultimately unwilling to be human. Where does one draw the line between correcting the nearsightedness of human moral nature and obliterating it completely?

This, I believe, is among the most fundamental moral questions we face in an age of growing scientific self-knowledge, and I will not attempt to address it here. Elsewhere I argue that consequentialist principles, while not true, provide the best available standard for public decision making and for determining which aspects of human nature it is reasonable to try to change and which ones we would be wise to leave alone (Greene, 2002; Greene & Cohen, 2004).

In Greene's 2002 Ph.D. thesis, he defends consequentialism but not very strongly:

We’re just looking for a practical guideline, not an eternal moral code that handles fantasy cases as well as real ones or that tells us which among our reasons for action are philosophically privileged.

Unfortunately, for the purpose of building FAI, we are looking for an "eternal moral code".

ETA: I'm having trouble finding where Greene addresses the issue of how consequentialism handles this "slippery slope" problem. Can anyone point me to a page number, or does anyone have independent arguments for why consequentialism is less vulnerable to "growing scientific self-knowledge" than deontology?

Replies from: lukeprog
comment by lukeprog · 2011-08-16T20:36:38.945Z · LW(p) · GW(p)

The 2004 paper with Cohen is here. I'm not sure if he addresses this issue anywhere else; perhaps he will in his 2012 book.

comment by lessdazed · 2011-08-16T18:06:34.004Z · LW(p) · GW(p)

If you believe that killing a fetus is murder, then a woman seeking an abortion pays a doctor to commit murder. Why shouldn't she be convicted of that?

I am not convinced that this post needed to introduce the added complications of legality. It adds another layer of disputed variables without being especially illuminating.

Replies from: lukeprog
comment by lukeprog · 2011-08-16T18:20:27.884Z · LW(p) · GW(p)

Agreed. This problem has already been encountered and I have updated my wording in the original post in response.

Replies from: lessdazed
comment by lessdazed · 2011-08-16T19:12:39.993Z · LW(p) · GW(p)

You did your best to maximally frame all problems as being on the reader's end during your exchange with Alicorn, during which you did not admit to making any contribution to misunderstanding. You evaded all criticisms about tone by disparaging them as unimportant, and despite the short length of your first exchange about the post (cut short by the person you annoyed) you managed to squeak in "I already responded to this in my last comment."

Now that we have moved beyond the first person's criticism, you can begin avoiding blame by redirecting it in earnest; if the post was misinterpreted by the reader (with no help from the author), those mistakes have already been corrected, in what amounts to the distant past, please see previous discussion. And the author and critic are on equal footing as erring beings, the author for having once made a mistake, (long since corrected), and the reader for not noticing the correction of the mistake.

Saying "This problem has already been encountered" and linking to another criticism of your post in which the legal angle in general is not named as a problem is an extravagant way to avoid admitting you contributed to a problem. The legal issue didn't have to be squarely addressed there for you to resort to passing off error like a hot potato. (Not "I see the problem", not "I encountered the problem", "the problem has been encountered", in the passive voice and the past perfect.)

And superfluously saying kind and social things like "thanks for the feedback" hones the passive-aggressiveness you wield and does not simply act as a pile of merits to weigh against your sins.

I'm sorry I've totally neglected your recent universal requests for positive feedback. I'm not really good with those or these things. I don't get along with Alicorn because I'm boorish. You seem to not get along with her because of a particular synergy - your defensiveness and self justifications trigger her sensitivities to being attacked and taking offense, which trigger your defensiveness...maybe you can tell me how to fix my problems, though it seems hard to correct for not knowing what to say and when. But your problem with her is a feedback cycle in which you are a participant, so cut it out and that will be the end of it!

There has to be a better way to handle perfectionism. You could have expanding circles of people to whom you submit stages of rough drafts; not everything has to emerge perfect and ready to be maximally defended. I don't know.

Good luck to us all. I hope you get lots of positive feedback soon, you deserve it.

P.S. Do not thank me for this feedback. 1) If you do, it will be passive aggressive. 2) If you do, people will simply upvote this comment.

Replies from: Will_Newsome, lukeprog
comment by Will_Newsome · 2011-08-16T19:42:17.939Z · LW(p) · GW(p)

I think there's something important to say here... Like, maybe a list of bad things that happen when you start thinking of persons as the primary bearers of justification, rather than beliefs or actions. You seem way too focused on whether or not Luke as a person is justified... like, if you could think things through from first principles, this wouldn't be how you would approach problems like this... bleh. Sorry, that's really enigmatic, but it's important to mention.

Replies from: lessdazed
comment by lessdazed · 2011-08-18T07:00:36.889Z · LW(p) · GW(p)

I think there's something important to say here

I'm not sure if this means "I have something to say to you, it is the following:" or "I think you said some things that were good to say, (and by implication, you said some things that were not good to say,) and I will now paraphrase the good part of what you said as I would have said it:"

like, if you could think things through from first principles, this wouldn't be how you would approach problems like this

Same issue as above, I can't tell if you are trying to say something to me (and if so, what) or if you are offering an alternative to have said instead of what I did say.

comment by lukeprog · 2011-08-20T17:30:51.244Z · LW(p) · GW(p)

My 'thanks for your feedback' to jsalvatier was not passive-aggressive but genuine. John's feedback genuinely helped change my approach on Less Wrong, and made me more aware of how I am appearing to others. Plausibly, he got through to me because I spent a week with John in person and have a lot of respect for him.

Replies from: lessdazed
comment by lessdazed · 2011-08-20T18:03:35.435Z · LW(p) · GW(p)

That's good.

comment by Kaj_Sotala · 2011-08-16T19:22:33.850Z · LW(p) · GW(p)

But this still doesn't answer the question. If you believe that killing a fetus is murder, then a woman seeking an abortion pays a doctor to commit murder. Why don't abortionists want to change the laws so that abortion is considered murder and a woman who has an abortion can be charged with paying a doctor to commit murder? Psychologist Robert Kurzban cites this as a classic case of moral rationalization.

IAWYC, but the obvious alternative explanation in this example is that the person in question does believe that killing a fetus is murder and that the doctor should be tried for it, but also realizes that expressing such a radical opinion would harm the cause. So he refuses to answer. Of course, the fact that he can get away with simply refusing to answer is suspicious, and there are plenty of more damning examples.

Regardless, I find this to be a great post. Though readers would also do well to remind themselves that you can't derive ought from is - just because deontological judgements would be defended by rationalizations, it wouldn't mean the judgements themselves would be wrong. (As moral judgments can't be right or wrong, only something you agree or disagree with. No XML tags in the universe, and so forth.)

Replies from: Will_Newsome, lukeprog
comment by Will_Newsome · 2011-08-16T20:24:26.675Z · LW(p) · GW(p)

As moral judgments can't be right or wrong, only something you agree or disagree with.

This is highly contentious; did you mean to state it so confidently?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-08-16T21:52:56.148Z · LW(p) · GW(p)

Yes, but see also my response to Luke.

comment by lukeprog · 2011-08-16T19:44:49.005Z · LW(p) · GW(p)

This is a bit off topic I know, but...

moral judgments can't be right or wrong, only something you agree or disagree with

I think moral judgments can be correct or incorrect if you actually define your moral terms. They might also be correct or incorrect if we define moral terms with reference to whichever meanings for moral terms pop out of a completed cognitive neuroscience and a careful analysis of the brain of the person whose moral judgments we are evaluating.

Do you disagree?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-08-16T21:51:16.373Z · LW(p) · GW(p)

I generally consider "you ought to do X" to mean "I'd prefer it if you did X", and do not think judgements of "ought" can be wrong in this sense. (Aside for the normal questions of "what does a preference mean", but I don't find those relevant in this particular situation.) I agree that there are definitions of "ought" by which moral judgements can actually be wrong.

Incidentally, since I just read your comment over at Remind Physicalists where you pointed out that upvotes (or by extension, my "great post" comment) don't convey information to you about what it was about the post that was good: I found the most value in this post in the fact that it made the general argument of "our moral arguments tend to be rationalizations" with better citations and backing than I'd previously seen. The fact that it also made the case that deontology in particular tends to be rationalization was interesting, but not as valuable.

Replies from: Vladimir_Nesov, lukeprog
comment by Vladimir_Nesov · 2011-08-16T22:20:00.342Z · LW(p) · GW(p)

I generally consider "you ought to do X" to mean "I'd prefer it if you did X", and do not think judgements of "ought" can be wrong in this sense. (Aside for the normal questions of "what does a preference mean", but I don't find those relevant in this particular situation.)

That some judgment or opinion can be changed on further reflection (and that goes for all actions; perhaps you ate the incorrect sort of cheese) motivates introducing the (more abstract) idea of correctness. Even if something is just a behavior, one can look back and rewrite the heuristics that generated it, to act differently next time. When this process itself is abstracted from the details of implementation, you get a first draft of a notion of correctness. With its help, you can avoid what you would otherwise correct and do the improved thing instead.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-08-16T23:09:21.755Z · LW(p) · GW(p)

You're right, though I'm not sure if "correctness" is the word I'd use for that, as it has undesirable connotations. Maybe something like "stable (upon reflection)".

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-08-17T02:08:58.452Z · LW(p) · GW(p)

What are the undesirable connotations of "correctness"?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-08-17T08:23:40.392Z · LW(p) · GW(p)

"Correct" is closely connected with a moral "ought", which in turn has a number of different definitions (and thus connotations) depending on who you speak with. The statement "it would be correct for Clippy to exterminate humanity and turn the planet into a paperclip factory" might be technically right if we equate "stable" and "correct", but it sure does sound odd. People who are already into the jargon might be fine with it, but it's certain to create unneeded misunderstandings with newcomers.

Also, I suspect that taking a criterion like stability under reflection and calling it correctness may act as a semantic stopsign. If we just call it stability, it's easier to ask questions like "should we require moral judgements to be stable" and "are there things other than stability that we should require". If we call it correctness, we have already framed the default hypothesis as "stability is the thing that's required".

Replies from: Wei_Dai, Vladimir_Nesov
comment by Wei Dai (Wei_Dai) · 2011-08-17T09:45:39.772Z · LW(p) · GW(p)

Now I'm confused about what your position is. What you said originally was:

As moral judgments can't be right or wrong, only something you agree or disagree with.

But if you're now saying that it makes sense to ask questions like "should we require moral judgements to be stable", that seems to imply that moral judgments can be wrong (or at least it's unclear that moral judgements can't be wrong). Because asking that question implies that you think the answer might be yes, in which case unstable moral judgments would be wrong. Am I missing something here?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-08-17T11:40:51.774Z · LW(p) · GW(p)

You're right, I was being unclear. Sorry.

When I originally said that moral judgments couldn't be right or wrong, I was defining "ought" in the common sense meaning of the word, which I believe to roughly correspond to emotivism.

When I said that we shouldn't use the word correctness to refer to stability, and that we might have various criteria for correctness, I meant "ought" or "correct" in the sense of some hypothetical goal system we may wish to give an AI.

There's some sort of a complex overlap/interaction between those two meanings in my mind, which contributed to my initial unclear usage and which prompted the mention in my original comment. Right now I'm unable to untangle my intuitions about that connection, as I hadn't realized the existence of the issue before reading your comment.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-08-17T23:37:21.645Z · LW(p) · GW(p)

When I originally said that moral judgments couldn't be right or wrong, I was defining "ought" in the common sense meaning of the word, which I believe to roughly correspond to emotivism.

Here's my argument against emotivism. First, I don't dispute that empirically most people form moral judgments from their emotional responses with little or no conscious reflection. I do dispute that this implies when they state moral judgements, those judgements do not express propositions but only emotional attitudes (and therefore can't be right or wrong).

Consider an analogy with empirical judgements. Suppose someone says "Earth is flat." Are they stating a proposition about the way the world is, or just expressing that they have a certain belief? If it's the latter, then they can't be wrong (assuming they're not deliberately lying). I think we would say that a statement like "Earth is flat" does express a proposition and not just a belief, and therefore can be wrong, even if the person stating it did so based purely on gut instinct, without any conscious deliberation.

You might argue that the analogy isn't exact, because it's clear what kind of proposition is expressed by "Earth is flat", but we don't know what kind of proposition moral judgements could be expressing, nor could we find out by asking the people who are stating those moral judgements. I would answer that it's actually not obvious what "Earth is flat" means, given that the true ontology of the world is probably something like Tegmark's Level 4 multiverse with its infinite copies of both round and flat Earths. Certainly the person saying "Earth is flat" couldn't tell you exactly what proposition they are stating. I could also bring up other examples of statements whose meanings are unclear, which we nevertheless do not think "can't be right or wrong", such as "UDT is closer to the correct decision theory than CDT is" or "given what we know about computational complexity, we should bet on P!=NP".

(To be clear, I think it may still turn out to be the case that moral judgments can't be said to mean anything, and are mere expressions of emotional attitude (or, more generally, brain output). I just don't see how anyone can state that confidently at this point.)

Right now I'm unable to untangle my intuitions about that connection, as I hadn't realized the existence of the issue before reading your comment.

I'd be interested in your thoughts once you've untangled them.

Replies from: Kaj_Sotala, Vladimir_Nesov
comment by Kaj_Sotala · 2011-08-18T13:45:42.092Z · LW(p) · GW(p)

As far as I can tell, in this comment you present an analogy between moral judgements and empirical judgements. You then provide arguments against a specific claim saying "these two situations don't share a deep cause". But you don't seem to have provided arguments for the judgements sharing a deep cause in the first place. It seems like a surface analogy to me.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-08-18T17:50:10.123Z · LW(p) · GW(p)

Perhaps I should have said "reason for skepticism" instead of "argument". Let me put it this way: what reasons do you have for thinking that moral judgments can't be right or wrong, and have you checked whether those reasons don't apply equally to empirical judgments?

(Note this is the same sort of "reason for skepticism" that I expressed in Boredom vs. Scope Insensitivity for example.)

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-08-20T19:18:00.396Z · LW(p) · GW(p)

Occam's Razor, I suppose. Something roughly like emotivism seems like a wholly adequate explanation of what moral judgements are, both from a psychological and evolutionary point of view. I just don't see any need to presume that moral judgements would be anything else, nor do I know what else they could be. From a decision-theoretical perspective, too, preferences (in the form of utility functions) are merely what the organism wants, and are simply taken as givens.

On the other hand, empirical judgements clearly do need to be evaluated for their correctness, if they are to be useful in achieving an organism's preferences and/or survival.

comment by Vladimir_Nesov · 2011-08-17T23:47:32.994Z · LW(p) · GW(p)

Consider an analogy with empirical judgements. Suppose someone says "Earth is flat." Are they stating a proposition about the way the world is, or just expressing that they have a certain belief? If it's the latter, then they can't be wrong (assuming they're not deliberately lying).

They can be wrong if they should on reflection change this belief.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2011-08-18T00:06:20.717Z · LW(p) · GW(p)

Nesov, I'm taking emotivism to be the theory that moral judgments are just expressions of current emotional attitude, and therefore can't be wrong, even if on reflection one would change one's emotional attitude. And I'm arguing against that theory.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-08-18T00:26:50.667Z · LW(p) · GW(p)

Ah, I see, that was a stupid misinterpretation on my part.

comment by Vladimir_Nesov · 2011-08-17T08:53:53.148Z · LW(p) · GW(p)

I was responding to a slightly different situation: you suggested that sometimes, considerations of "correctness" or "right/wrong" don't apply. I pointed out that we can get a sketch of these notions for most things quite easily. This sketch of "correctness" is in no way intended as something taken to be the accurate principle with unlimited normative power. The question of not drowning the normative notions (in more shaky opinions) is distinct from the question of whether there are any normative notions to drown to begin with.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2011-08-17T11:45:49.357Z · LW(p) · GW(p)

I think I agree with what you're saying, but I'm not entirely sure whether I'm interpreting you correctly or whether you're being sufficiently vague that I'm falling prey to the double illusion of transparency. Could you reformulate that?

comment by lukeprog · 2011-08-16T22:04:02.430Z · LW(p) · GW(p)

Thanks for the detail!

comment by Will_Newsome · 2011-08-16T19:20:14.062Z · LW(p) · GW(p)

I'm having trouble with this post.

First I was like wha because I didn't see a clear way for a judgment to be a rationalization. It took me awhile to figure out what was meant. If anyone else happens to be similarly confused, Greene's explanation is: "Deontology, then, is a kind of moral confabulation. We have strong feelings that tell us in clear and uncertain terms that some things simply cannot be done and that other things simply must be done. But it is not obvious how to make sense of these feelings, and so we, with the help of some especially creative philosophers, make up a rationally appealing story: There are these things called 'rights' which people have, and when someone has a right you can’t do anything that would take it away."

The emphasis on common folk strikes me as unfortunate in itself. Such focus makes me wary; equivocation becomes too easy, as do apparent victories. Even when data about common folk isn't used for propaganda the mind still treats it as an example of stupidity, any reversal of which gets bonus points.

But the equivocation is my real problem. I dislike the terminology and think it is insidious. "Utilitarian" and "deontological" modules? "Deontological" judgment? The connection between the folk morality of the common man and the deontologies of the philosophers is not well-made in this post; hinting that the same neurological processes could perhaps lead to both, based on a few studies, just isn't enough to justify the provocative terminology. Indeed, the key link is basically skipped over:

First, it could be that both kinds of moral judgment are generally 'cognitive', as Kohlberg’s theories suggest (Kohlberg, 1971). At the other extreme, it could be that both kinds of moral judgment are primarily emotional, as Haidt’s view suggests (Haidt, 2001). Then there is the historical stereotype, according to which consequentialism is more emotional (emerging from the 'sentimentalist' tradition of David Hume (1740) and Adam Smith (1759)) while deontology is more 'cognitive' [including the Kantian 'rationalist' tradition: see Kant (1785)]. Finally, there is the view for which I will argue, that deontology is more emotionally driven while consequentialism is more 'cognitive.'

We have already seen the neuroscientific evidence in favor of Greene's view. Now, let us turn to further evidence from the work of Jon Haidt.

I do not see how the evidence supports Greene's view. It can be argued that it does, but the obvious arguments do not seem particularly strong. I do not find the relevant parts of Greene's "The Secret Joke of Kant’s Soul" very persuasive---indeed I find them mildly anti-persuasive. ('I'm sure that the proponents of the philosophical position I am tarring with low status associations would disagree with me, but you see, religious people would act similarly and would also be wrong' is a really obnoxious approach to conceptual gardening.)

Individuals who are (1) high in "need for cognition" and low in "faith in intuition", or (2) score well on the Cognitive Reflection Test, or (3) have unusually high working memory capacity... all give more utilitarian judgments.

I would bet that most deontologist philosophers fit those specifications, especially the most influential ones.

The experiments were done over very short timescales. Philosophers think over very long timescales. It's not clear to what extent data from the former can tell us about the reasons of the latter.

bla bla words bla equivocation something bla too tired to write in a way that humans can understand, organizing points too difficult, stupid signalling constantly so as not to tread on toes. bla bla complaints about terminology. bla meta level stuff about being cautious, principle of charity, bla. insert some placating thing or another.

ETA: TLDR: This post seems to implicitly bully a half-straw man and I don't see what it's supposed to teach us. Luke, might you explain your motivations a little more?

Replies from: lukeprog, Nisan
comment by lukeprog · 2011-08-16T19:37:24.451Z · LW(p) · GW(p)

The emphasis on common folk strikes me as unfortunate in itself. Such focus makes me wary; equivocation becomes too easy, as do apparent victories.

Note that philosophers usually have the same judgments as 'common folk' on trolley problems (Fischer & Ravizza, 1992). Also see this post from Eric Schwitzgebel. He suggests that philosophers are actually more likely to rationalize because (1) they have more powerful tools for rationalization, (2) rationalization for them has a broader field of play (via tossing more of morality into doubt), and (3) they have more psychological occasion for rationalization (by nurturing the tendency to reflect on principles rather than simply take things for granted).

In one experiment, Schwitzgebel found that "philosophers, more than other professors and more than non-academics, tended to endorse moral principles in labile ways to match up with psychologically manipulated intuitions about particular cases." In another study he found that "professional ethicists, more than professors in other fields, seemed to exhibit self-congratulatory rationalization in their normative attitudes about replying to emails from students."

But the equivocation is my real problem. I dislike the terminology and think it is insidious. "Utilitarian" and "deontological" modules? "Deontological" judgment? The connection between the folk morality of the common man and the deontologies of the philosophers is not well-made in this post; hinting that the same neurological processes could perhaps lead to both, based on a few studies, just isn't enough to justify the provocative terminology.

Did you see the bit where Greene explains in more detail what he means by these terms? I quoted some of it here.

I do not find the relevant parts of Greene's "The Secret Joke of Kant’s Soul" very persuasive

If you have time, I'd like to hear more about this. Were there, for example, methodological problems with the studies linking deontological-style judgments to emotional processing, or with the studies linking utilitarian judgments to more 'cognitive' kinds of mental processing?

Replies from: Will_Newsome
comment by Will_Newsome · 2011-08-16T20:08:28.195Z · LW(p) · GW(p)

Note that philosophers usually have the same judgments as 'common folk' on trolley problems (Fischer & Ravizza, 1992).

'Course, but mere correlation of binary judgments tells us little about the similarity of causal mechanisms that lead to their judgments. We should expect philosophers to have more reasons, and more disjunctive ones. Even overlap of reasons doesn't necessarily give us license to imply that if deontologist philosophers weren't biased in the same way as common folk are then they wouldn't be deontologists; we must be careful with our connotations. True beliefs have many disjunctive supporting reasons, and it would be unwise to presuppose that parsimony is on the side of deontology being 'true' or 'false' such that finding a single reason for or against it substantially changes the balance. If you want to believe true things then your wanting to believe something becomes correlated with its truth... "rationalization" is complex and in some ways an essential part of rationality.

All that said, Schwitzgebel's experiment does seem to indicate commonplace 'bad' rationalization. (ETA: I need to look closer at effect sizes, prestige of philosophers, etc, to get a better sense of this though.)

Did you see the bit where Greene explains in more detail what he means by these terms?

Yeah, and I see their logic and appeal; still, the equivocations seem to be unnecessary and distracting. (It would've been much less contentious to use less provocative terms to describe the research and then separately follow that up with research like Schwitzgebel's; this would allow readers to have more precise models while also minimizing distraction.) If this were anywhere except Less Wrong I'd think it was meh, but here we should perhaps make sure to correct errors of conceptualization like that. This has worked in the past. That said, it would have been more work for you, which is non-trivial. Furthermore I am known to be much more paranoid than most about these kinds of things. I'd argue that that's a good thing, but, meh.

Were there, for example, methodological problems with the studies linking deontological-style judgments to emotional processing, or with the studies linking utilitarian judgments to more 'cognitive' kinds of mental processing?

Neither, the "relevant parts" I was speaking of were the parts where he argued that Kant and other philosophers were falling to the same errors as the members of the studies. I still find his arguments to be weak; e.g. the section Missing the Deontological Point struck me as anti-persuasive. However Schwitzgebel's experiment makes up for Greene's lack of argument. Are there any meta-studies of that nature? (Presumably not, especially as that experiment seems to have been done in the last year.)

Replies from: lukeprog
comment by lukeprog · 2011-08-16T20:29:40.882Z · LW(p) · GW(p)

I still find his arguments to be weak; e.g. the section Missing the Deontological Point struck me as anti-persuasive. However, Schwitzgebel's experiment makes up for Greene's lack of argument. Are there any meta-studies of that nature?

Sure. There's Weinberg et al.:

Recent experimental philosophy arguments have raised trouble for philosophers’ reliance on armchair intuitions. One popular line of response has been the expertise defense: philosophers are highly-trained experts, whereas the subjects in the experimental philosophy studies have generally been ordinary undergraduates, and so there’s no reason to think philosophers will make the same mistakes... We consider three promising hypotheses concerning what philosophical expertise might consist in: (i) better conceptual schemata; (ii) mastery of entrenched theories; and (iii) general practical know-how with the entertaining of hypotheticals. On inspection, none seem to provide us with good reason to endorse this key empirical premise of the expertise defense.

comment by Nisan · 2011-08-18T17:43:37.384Z · LW(p) · GW(p)

The connection between the folk morality of the common man and the deontologies of the philosophers is not well-made in this post; hinting that the same neurological processes could perhaps lead to both, based on a few studies, just isn't enough to justify the provocative terminology.

To make this criticism of Greene more concrete, I will point out that a "consequentialist" judgment, in Greene's terminology, is one in which considerations of outcomes have trumped or overpowered or won in spite of other considerations; and a "deontological" judgment is one in which other considerations have won in spite of the outcomes. An actual consequentialist theory will always output "consequentialist" judgments, but a deontological theory will sometimes output "deontological" judgments and sometimes output "consequentialist" judgments.

So one can sort of see where Greene's terminology is coming from, but in the context of the eternal debate between consequentialists and deontologists it would be uncharitable to imply that deontologists always give "deontological" judgments.

comment by Mitchell_Porter · 2011-08-18T07:57:44.506Z · LW(p) · GW(p)

My guess is that the appropriate way to dissolve the conflict between utilitarian and deontological moral philosophy is to see deontological rules as heuristics. I think we could design an experiment in which utilitarians get emotional and inconsistent, and deontologists come off as the sober thinkers, just by making it a situation where adoption of a simple consistent heuristic is superior to the attempt to weigh up unknown probabilities and unknown bads.

Replies from: army1987, None
comment by A1987dM (army1987) · 2012-05-19T08:51:28.409Z · LW(p) · GW(p)

(The example I've seen is “I wear the safety belt whenever I drive a car, because unthinkingly wearing a safety belt is even less expensive than thinking whether or not to wear it”.)
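
To make the comparison concrete, here is a minimal sketch with entirely hypothetical numbers; the point is purely structural: once deliberation itself carries a small per-trip cost, the blanket rule can beat case-by-case calculation even when the calculation would sometimes say "skip it".

```python
# Entirely hypothetical per-trip costs, in arbitrary disutility units,
# chosen only to illustrate the structure of the heuristic argument.
COST_WEARING = 1.0         # minor discomfort of buckling up
COST_DECIDING = 2.0        # attention spent re-making the choice each trip
P_CRASH = 1e-4             # chance a given trip ends in a serious crash
COST_UNBELTED_CRASH = 1e6  # disutility of crashing without a belt

# Rule-follower: never deliberates, always wears the belt.
rule_cost = COST_WEARING

# Case-by-case calculator: pays the deliberation cost every trip, then
# picks whichever option is cheaper in expectation.
calc_cost = COST_DECIDING + min(COST_WEARING, P_CRASH * COST_UNBELTED_CRASH)

print(rule_cost < calc_cost)  # True: 1.0 vs 3.0, the simple rule dominates
```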

comment by [deleted] · 2011-08-19T18:27:23.708Z · LW(p) · GW(p)

Do you mean consistent in the sense of choosing by a fixed criterion, like "no torturing people", or choosing by a fixed criterion that is not exposed to certain losses in terms of the agent's preferences given an adversary with identical knowledge, like "behavior positively correlated with that of an agent who knows and shares your preferences and is able to conditionalize on evidence and decides to maximize its updateless expectation"?

If the latter, as I understood your comment upon first reading, that seems to be contradicted by the claims of Eli's circular altruism post [? · GW], though he provides no citations. Also, the post says nothing explicitly about whether people who call themselves utilitarians are better in practice at shutting up and multiplying, though I don't see how having no verbal beliefs such as "you can't put a price on life" would make one more likely to act as though human lives are incomparably valuable.

comment by Will_Newsome · 2011-08-16T17:09:42.222Z · LW(p) · GW(p)

Obligatory link.

Replies from: lukeprog
comment by lukeprog · 2011-08-16T17:18:38.008Z · LW(p) · GW(p)

Added, thanks.

comment by gwern · 2012-01-27T03:39:53.600Z · LW(p) · GW(p)

Possibly relevant; "The Price of Your Soul: Neural Evidence for the Non-Utilitarian Representation of Sacred Values ":

Deontological theory suggests that sacred values are processed based on rights and wrongs irrespective of outcomes, while utilitarian theory suggests they are processed based on costs and benefits of potential outcomes, but which mode of processing an individual naturally uses is unknown. The study of decisions over sacred values is difficult because outcomes cannot typically be realized in a laboratory, and hence little is known about the neural representation and processing of sacred values. We utilized an experimental paradigm that used integrity as a proxy for sacredness and which paid real money to induce individuals to sell their personal values. Using functional magnetic resonance imaging (fMRI), we found that values that people refused to sell (sacred values) were associated with increased activity in the left temporoparietal junction and ventrolateral prefrontal cortex, regions previously associated with semantic rule retrieval. This suggests that sacred values affect behavior through the retrieval and processing of deontic rules and not through a utilitarian evaluation of costs and benefits.

Apparently there was a LessWronger involved:

Using the Becker-DeGroot-Marschak (BDM) auction mechanism, participants were instructed to specify an “ask” price for each of the statements they chose in the active phase (32). The price could range from $1 to $100. The BDM auction is generally accepted to be an incentive-compatible mechanism to reveal an individual’s willingness to pay for something. Here, we use it as a willingness to accept. Submitting an ask price of $1, for example, means that the individual is willing to accept any amount of money and is assured of receiving some amount, which, on average, would be $50. ...As noted above, signing a document does not bind one to the action that one is signing. It is therefore somewhat surprising that most people didn’t sell all of their choices. The fact that participants took money for some items and not others suggests that they were adequately motivated to express their preferences through their choices....One participant was excluded from the analysis because the participant submitted bids of $1 for all items, and thus no contrasts could be formed.
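
For readers unfamiliar with the mechanism, here is a minimal sketch of one BDM round in the willingness-to-accept form described above. The uniform offer distribution is my assumption (standard for BDM, but not stated in the excerpt); with it, an ask of $1 reproduces the roughly $50 average payment the authors mention.

```python
import random

def bdm_round(ask, low=1.0, high=100.0):
    """One round of a Becker-DeGroot-Marschak auction (willingness-to-accept).

    A random offer is drawn; if it meets or exceeds the participant's ask,
    the sale goes through at the offer price rather than the ask, which is
    what makes stating one's true reservation price the best strategy.
    Returns the payment received (0.0 if no sale).
    """
    offer = random.uniform(low, high)  # assumed uniform offer distribution
    return offer if offer >= ask else 0.0

# An ask of $1 always sells, for roughly $50 on average.
payments = [bdm_round(ask=1.0) for _ in range(100_000)]
print(sum(payments) / len(payments))  # about 50.5
```

Raising the ask trades a lower sale probability for a higher price conditional on selling, so the ask a participant submits directly measures how much money it takes to buy out the value.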

comment by Eugine_Nier · 2011-08-17T08:03:05.303Z · LW(p) · GW(p)

I don't see how your conclusion follows from your data. I could just as easily use the same model to argue that our morality is deontological and that it is the utilitarian judgements that are mere moral rationalizations.

I have observed that utilitarians will attempt to fudge the numbers to make the utility calculations come out the way they "should", inventing large amounts of anti-epistemology in the process (see the current debate on race and intelligence for an example of this process in action). A better approach might be to admit that our morals are partially deontological and that certain things are wrong no matter how the calculations come out.

Replies from: Vladimir_M
comment by Vladimir_M · 2011-08-17T17:38:23.974Z · LW(p) · GW(p)

I have observed that utilitarians will attempt to fudge the numbers to make the utility calculations come out the way they "should", inventing large amounts of anti-epistemology in the process

Welfare economics is the clearest example. It's the closest thing that exists to a rigorous formalization of utilitarianism. Yet economists of all ideological stripes have no problem at all coming up with welfare-economic arguments in favor of their positions, whatever they are -- and despite the contradictions, all these arguments are typically plausible-sounding enough to get published and win a group of adherents.

(Also, unsurprisingly, as much as economists bitterly disagree over these ideologically charged theories, they all just happen to imply that learned economists like them should be put in charge to manage things with their wisdom and expertise. Small wonder that Austrian economists, who are pretty much the only ones who call bullshit on all this, are so reviled by the mainstream.)

comment by shminux · 2011-08-16T20:54:09.923Z · LW(p) · GW(p)

While I can certainly say that Greene's assertion that deontological ethics is shrouded in rationalizations (is this a fair summary?) rings true to me, I'd reserve judgement until I see a blind study showing that utilitarian or pragmatic ethics can be experimentally distinguished from the deontological one based on some unambiguous rationalization quotient.

I suspect that if we dig deep enough, we find Kant's deontological moral imperatives in any ethics. The rules themselves certainly depend on the ethical system. For example, EY clearly believes in a radical version of "Life, liberty and the pursuit of happiness", which he calls the Fun Theory. As a deontological model, it must be shrouded in rationalizations, according to Greene.

If true, I wonder what those rationalizations are.

comment by lukeprog · 2011-08-30T07:25:33.060Z · LW(p) · GW(p)

Related: This is a good overview of recent papers on the science of morality.

comment by gwern · 2011-08-16T18:31:43.004Z · LW(p) · GW(p)

Studies of individual differences also seem to support the dual-process theory. Individuals who are (1) high in "need for cognition" and low in "faith in intuition", or (2) score well on the Cognitive Reflection Test, or (3) have unusually high working memory capacity... all give more utilitarian judgments.10

This sounds like it would predict gender differences in responses. I'm guessing such utilitarian vs deontological differences are observed in the dilemmas?

Replies from: lukeprog
comment by lukeprog · 2011-08-16T18:40:09.190Z · LW(p) · GW(p)

The researchers cited here don't comment on gender differences in particular, except see footnote 1 in Bartels (2008). But see this study and this one and this one.

Replies from: gwern
comment by gwern · 2011-08-16T18:46:31.716Z · LW(p) · GW(p)

Researchers have proposed that females and males differ in the structure of their moral attitudes, such that females tend to adopt care-based moral evaluations and males tend to adopt justice-based moral evaluations. The existence of these gender differences remains a controversial issue, as behavioral studies have reported mixed findings. The current study investigated the neural correlates of moral sensitivity in females and males, to test the hypothesis that females would show increased activity in brain regions associated with care-based processing (posterior and anterior cingulate, anterior insula) relative to males when evaluating moral stimuli, and males would show increased activity in regions associated with justice-based processing (superior temporal sulcus) relative to females. Twenty-eight participants (14 females) were scanned using fMRI while viewing unpleasant pictures, half of which depicted moral violations, and rated each picture on the degree of moral violation that they judged to be present. As predicted, females showed a stronger modulatory relationship between posterior cingulate and insula activity during picture viewing and subsequent moral ratings relative to males. Males showed a stronger modulatory relationship between inferior parietal activity and moral ratings relative to females. These results are suggestive of gender differences in strategies utilized in moral appraisals.

Interesting; I'm a little troubled by the citations, though, especially the 2000 meta-analysis that apparently didn't find any overall difference. The Skoe counter-citation doesn't sound very convincing - just one survey.

EDIT: Third link is a little beyond me. Second study link surprises me - no effect from education or religion?!

Whereas we found no differences between the two genders in utilitarian responses to non-moral dilemmas and to impersonal moral dilemmas, men gave significantly more utilitarian answers to personal moral (PM) dilemmas (i.e., those courses of action whose endorsement involves highly emotional decisions). Cultural factors such as education and religion had no effect on performance in the moral judgment task.

comment by lukeprog · 2013-11-14T09:11:26.713Z · LW(p) · GW(p)

Update: Joshua Greene & company have published a reply to Kahane et al. (2011).

Also, Greene's Moral Tribes is quite good.

comment by fortyeridania · 2012-04-30T13:39:09.522Z · LW(p) · GW(p)

You quote Greene as writing, "We have strong feelings that tell us in clear and uncertain terms that some things simply cannot be done and that other things simply must be done."

Shouldn't uncertain read certain?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-30T15:32:28.243Z · LW(p) · GW(p)

Well, I would agree with the line as quoted as well, come to that.