Not By Empathy Alone
post by gwern · 2011-10-05T00:36:43.766Z · LW · GW · Legacy · 109 comments

Contents
- 1 Introduction
- 2 Is Empathy Necessary for Moral Judgment?
- 3 Is Empathy Necessary for Moral Development?
- 4 Is Empathy Necessary for Moral Conduct?
- 5 Should We Cultivate an Empathy-Based Morality?
The following are extracts from the paper “Is Empathy Necessary For Morality?” (philpapers) by Jesse Prinz (WP) of CUNY; recently linked in a David Brooks New York Times column, “The Limits of Empathy”:
1 Introduction
Not only is there little evidence for the claim that empathy is necessary, there is also reason to think empathy can interfere with the ends of morality. A capacity for empathy might make us better people, but placing empathy at the center of our moral lives may be ill‐advised. That is not to say that morality shouldn’t centrally involve emotions. I think emotions are essential for moral judgment and moral motivation (Prinz, 2007)1. It’s just that empathetic emotions are not ideally suited for these jobs.
2 Is Empathy Necessary for Moral Judgment?
…For example, one might judge that charity is good, or that wife beating is bad. According to the view under consideration these judgments depend on empathetic responses: we empathize with the positive feelings experienced by the recipients of charity and with the negative feelings of those who fall prey to domestic violence. It is these empathetic responses that allow one to see these actions as good and bad respectively.
…[but] consider cases where deontological considerations overrule utilitarian principles. For example, one might judge that it is bad to kill an innocent person even if his vital organs could be used to save five others who desperately need transplants. Here, arguably, we feel cumulatively more empathy for the five people in need than for the one healthy person, but our moral judgment does not track that empathetic response. Second, consider the moral judgments one might issue from behind a Rawlsian veil of ignorance; you might decide it’s good to distribute resources to the needy because you might be needy. Here there is no empathy for the needy, but rather concern for the self. Third, while on the topic of the self, consider cases in which you yourself are the victim of a moral transgression. You judge that you’ve been wronged, but you don’t thereby empathize with yourself, whatever that would mean. Fourth, consider cases in which there is no salient victim. One can judge that it would be wrong to evade taxes or steal from a department store, for instance, without dwelling first on the suffering of those who would be harmed. Fifth, there are victimless transgressions, such as necrophilia, consensual sibling incest, destruction of (unpopulated) places in the environment, or desecration of a grave of someone who has no surviving relative. Empathy makes no sense in these cases. As a descriptive claim it seems wrong to suppose that empathy is a precondition for moral judgment.
…It might be objected that empathy is needed to construe an action as greedy, but I find that implausible. I can recognize an action as greedy without putting myself in someone else's shoes. It would be cognitively cumbersome to route through a simulation of another person every time I classify some behavior as greedy (or thieving, or murderous, or incestuous, or nepotistic, or indecent, and so on, for everything I am apt to condemn as morally bad). Morally significant actions can be recognized without empathy, even if those actions are ones that involve harm. We need not reflect on the harm to see that the action is bad. Perhaps you are delighted that I ate the last cookie. I recognize that, empathetically, and I still feel guilty; I still think I should have offered the cookie to you.
If this is right, then empathy is not a necessary precursor to moral judgment. I emphasize this point, because it is sometimes presumed that sentimentalist theories of moral judgment must be empathy-based theories. The tradition that includes David Hume and Adam Smith has given empathy a central place. It is even sometimes suggested that empathy is the fundamental affective response involved in moral judgment. That is a mistake. The emotions just mentioned have been demonstrated to play a major part in morality. One can advance a sentimentalist theory based on such emotions as anger and guilt, while giving only marginal import to empathy. Empathy may help us come to the conclusion that a particular action is wrong on a particular occasion, but it hardly seems necessary for that purpose.
3 Is Empathy Necessary for Moral Development?
…The emergence of empathy has been extensively investigated, and some developmentalists speculate that empathy plays an essential role in developing a sense of morality (Hoffman, 2000)2. Conceptually, the idea has much appeal.
…It’s somewhat difficult to find evidence for developmental hypotheses of this kind. Most studies of normally developing children measure relationships between empathy and morally relevant behaviors such as aggression and helping behaviors (Eisenberg et al., 2006)3. But what’s really at issue here is whether empathy gives rise to the capacity to make moral judgments. Studies do show that children engage in empathetic reasoning when making moral judgments (Eisenberg‐Berg, 1979)4, but they do not show that empathy is essential to moral judgment.
…To assess the necessity thesis, researchers must consider pathological populations. They must identify people who lack empathy and see whether they lack moral competence as a result. Blair (1995)5 takes on precisely this challenge. His study investigates morality in psychopaths. Lack of empathy is a diagnostic criterion for psychopathy (Hare, 1991), and Blair shows that psychopaths also suffer from a profound deficit in moral competence. In particular, they do not draw a distinction between moral rules (e.g., don’t hit people) and conventional rules (e.g., rules about what clothing to wear in school). Blair concludes that psychopaths’ failure to draw this distinction indicates that they do not comprehend the essence of moral rules. When they say that something is “morally wrong,” they don’t really understand what these words mean. Blair speculates that this failure is a direct result of the empathy deficit.
…One of the diagnostic criteria for psychopathy is "criminal versatility," which suggests that psychopathy does not stem from a specific deficit in violence inhibition, as Blair's model suggests. Third, there is evidence that normally developing children draw the moral/conventional distinction well before they associate empathy with morality. Smetana and Braeges (1990)6 show sensitivity to the distinction before the third birthday, and Eisenberg-Berg (1979) shows that empathy does not enter actively into moral reasoning until high school. Fourth, there are other explanations of why psychopaths have deficits in both empathy and moral competence: these two deficits may arise from a third cause. In particular, psychopaths suffer from a more general deficit in moral emotions. "Shallow Affect" is one of the diagnostic criteria for psychopathy…. Psychopaths are also poor at recognizing emotions, especially fear and sadness, and recognition deficits are known to be correlated with deficits in emotional experience (Blair et al., 2002)7. These affective abnormalities could explain both the low levels of empathy in psychopaths and the lack of moral competence. Empathy requires a disposition to experience emotions appropriate for another person, and a person with shallow affect and poor emotional recognition will have a diminished capacity for empathy as a result. The emotion deficit will also make an individual comparatively insensitive to common methods of moral education: they will be relatively indifferent to punishment, because they have low levels of fear, and they will be unmoved by love withdrawal, because they have low levels of sadness. They will also have a diminished capacity for emotions like guilt, which seem to have sadness as a component (Prinz, 2004)8, and moral anger. So psychopaths will lack emotions that facilitate moral education as well as the emotions that constitute moral judgments on the model that I outlined in the previous section. Therefore, the deficit in moral competence can be explained without appeal to the empathy deficit.
4 Is Empathy Necessary for Moral Conduct?
…Still it might be conjectured that empathy is necessary in another way: it might be necessary for moral motivation. Let’s suppose someone arrives at the judgment that it would be good to give charity. It might be possible to make such a judgment without feeling motivated to act on it. Perhaps empathy with the recipients of charity is what converts moral judgment into moral conduct. Or suppose someone comes to think it’s bad to abuse his spouse. Without empathy for her, he might continue to be abusive.
…[but] Anger promotes aggression, disgust promotes withdrawal, guilt promotes reparation, and shame promotes self-concealment. More generally, these emotions are negatively valenced, and negative emotions are things we work to avoid (Prinz, 2004). If we anticipate that an action will make us feel guilty, we will be thereby inclined to avoid that action. The guilt-prone would-be wife beater might learn to overcome his abusive rages. It follows from this that moral judgments, which contain emotions, are intrinsically motivating states. A person who judges that stealing is wrong, for example, will be motivated to resist the urge to steal, even when it would be easy and lucrative. Such a person will also be motivated to prevent others from stealing; for example, those who think stealing is wrong might report a shoplifter to a store clerk even though this intervention carries some risk and no direct reward. And this is just half the story. I have been focusing on disapprobation. There may also be a suite of positive emotions associated with moral approbation. Good behavior by others elicits admiration and gratitude, as remarked above. And the person who engages in good behavior feels pride or gratification. Anticipating these good feelings can lead to good actions. On this view, moral judgments have plenty of motivational impact in the absence of empathy.
…[The evidence] that empathy leads to action is actually quite weak. …In an extensive meta-analysis, Underwood and Moore (1982)9 show that there is a positive correlation between emotion attribution and prosocial behavior in children, but no correlation between empathy and prosocial behavior. Indeed, a number of the studies show negative correlations between empathy and altruism. Critics have worried that the studies contained in this meta-analysis are flawed because they measure empathy by self-report (though measures include non-verbal self-report, such as asking children to point out a facial expression corresponding to how they feel). In lieu of self-report, Eisenberg et al. (1989)10 used observers' reports and found that prosocial behavior is positively correlated with "concerned attention" in children. A child who wrinkles her brow when watching someone in need is more likely to help. But no correlation was found for "shared emotion."…There are modest correlations in adults between prosocial behavior and shared sadness (Eisenberg et al., 1989). Adults who looked sad while watching a film about a woman whose children had been in a car wreck were slightly more likely to offer to help that woman with yard work when, later in the experiment, they read a letter from her requesting help. But this study does not establish that empathy, in general, relates to altruism, because it is restricted to sadness. And curiously, there is no correlation between expressions of sadness while reading the letter and the decision to help, which is made just afterwards….A meta-analysis shows that empathy is only weakly correlated with prosocial behavior (Neuberg et al., 1997)11. More strikingly, the correlation appears only when there is little cost. If someone has to do something as easy as crossing a street to help someone in need, they are not especially likely to, and those who are empathetic show no greater tendency to help in such circumstances than those who are not.
…The meager effects of empathy are greatly overshadowed by other emotions. Consider, for example, positive affect. Above, I suggested that feelings of approbation are positive and that positive emotions may help to explain why people do good things. Empirical support for this hypothesis comes from the large literature on positive emotions and helping (Carlson et al., 1988)12. For example, Isen and Levin (1972)13 induced positive affect by planting a dime in a neighborhood phone booth. They then watched to see whether the person who found the dime would help a passerby who dropped some papers. Among those who found the dime, 87.5% helped. Among those in the control condition, where there was no dime planted in the phone booth, only 4% helped. Other studies have not always shown such a large effect size, but they do tend to confirm that a small dose of happiness seems to promote considerable altruism. This is often true even when the altruism is costly. For example, Weyant (1978)14 found that people who are made to feel good by being given an easy test to solve are almost twice as likely, when compared to neutral controls, to volunteer for a charity that requires going door to door collecting donations. Happiness seems to make us work for people in need. This conclusion is embarrassing for those who think empathy is crucial for altruism, because vicarious distress presumably has a negative correlation with happiness.
…[And on the flip side] Lerner et al. (1998)15 showed subjects emotion‐inducing film clips and then probed their attitudes towards punishment on unrelated vignettes. Subjects who watched anger inducing films recommended harsher punishments than those in the control condition. Studies using economic games have shown that, when angry, people are even willing to pay significant costs to punish those who fail to cooperate (Fehr and Gächter, 2002)16. This contrasts strikingly with empathy, which does not motivate moral behavior when there are significant costs. Guilt is also a great motivator. In a study by Carlsmith and Gross (1969)17 subjects were asked to make some fundraising phone calls for a charity organization after they administered shocks to an innocent person. These subjects made more than three times as many fundraising calls as the subjects in a control condition where no shocks were administered.
5 Should We Cultivate an Empathy-Based Morality?
…empathy may lead to preferential treatment. Batson et al. (1995)18 presented subjects with a vignette about a woman, Sheri, awaiting medical treatment, and then asked them if they wanted to move Sheri to the top of the waitlist, above others who were more needy. In the control condition, the majority declined to move her up the list, but in a condition where they were encouraged to empathize with Sheri, they overwhelmingly elected to move her up at the expense of those in greater need.
…Third, empathy may be subject to unfortunate biases, including cuteness effects. Batson et al. (2005)19 found that college students were more likely to feel empathetic concern for children, dogs, and puppies than for their own peers. Batson's notion of empathetic concern is not equivalent to empathy, as I am defining it, because it does not require feeling what the object of empathy should feel, but I think cuteness effects would also arise for empathy. For example, I'd wager that we would feel more vicarious sadness for a dying mouse than for a rat, and more vicarious fear for a frog crossing the highway than for a lizard. It has also been found that empathetic accuracy—which includes the ability to identify someone else's emotions, and, thus, perhaps, to mirror them—increases when the target is viewed as attractive (Ickes et al., 1990)20.
Fourth, empathy can be easily manipulated. Tsoudis (2002) found that in mock trials, a jury's recommendation for sentencing could be influenced by whether or not victims and defendants expressed emotions. When sadness was expressed, empathy went up, endearing the one who expressed it to the jury. Sad victims evoked harsher sentences, and sad defendants got lighter sentences.
…Sixth, empathy is prone to in-group biases. We have more empathy for those we see as like us, and that empathy is also more efficacious. Brown et al. (2006)21 found that when viewing pictures of faces, people show more empathetic responses, as measured by physiology and self-report, for members of the same ethnic group. Stürmer et al. (2005)22 found that empathy leads to helping only in cases when the person in need is a member of the in-group. In one of their studies, participants learned about someone who may have contracted hepatitis, and their willingness to offer support, such as talking on the phone, depended on both empathy and whether the person had the same sexual orientation as the participant. This strong in-group bias doesn't show up in every study, but even if only occasional, it is something that defenders of empathy should worry about.
Seventh, empathy is subject to proximity effects. There was an outpouring of support for the victims of Hurricane Katrina in the United States in 2005, and passionate empathy for the victims is still frequently expressed in public discourse here. The death toll was 1,836. A year later, an earthquake in Java killed 5,782 people, and there was little news coverage in comparison. I would venture to guess that few Americans remember the incident.
Eighth, empathy is subject to salience effects. Natural disasters and wars are salient, newsworthy events. They happen during temporally circumscribed periods in localized areas, and can be characterized in narrative terms (preconditions, the catastrophe, the aftermath). Other causes of mass death are less salient, because they are too constant and diffuse to be news items. This is the case with hunger and disease. To put some depressing numbers on the problem, consider the following: malaria is estimated to kill between 1.5 and 4 million people a year; tuberculosis kills 2 million; and AIDS kills 2.8 million. Hunger is the biggest killer of all: 9 million die each year for lack of food. That means that every single day, there are 24 Katrinas. 10.5 times the number of people who died in Katrina die each day from preventable diseases, and 13.5 times as many people die from malnutrition. These deaths are not salient, so they induce little empathy.
In sum, empathy has serious shortcomings.
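As a rough check of the arithmetic in the salience-effects paragraph above (a minimal sketch, not part of Prinz's paper; it assumes a 365-day year and takes the malaria toll at the midpoint of the quoted 1.5-4 million range):

```python
# Sanity-check the quoted "Katrinas per day" figures.
# Assumptions: 365-day year; malaria at the midpoint (2.75M) of the quoted 1.5-4M range;
# "preventable diseases" = malaria + tuberculosis + AIDS, as in the paragraph above.

KATRINA_DEATHS = 1836

deaths_per_year = {
    "malaria": 2.75e6,
    "tuberculosis": 2.0e6,
    "AIDS": 2.8e6,
    "hunger": 9.0e6,
}

disease_per_day = (deaths_per_year["malaria"]
                   + deaths_per_year["tuberculosis"]
                   + deaths_per_year["AIDS"]) / 365
hunger_per_day = deaths_per_year["hunger"] / 365

print(f"disease: {disease_per_day:,.0f}/day = {disease_per_day / KATRINA_DEATHS:.1f} Katrinas")
print(f"hunger:  {hunger_per_day:,.0f}/day = {hunger_per_day / KATRINA_DEATHS:.1f} Katrinas")
# ~20,700/day (about 11.3 Katrinas) from these diseases and ~24,700/day (about 13.4 Katrinas)
# from hunger, i.e. roughly 25 Katrina-equivalents per day, close to the quoted 10.5 + 13.5 = 24.
```

The quoted 10.5 figure presumably reflects a lower point in the malaria range; the order of magnitude is the same either way.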
1. Prinz, J. J. (2007). The Emotional Construction of Morals. Oxford: Oxford University Press.
2. Hoffman, M. (2000). Empathy and Moral Development: The Implications for Caring and Justice. Cambridge, UK: Cambridge University Press.
3. Eisenberg, N., Spinrad, T. L., and Sadovsky, A. (2006). Empathy-related responding in children. In M. Killen and J. G. Smetana (Eds.), Handbook of Moral Development (pp. 517-549). Mahwah, NJ: Lawrence Erlbaum Associates.
4. Eisenberg-Berg, N. (1979). The development of children's prosocial moral judgment. Developmental Psychology, 15, 128-137.
5. Blair, R. J. R. (1995). A cognitive developmental approach to morality: Investigating the psychopath. Cognition, 57, 1-29.
6. Smetana, J. and Braeges, J. (1990). The development of toddlers' moral and conventional judgments. Merrill-Palmer Quarterly, 36, 329-346.
7. Blair, R. J. R., Mitchell, D. G. V., Richell, R. A., Kelly, S., Leonard, A., Newman, C., and Scott, S. K. (2002). Turning a deaf ear to fear: Impaired recognition of vocal affect in psychopathic individuals. Journal of Abnormal Psychology, 111, 682-686.
8. Prinz, J. J. (2004). Gut Reactions: A Perceptual Theory of Emotion. New York: Oxford University Press.
9. Underwood, B., and Moore, B. (1982). Perspective-taking and altruism. Psychological Bulletin, 91, 143-173.
10. Eisenberg, N., Fabes, R. A., Miller, P. A., Fultz, J., Shell, R., Mathy, R. M., and Reno, R. R. (1989). Relation of sympathy and personal distress to prosocial behavior: A multimethod study. Journal of Personality and Social Psychology, 57, 55-66.
11. Neuberg, S. L., Cialdini, R. B., Brown, S. L., Luce, C., Sagarin, B. J., and Lewis, B. P. (1997). Does empathy lead to anything more than superficial helping? Comment on Batson et al. (1997). Journal of Personality and Social Psychology, 73, 510-516.
12. Carlson, M., Charlin, V., and Miller, N. (1988). Positive mood and helping behavior: A test of six hypotheses. Journal of Personality and Social Psychology, 55, 211-229.
13. Isen, A. M. and Levin, P. F. (1972). The effect of feeling good on helping: Cookies and kindness. Journal of Personality and Social Psychology, 21, 384-388.
14. Weyant, J. M. (1978). Effects of mood states, costs, and benefits on helping. Journal of Personality and Social Psychology, 36, 1169-1176.
15. Lerner, J., and Tiedens, L. (2006). Portrait of the angry decision maker: How appraisal tendencies shape anger's influence on cognition. Journal of Behavioral Decision Making, 19, 115-137.
16. Fehr, E., and Gächter, S. (2002). Altruistic punishment in humans. Nature, 415, 137-140.
17. Carlsmith, J. M., and Gross, A. E. (1969). Some effects of guilt on compliance. Journal of Personality and Social Psychology, 11, 232-239.
18. Batson, C. D., Klein, T. R., Highberger, L., and Shaw, L. L. (1995). Immorality from empathy-induced altruism: When compassion and justice conflict. Journal of Personality and Social Psychology, 68, 1042-1054.
19. Batson, C., Lishner, D., Cook, J., and Sawyer, S. (2005). Similarity and nurturance: Two possible sources of empathy for strangers. Basic and Applied Social Psychology, 27, 15-25.
20. Ickes, W., Stinson, L., Bissonnette, V., and Garcia, S. (1990). Naturalistic social cognition: Empathic accuracy in mixed-sex dyads. Journal of Personality and Social Psychology, 59, 730-742.
21. Brown, L., Bradley, M., and Lang, P. (2006). Affective reactions to pictures of ingroup and outgroup members. Biological Psychology, 71, 303-311.
22. Stürmer, S., Snyder, M., and Omoto, A. (2005). Prosocial emotions and helping: The moderating role of group membership. Journal of Personality and Social Psychology, 88, 532-546.
109 comments
Comments sorted by top scores.
comment by Jack · 2011-10-05T19:05:59.858Z · LW(p) · GW(p)
What does it mean to "cultivate an X based morality" and why should we do it? Why should we have an any-one-thing based morality? Obviously picking one moral emotion and only teaching and encouraging that is likely to leave important moral judgments out. I don't think even Peter Singer is recommending that. Nonetheless, empathy seems to have a central if not exclusive role in the motivation and development of lots of really important moral judgments. That empathy is not necessary for all moral judgments does not mean that it can be systematically replaced by other moral emotions in cases where it is central. Helping people is good! We should teach children to help people and laud those who do.
I'm not sure section 5 says... anything at all. All of the things said about empathy in this section are true of people. Try substituting one for the other. Which is to say, they're true for lots of other behaviors and emotions as well. Pointing out that biases affect empathy isn't helpful unless one has found a different moral emotion which inspires an extensionally similar moral judgment (one that leads to the same behaviors) and combines the motivational force of empathy without the vulnerability to bias. Anyone have candidates for that?
Edit: Prinz's suggestion is "outrage". He says we should get angry and indignant at the causes of suffering, claiming that this has more motivational power than empathy. This may be the case, but outrage tends to come with empathy (unless the outrage is directed at something causing oneself harm), so it isn't clear how to evaluate this claim. More importantly, I see no reason at all to think outrage is less subject to bias. It can certainly be subject to in-group bias, proximity effects, and salience effects. It can be easily manipulated. It also leads to people looking for an enemy where there isn't necessarily one. This leads to people ignoring causes of suffering like economic inefficiencies and institutional ineffectiveness in favor of targeting people perceived as greedy. A bit richly, he condemns the 'empathy-inspired' moral system of collectivism by referencing collectivist atrocities... as if they had nothing to do with outrage.
Replies from: false_vacuum, algekalipso

↑ comment by false_vacuum · 2011-10-05T19:43:34.435Z · LW(p) · GW(p)
So he's outraged by people basing their moral decisions on empathy? I'm... not sure how to empathise with that emotion.
Replies from: atucker

↑ comment by atucker · 2011-10-06T05:06:37.752Z · LW(p) · GW(p)
I sometimes have a feeling where I see people being preferential towards people they like and insensitive towards those that they don't, and I have a mixed feeling of "aww that's nice" and "eurgh this scales badly into nasty things". Part of this feeling is that the people acting preferentially are acting in a way that feels nice and is all warm and fuzzy, while another part is that humans have lots of icky moral bugs that make bad things happen.
Could you empathize with that?
↑ comment by algekalipso · 2012-12-20T19:14:25.715Z · LW(p) · GW(p)
It is my understanding that outrage is the result of 'selective empathy', if of empathy at all, and is VERY often completely lacking in empathy. E.g., when a group of people are outraged at a gay couple for having gay sex. OK, so where is the empathy in this case? A victimless crime evoking huge deontological moral self-righteousness and anger.
comment by NancyLebovitz · 2011-10-05T08:28:34.798Z · LW(p) · GW(p)
I think empathy with oneself is a meaningful concept, or at least there's a substantial subset of people who have trouble noticing what they want or identifying when they're being arbitrarily treated as low status.
Replies from: wedrifid, Alexei

↑ comment by wedrifid · 2011-10-05T09:11:38.274Z · LW(p) · GW(p)
Agree strongly on both points. Empathy with oneself is both meaningful and incredibly important. In fact as someone who at one time lacked empathy with himself (in terms of empathy with emotions and desires) I even assert that building empathy with oneself is one of the best ways to improve one's ability to relate to others. (By allowing you to make use of all the wisdom embedded in your emotional and 'gut' responses and desires.)
↑ comment by Alexei · 2011-10-05T18:39:15.206Z · LW(p) · GW(p)
Lacking self-empathy sounds a bit like Alexithymia.
Replies from: false_vacuum

↑ comment by false_vacuum · 2011-10-05T19:37:08.358Z · LW(p) · GW(p)
Interesting. But one could have the awareness, understanding, and ability to describe, but also an attitude of not caring, with regard to one's own emotions. Or at least some of them, sometimes.
On the other hand, I'm not sure the word 'emotions' means the same thing to everyone. I'm not even sure that what I take it to mean hasn't changed substantially.
ETA: Here I seem to be defining 'empathy' in yet another way. It's odd how my intuition about what a word means can vary situationally. It seems to me right now that I would want to claim I usually think 'empathising with X' is '(accurately) modelling the internal state of X'. But perhaps in contexts where the distinction is irrelevant I may also have been identifying the conjunct of '(accurately) modelling the internal state of X' and 'caring about the result' as 'empathising with X'. And then here I took 'empathy' to be just the 'caring about the result' part.
comment by Normal_Anomaly · 2011-10-06T12:30:51.897Z · LW(p) · GW(p)
Fifth, there are victimless transgressions, such as necrophilia, consensual sibling incest, destruction of (unpopulated) places in the environment, or desecration of a grave of someone who has no surviving relative. Empathy makes no sense in these cases.
One person's modus ponens is another's modus tollens. The fact that there is no harm involved in "victimless crimes" leads me and plenty of other people to label (at least the first two of) those "crimes" as acceptable.
Replies from: pedanterrific

↑ comment by pedanterrific · 2011-10-07T18:34:32.609Z · LW(p) · GW(p)
I'm curious what distinction you're drawing that makes the first acceptable but not the fourth.
Edit: Upon rereading, this seems more confrontational than was intended. To clarify, I agree there's nothing wrong with the second, hold reservations about the third only insofar as it's not clear to me that there really is no harm involved, and have simply never thought much about the first or fourth.
Replies from: Normal_Anomaly

↑ comment by Normal_Anomaly · 2011-10-08T02:23:48.135Z · LW(p) · GW(p)
I believe there is harm done by the third, because I value the existence of natural beauty even when I can't see it, and there are other problems with destroying unpopulated places as well. I have only a minor problem with the fourth. If it is known that the graves of people with no surviving relatives are often desecrated, this may make currently-alive people sad about their or their loved ones' graves being desecrated later. If nobody knows about the desecration, it's probably okay (excepting TDT-style concerns about people predicting future desecration from others' moral opinions).
Replies from: pedanterrific

↑ comment by pedanterrific · 2011-10-08T02:38:02.277Z · LW(p) · GW(p)
I suppose it depends on what precisely is meant by "destruction" - there's been mention downthread of nuking the moon, which... I could see the argument that it would add value without harming anything worthwhile.
And I get how desecrating graves could make people unhappy about the prospects of their own remains. I was asking because I don't quite see how necrophilia could be okay while corpse-desecration is not - one seems to require the other.
Replies from: Normal_Anomaly

↑ comment by Normal_Anomaly · 2011-10-08T15:21:45.257Z · LW(p) · GW(p)
Oh, I see now. I was compartmentalizing pretty heavily there, wasn't I? I think I know why: the hypothetical situation I was imagining for necrophilia was on a desert island, (probably borrowed from the default one for cannibalism). The hypothetical for grave-desecration was spray-painting a gravestone in a local cemetery. People are less likely to find out in the former, so I never took those considerations into account.
Yes, in situations where grave-desecrating in general is not okay, necrophilia isn't either. I still think both are mostly okay if nobody finds out, and my saying this shouldn't make anyone sad as I have no desire to do either.
comment by irrational · 2011-10-05T04:59:15.543Z · LW(p) · GW(p)
Fifth, there are victimless transgressions, such as necrophilia, consensual sibling incest, destruction of (unpopulated) places in the environment, or desecration of a grave of someone who has no surviving relative. Empathy makes no sense in these cases.
It is also unclear to me that these should be subject to any moral judgement.
Replies from: wedrifid, Nisan, PhilGoetz, Jack

↑ comment by wedrifid · 2011-10-05T06:12:45.946Z · LW(p) · GW(p)
I'm going to judge based on the destruction of the environment. Do what you want with your dead sister.
Replies from: irrational, None

↑ comment by irrational · 2011-10-05T07:40:14.322Z · LW(p) · GW(p)
I think destruction of the environment, even unpopulated, is indeed not a victimless crime, since it can have various external consequences.
Replies from: kilobug

↑ comment by kilobug · 2011-10-05T10:07:14.513Z · LW(p) · GW(p)
Indeed. Destruction of an environment, in a way that will never affect any sentient being (humans, transhumans, or aliens), either directly (because they would visit it and delight in the view) or indirectly (because it purifies the air they breathe), doesn't call for any strong moral judgment from me.
The only reason for which I would make a moral judgment in that case is because I do have a limited form of empathy towards animals, not as strong as towards humans, but that empathy towards animals makes me judge as unethical the destruction of their environment. But then, it's again empathy.
Replies from: Nornagest

↑ comment by Nornagest · 2011-10-05T17:18:50.898Z · LW(p) · GW(p)
Yeah, there's a distinction there that seems to have gotten lost. Nuking the moon seems about as good an example of environmental destruction without short-term externalities as I can think of, and indeed it doesn't trigger the same moral instincts in me as, say, nuking a national park would.
That doesn't seem to bear directly on the OP's main point, but a lot of the other supporting examples seem to show similarly sloppy reasoning.
Replies from: false_vacuum

↑ comment by false_vacuum · 2011-10-05T19:06:07.283Z · LW(p) · GW(p)
Thanks for the link; I didn't know about Project A119. Probably a good thing they didn't do it, though.
↑ comment by [deleted] · 2011-10-05T17:43:13.245Z · LW(p) · GW(p)
Yeah -- necrophilia strikes me as more a normative transgression than one whose a priori immorality is obvious or defensible; quite a bit more so consensual sibling incest.
I know of no places in "the environment" (at least on Earth) that aren't populated; I feel a fair bit of empathy for living things generally (even plants), but even if one assumes that it's all meat-automata with no moral weight, as so many LWers do, the negative externalities and the oft-unrecognized-but-real value of biodiversity to human endeavors make this seem like less of a victimless transgression.
Replies from: PhilGoetz

↑ comment by PhilGoetz · 2011-10-06T00:05:39.685Z · LW(p) · GW(p)
It is at best as victimless as destroying a great work of art that is rarely seen.
The playa of the Black Rock Desert appears to be completely lifeless. I haven't checked for microbes. (And I don't care if anybody destroys it; it is too simple to be interesting or beautiful. No great work of art, that. Ironic that that barren, boring, lifeless mud flat is taken better care of by burners than are most places on Earth.)
Replies from: None, APMason

↑ comment by [deleted] · 2011-10-06T02:13:47.084Z · LW(p) · GW(p)
It's not -- lots of encysting macroinvertebrates there, some of them probably endemic. Nothing too charismatic to the average human, I suppose, but it's not nearly as lifeless as it looks -- and their seasonal population booms are important to migratory fauna that pass through each year, such as birds. The ecology there responds to seasonal flooding, so if you've only gone during Burning Man, appearances will be deceiving.
Replies from: PhilGoetz

↑ comment by APMason · 2011-10-06T00:30:03.641Z · LW(p) · GW(p)
You don't find deserts beautiful?
Replies from: PhilGoetz, None

↑ comment by PhilGoetz · 2011-10-06T02:27:20.901Z · LW(p) · GW(p)
Some deserts. Not this one. There are no Joshua trees, no grasses, no lizards, no snakes, no rocks, no valleys, no hills, no birds, no insects. Nothing but miles of silent, flat, dry mud, a burning sun, burning alkali dust, and frequent dust storms. If it's beautiful, it's only the way a blank sheet of paper is beautiful.
Replies from: APMason

↑ comment by Nisan · 2011-10-05T07:19:22.131Z · LW(p) · GW(p)
It seems in that section the author was talking about moral judgments that actually occur, not what moral judgments should occur.
Replies from: Multiheaded

↑ comment by Multiheaded · 2011-10-05T15:13:39.736Z · LW(p) · GW(p)
He certainly should've been clearer about that.
↑ comment by PhilGoetz · 2011-10-06T00:02:41.198Z · LW(p) · GW(p)
It is inconsistent to list the destruction of "unpopulated" environment as a victimless crime, since the main reason for calling it a crime is the belief that its victims matter (it is not "unpopulated"). He seems unaware that he's promoting two opposing viewpoints at the same time.
↑ comment by Jack · 2011-10-05T20:35:50.652Z · LW(p) · GW(p)
That you, I and lots of people here share a morality that de-emphasizes or abandons judgments that stem from the purity/sanctity pillar does not mean that those moral judgments do not need to be accounted for by a theory of morality. Note that wedrifid's popular reply to your comment defends one of the few purity-based moral judgments common among the liberal/cosmopolitan demographic cluster.
The problem is that Prinz actively conflates metaethical concerns -- concerns about the adequacy of a theory of morality based centrally on empathy -- and normative concerns about whether our moral system does a good job at making the world a better place, or something. The above examples of victimless transgressions are good evidence for his metaethical thesis but irrelevant for the normative thesis.
comment by gwern · 2014-12-30T03:14:17.034Z · LW(p) · GW(p)
"Most people see the benefits of empathy as too obvious to require justification", Paul Bloom:
...then I add, “I’m against it.” This usually gets an uncomfortable laugh. This reaction surprised me at first, but I’ve come to realize that taking a position against empathy is like announcing that you hate kittens—a statement so outlandish it can only be a joke. And so I’ve learned to clarify, to explain that I am not against morality, compassion, kindness, love, being a good neighbor, doing the right thing, and making the world a better place. My claim is actually the opposite: if you want to be good and do good, empathy is a poor guide. The word “empathy” is used in many ways, but here I am adopting its most common meaning, which corresponds to what eighteenth-century philosophers such as Adam Smith called “sympathy.”
... I have argued elsewhere that certain features of empathy make it a poor guide to social policy. Empathy is biased; we are more prone to feel empathy for attractive people and for those who look like us or share our ethnic or national background. And empathy is narrow; it connects us to particular individuals, real or imagined, but is insensitive to numerical differences and statistical data. As Mother Teresa put it, “If I look at the mass I will never act. If I look at the one, I will.” Laboratory studies find that we really do care more about the one than about the mass, so long as we have personal information about the one. In light of these features, our public decisions will be fairer and more moral once we put empathy aside. Our policies are improved when we appreciate that a hundred deaths are worse than one, even if we know the name of the one, and when we acknowledge that the life of someone in a faraway country is worth as much as the life of a neighbor, even if our emotions pull us in a different direction. Without empathy, we are better able to grasp the importance of vaccinating children and responding to climate change. These acts impose costs on real people in the here and now for the sake of abstract future benefits, so tackling them may require overriding empathetic responses that favor the comfort and well-being of individuals today. We can rethink humanitarian aid and the criminal justice system, choosing to draw on a reasoned, even counter-empathetic, analysis of moral obligation and likely consequences.
...Strong inclination toward empathy comes with costs. Individuals scoring high in unmitigated communion report asymmetrical relationships, where they support others but don’t get support themselves. They also are more prone to suffer depression and anxiety. Working from a different literature on “pathological altruism,” Barbara Oakley notes in Cold-Blooded Kindness (2011), “It’s surprising how many diseases and syndromes commonly seen in women seem to be related to women’s generally stronger empathy for and focus on others.”
...In Consequences of Compassion (2009), Charles Goodman notes the distinction in Buddhist texts between “sentimental compassion,” which corresponds to empathy, and “great compassion,” which involves love for others without empathetic attachment or distress. Sentimental compassion is to be avoided, as it “exhausts the bodhisattva.” Goodman defends great compassion, which is more distanced and reserved and can be sustained indefinitely. This distinction has some support in the collaborative work of Tania Singer, a psychologist and neuroscientist, and Matthieu Ricard, a Buddhist monk, meditation expert, and former scientist. In a series of studies using fMRI brain scanning, Ricard was asked to engage in various types of compassion meditation directed toward people who are suffering. To the surprise of the investigators, these meditative states did not activate parts of the brain that are normally activated by non-meditators when they think about others’ pain. Ricard described his meditative experience as “a warm positive state associated with a strong prosocial motivation.” He was then asked to put himself in an empathetic state and was scanned while doing so. Now the appropriate circuits associated with empathetic distress were activated. “The empathic sharing,” Ricard said, “very quickly became intolerable to me and I felt emotionally exhausted, very similar to being burned out.”
One sees a similar contrast in ongoing experiments led by Singer and her colleagues in which people are either given empathy training, which focuses on the capacity to experience the suffering of others, or compassion training, in which subjects are trained to respond to suffering with feelings of warmth and care. According to Singer’s results, among test subjects who underwent empathy training, “negative affect was increased in response to both people in distress and even to people in everyday life situations. . . . these findings underline the belief that engaging in empathic resonance is a highly aversive experience and, as such, can be a risk factor for burnout.” Compassion training—which doesn’t involve empathetic arousal to the perceived distress of others—was more effective, leading to both increased positive emotions and increased altruism.
...Even I, a skeptic, would imagine there is some substantive relationship between empathy and aggression, since presumably someone with a great deal of empathy would find it unpleasant to cause pain in others. But a recent review summarizing data from all available studies of the relationship between empathy and aggression reaches a different conclusion. The authors of “The (non)relation between empathy and aggression: Surprising results from a meta-analysis” report that only 1 percent of the variation in aggression is accounted for by empathy...Baron-Cohen notes that people with Asperger syndrome and autism typically have low cognitive empathy—they struggle to understand the minds of others—and have low emotional empathy as well. (As with psychopaths, there is some controversy about whether they are incapable of empathy or choose not to deploy it.) Despite their empathy deficit, such people show no propensity for exploitation and violence. Indeed, they often have strong moral codes and are more likely to be victims of cruelty than perpetrators.
Effective altruists typically donate a percentage of their income—usually at least 10 percent, and in some cases 50 percent or more—to charities that have been demonstrated to be highly effective. Some choose careers that will enable them to earn more not so that they can have more money, but so that they can donate more. Recent Princeton graduate Matt Wage, for example, was offered a place for postgraduate studies at the University of Oxford but instead went to Wall Street, where within a year he had earned enough to donate $100,000 to organizations helping people in extreme poverty. My admittedly impressionistic observation is that effective altruists are not especially empathetic—at least, not in the sense of emotional empathy. They do have what is sometimes called “cognitive empathy” or “perspective taking” capacity—that is, the ability to see what life is like for someone else...Unlike the majority of donors to charity, they are not prone to give to local charities, nor to particular children in developing countries who will write them thank-you letters. They do not give to causes that have touched them personally—“my wife/sister/mother died of breast cancer, so I donate to breast cancer research.” They direct the resources they have where they will do the most good. The result is that they are doing much more than most people to make the world a kinder and better place.
comment by kilobug · 2011-10-05T10:03:09.397Z · LW(p) · GW(p)
Hum, there are interesting things in that article, but it seems way too one-sided to me, and it dwells upon a confusion between two theses which are very different: « empathy is not the only source of morality » (which I agree with) and « empathy is not a core part of morality » (which I disagree with).
Attempting to reduce human morality to a single factor (like empathy) is doomed to fail. And every time you look at a single factor behind some part of human morality, you'll find cases in which it fails to explain our behavior, and others in which it will lead us to acts that we won't like. That applies to empathy as much as to anything else. But showing some examples of empathy being misleading, and showing that other feelings (like anger at unfairness) can also lead to moral decisions and acts, doesn't show that empathy is not a fundamental part of human morality, nor that empathy does more wrong than good.
Take a typical bias, scope insensitivity. People give more money to save one child than to save 10 children. Where does it come from? It comes from empathy. It's empathy that makes us give money to save one child, because we put ourselves in this child's place. When offered to save 10 children, we don't identify ourselves with any of them (because we don't know which one), so we don't offer as much. What does it show? That empathy is indeed a great driving force behind ethical acts, but that we fail to apply it to more than one person at once, that we don't use it enough when faced with the fate of 10 children.
Another problem I see in the article is that it calls upon guilt as one of the possible "replacements" for empathy, but guilt to me is a consequence of empathy: I feel guilty because I imagine myself in the shoes of the one I wronged. I never feel guilty about breaking a law or a societal code of conduct (I may feel ashamed if it is known, but that's a different issue); I only feel guilty when I did, directly or not, wrong someone, because I imagine myself in his shoes.
Also, most of the examples in the second paragraph of part 2 seem very broken to me. I could give a (more or less) detailed analysis of how each of them seems broken, but this comment is already long enough, so I won't go into those details now.
Replies from: None, fiddlemath

↑ comment by [deleted] · 2011-10-05T17:07:50.826Z · LW(p) · GW(p)
Another problem I see in the article is that it calls upon guilt as one of the possible "replacements" for empathy, but guilt to me is a consequence of empathy.
Not to mention that people seem to be really good at guilt-proofing themselves when they see it coming, or transferring their guilt onto things that do more to salve conscience than actually help a situation.
As someone on the autistic spectrum, it's been my impression that neurotypicals in my society really aren't that great at empathy as a general thing -- "strong empathy" seems to be a specific trait, like strong muscles or a gift for music. Typically NTs seem to be looking for adherence to recognizable scripts as a necessary precondition for empathy, and the moment the subject deviates from that, their ability to empathize is gone -- people do inconvenient or upsetting things "to get attention" or to annoy them, animals are reduced to two-bit animatrons or even deemed without subjective experience; in many cases even autistic people are read this way, and sociopaths, if the person has a coherent idea of them, will be thought to be this way (even as they miss the sociopath in their midst, who may be better at faking signalling).
I guess this paper doesn't feel like a great takedown of empathy from my perspective, because the average person already seems to be fairly bad at it.
↑ comment by fiddlemath · 2011-10-05T16:41:50.717Z · LW(p) · GW(p)
Hum, there are interesting things in that article, but it seems way too one-sided to me,
Think of it as a single message in a longer debate, spanning centuries. That's what academic philosophers do, on their better days. ;) So, yes, this is not a "debate", this is a position piece, laying out arguments for only one side. Read that way, it's quite even-handed.
and it dwells upon a confusion between two theses which are very different: « empathy is not the only source of morality » (which I agree with) and « empathy is not a core part of morality » (which I disagree with).
Does it? On my reading, it's only arguing that "empathy is not the only source of morality," and then in section 5 "empathy should not be the only source of morality." I don't think it argues, anywhere, that empathy is not a /part/ of morality, or moral judgment.
comment by Unnamed · 2011-10-05T20:09:28.612Z · LW(p) · GW(p)
The main criticism of empathy in section 5 (the section which argues that empathy could be harmful and not merely ineffectual) is partiality. Empathy is felt towards a specific target, which could lead to a failure to help people other than that target or even actions that help the target at the expense of others. But other moral emotions face the same problem. Anger is stronger for local misdeeds than for global ones - Americans were angrier about BP's oil spill in the Gulf of Mexico than about other oil spills in other parts of the world. Someone who feels grateful about being helped by another person is liable to repay the good deed by helping that person at the expense of other people. Someone who would feel guilty about letting an elderly neighbor die in a heat wave may be motivated to check in on that neighbor to offer help, but they might not feel any guilt about people dying in Africa and do nothing about it.
So the questions are whether empathy is better or worse than other moral motivators in terms of partiality, and whether effective techniques for overcoming partiality make use of empathy. I'm not sure about the first question, but for the second question empathy seems promising. In many cases, cultivating empathy involves learning to feel empathy for new groups of people, which means partially overcoming partiality. People like Peter Singer who have emphasized empathy often talk about expanding the moral circle to feel empathy towards a larger set of beings, and extending empathy to care about more people seems easier than expanding on emotions like anger, guilt, or gratitude.
comment by byrnema · 2011-10-05T16:36:02.785Z · LW(p) · GW(p)
In sum, empathy has serious shortcomings.
But all of these examples are examples of incomplete or unskilled empathy.
Giving someone preferential treatment because they are cute isn't 'empathy'. (edited: better to say that not empathizing with something that is not cute is a failure of empathy)
Giving preferential treatment to Sheri is empathizing with Sheri and not the other people in the line -- that's again lopsided empathy. It won't happen if you also empathize with everyone in the line.
The point about empathy's ability to be manipulated would need a little more work to defend. There are certain emotions that people should feel (for example, remorse) that can be faked, and that is difficult to spot, and sometimes people don't express emotions in ways that other people recognize. A sophisticated empathy won't be based heavily on the expressed emotions of a person (for example, if a depressed person doesn't care if he is wrongly jailed, should we not care if he is?) and will factor in that it is easier for some people to feel and display the "correct button" behaviors.
Replies from: gwern

↑ comment by gwern · 2011-10-05T17:02:00.825Z · LW(p) · GW(p)
Ah, I see. So empathy is like violence or drugs - if it isn't solving your problems, you simply aren't using enough.
Replies from: byrnema

↑ comment by byrnema · 2011-10-05T17:12:40.421Z · LW(p) · GW(p)
Whether we can rely on it to solve our problems is a different question. I suppose we don't trust people to properly apply empathy, which is why we ask justice to be blind. In which case I could reinterpret your post as being about why people fail at empathy. And then yes, the arguments fit.
But would you agree that a hypothetical (but impossible) perfectly empathetic person would always be able to make a moral choice as good as any other system? This is more along the lines of the question I initially thought you were addressing with your post.
Replies from: gwern

↑ comment by gwern · 2011-10-05T17:21:31.220Z · LW(p) · GW(p)
But would you agree that a hypothetical (but impossible) perfectly empathetic person would always be able to make a moral choice as good as any other system?
No, I would not. I am suspicious of this entire line of argument, hence my comment (any time someone says 'X is flawed, so we need more of X', I begin to wonder). I suspect that if you broaden empathy to try to patch up the observed problems, all you are doing is re-inventing, in a very obtuse way, utilitarianism or some other standard system of ethics.
(On what empathetic grounds are you criticizing the examples of empathy-gone-wrong? None at all, I suspect, but comparing them to what utilitarianism says.)
Replies from: byrnema

↑ comment by byrnema · 2011-10-05T17:33:27.550Z · LW(p) · GW(p)
On what empathetic grounds are you criticizing the examples of empathy-gone-wrong? None at all, I suspect, but comparing them to what utilitarianism says.
That's ridiculous. I feel very sorry for all the people in line that had to watch Sherri glibly skip up to the top, and I'm sorry they didn't get a chance to tell their stories.
But kidding aside, there may be some truth to your statement because I might be led to a definition of empathy as appreciating the value of someone's utilons. For example, Sherri gets some utilons for skipping the line and everyone else loses some (except for those they gain by empathizing with Sherri).
So the end result is the same -- you want to maximize utilons. However -- and I think this is an important point -- you use empathy to figure out what it is that each person in the equation values, as it is different for each person. How else are you to know what utilities to assign to each person for each of the outcomes?
Replies from: gwern

↑ comment by gwern · 2011-10-05T17:56:06.754Z · LW(p) · GW(p)
That's ridiculous. I feel very sorry for all the people in line that had to watch Sherri glibly skip up to the top, and I'm sorry they didn't get a chance to tell their stories.
Sure, you say that now...
I might be led to a definition of empathy as appreciating the value of someone's utilons. For example, Sherri gets some utilons for skipping the line and everyone else loses some
What does the word 'empathy' buy you there? If you don't already appreciate the value of someone else's utilons, in what sense are you a regular utilitarian? Aren't you just anything-but-utilitarian? (Ethical egoism 'only my utilons count', deontologies 'only law-tilons count', etc.)
How else are you to know what utilities to assign to each person for each of the outcomes?
The way utilitarians have always done: revealed preference, willingness to pay, neural or behavioral correlates, self-reports, longevity, etc. Empathy seems no better than any of those to me. (I can't imagine I would be able to accurately assess utility for a masochist just by trying to employ my empathy!)
Replies from: Nornagest, byrnema

↑ comment by Nornagest · 2011-10-05T18:14:04.092Z · LW(p) · GW(p)
(I can't imagine I would be able to accurately assess utility for a masochist just by trying to employ my empathy!)
I think that might be a better strike against empathetic ethics than anything the OP presents, actually. Empathy's effectiveness as a moral guide is strictly limited by its ability to model others' hedonic responses, an ability constrained both by the breadth of hedonic variation in the environment and by the modeler's imagination. That doesn't even work too well for heterogeneous, multicultural societies like most of the First World -- you need to take a meta-ethical approach to keep it from breaking down over sexual and religious points, for example -- so I'd expect it to be completely inadequate for problems involving nonhuman or transhuman agents. Which needn't be speculative; animal rights would qualify, as would corporate ethics.
Beware of other-optimizing, essentially.
Replies from: gwern↑ comment by gwern · 2011-10-05T18:26:30.007Z · LW(p) · GW(p)
Those are good points.
Replies from: byrnema↑ comment by byrnema · 2011-10-05T19:30:24.711Z · LW(p) · GW(p)
Yes, but just to reiterate: it's a failure to empathize, not a failure of empathy.
Replies from: Ubiquity, false_vacuum↑ comment by Ubiquity · 2011-10-07T23:27:13.867Z · LW(p) · GW(p)
The semantics are important in understanding the debate. Perhaps that is obvious to the rationalists here, but it seems to me this was essentially a semantic debate (the last few comments between gwern and byrnema), and I tend to agree with byrnema. Perhaps it could be helpful to clearly define the expected denotation of "empathy"? Wikipedia states "Empathy is the capacity to recognize and, to some extent, share feelings (such as sadness or happiness) that are being experienced by another sapient or semi-sapient being", whereas dictionary dot com defines it quite a bit more broadly as "1. the intellectual identification with or vicarious experiencing of the feelings, thoughts, or attitudes of another. 2. the imaginative ascribing to an object, as a natural object or work of art, feelings or attitudes present in oneself: By means of empathy, a great painting becomes a mirror of the self."
I guess my point is I have always thought of the term in the broader sense, and I don't think anyone can have any understanding whatsoever without the broader form of "Empathy". Perhaps my own connotations are filtering in there too.
↑ comment by false_vacuum · 2011-10-05T19:45:38.652Z · LW(p) · GW(p)
Yes, exactly.
↑ comment by byrnema · 2011-10-05T19:35:50.149Z · LW(p) · GW(p)
I agree with Nornagest: empathy's effectiveness as a moral guide is strictly limited by ability to model others' [values]. (I struck out the "its" before "ability" -- empathy doesn't have inherent limits, though we do.)
What does the word 'empathy' buy you there?
For me, 'empathy' just means determining, in any way whatsoever, what people value and how much. So it would include everything you mentioned: empathy means accurately estimating what someone else values. Since preferences are indeed hidden (or exaggerated!), it's a high-level prediction/interpolation game. If you share a lot in common with someone, it helps a lot if you can 'feel' what they are feeling by imagining being them, and this is often what is meant by empathy. The feelings/emotions of empathy are also very important because they lend a common currency for value. For example, if something makes someone sad, I can relate to their being sad without relating to that particular reason. Also, I can measure how sad it makes them as a quantitative input in my equation. The problem with trying to empathize with a computer (or with an animal, see Nornagest's comment) is that it is difficult to know how to weight their expressed preferences.
If my concept of empathy is accepted, then empathy buys me the ability to model other people's preferences. You can't apply utilitarianism in which there is a term for other people's preferences without it.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-10-05T19:54:12.312Z · LW(p) · GW(p)
Well, OK.
Note that your concept of empathy includes cultivating a social circle in which people honestly and accurately report their own preferences when asked and then explicitly asking someone in that circle for their preferences. It includes reading the results of studies on the revealed preferences of different communities and assuming that someone shares the most common preferences of the community they demographically match.
More generally, it includes a number of things that completely ignore any impressions you might garner about an individual's preferences by observation of that individual.
I agree that your concept of empathy is a useful thing to have.
I also think it fails to map particularly closely to the concept "empathy" typically evokes in conversation with native English speakers.
Replies from: byrnema↑ comment by byrnema · 2011-10-05T20:06:58.889Z · LW(p) · GW(p)
Note that your concept of empathy includes cultivating a social circle in which people honestly and accurately report their own preferences when asked and then explicitly asking someone in that circle for their preferences. It includes reading the results of studies on the revealed preferences of different communities and assuming that someone shares the most common preferences of the community they demographically match.
Huh. I don't entirely see where you are getting this. I'll reread my comment, but what I meant is that empathy is accurately modeling people's preferences using any means whatsoever.
Sometimes people use the term empathy to mean (for example, with respect to a 'bleeding heart') that the empathetic person tries very sincerely to model other people's preferences and weights those preferences strongly. Also, empathy can mean that a person relies solely on predicting emotions for modeling preferences. I'm not sure how prevalent these different definitions are but regarding "your concept of empathy is a useful thing to have", thanks.
A good distinguishing question for the common concept of empathy might be to ask-the-audience whether a sociopath could have empathy. That is, consider a sociopath who is really good at modeling other people's preferences but simply doesn't weight other people's preferences in their utility function. Could this person be said to 'have empathy'?
If the answer is decidedly 'no', then it seems a common concept of empathy might really be about a feeling a person has about the importance of other people's preferences, depending on whether 'accuracy' is or isn't also required.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-10-05T20:55:32.782Z · LW(p) · GW(p)
Agreed that that's a good distinguishing question. I predict that audiences of native English speakers who have not been artificially primed otherwise will say the sociopath lacks empathy.
As for not seeing where I'm getting what you quote... I'm confused. Those are two plausible techniques for arriving at accurate models of other people's preferences; would they not count as 'empathy' in your lexicon?
Replies from: byrnema, byrnema↑ comment by byrnema · 2011-10-10T23:43:11.791Z · LW(p) · GW(p)
As for not seeing where I'm getting what you quote... I'm confused.
I'm confused too. I read your comments over again today and they made sense. I kept making the same mistake (at least 3 times) of reading you as defining rather than giving examples.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2011-10-10T23:58:42.218Z · LW(p) · GW(p)
Ah! Yes, OK, the conversation makes sense now. Thanks for saying that out loud.
↑ comment by byrnema · 2011-10-05T23:21:20.585Z · LW(p) · GW(p)
... I applied some google-fu and found that not having empathy is one of the defining characteristics of sociopaths, and then the first definition given seems pretty straightforward:
The ability to understand and share the feelings of another.
I'm happy with that definition, and it doesn't change much. To understand the feelings of another, you've got to have a good model of their preferences; sharing their feelings is then the human/connection aspect of it.
In relation to this thread, I would add that (perfect) empathy would be more than enough to be moral, since understanding another person's preferences is the first part of it. (I don't think the second part is necessary for morality, but it makes it more natural.)
comment by billswift · 2011-10-05T03:35:44.527Z · LW(p) · GW(p)
Slightly off-topic here, since your post doesn't require the conflation, but it has been annoying me lately that there are three distinct usages of "empathy" that are frequently conflated.
1) "Empathy" as the emotional interest in others, could also be called "moral sense", the kind of empathy that sociopaths are said to lack.
2) "Empathy" as the ability to identify emotionally with others, the set of instincts or "firmware" that make interpersonal communications and interactions go smoothly, the kind of empathy that autistics are said to lack.
3) "Empathy" or "imaginative identification with others", a more intellectual version, part of the definition of an intelligent being which is associated with metalaw. The ability to intellectually and purposely imagine yourself in the place of another, even a very different other, such as an extra-terrestrial alien, hence its association with metalaw.
Replies from: gwern↑ comment by gwern · 2011-10-05T03:44:56.090Z · LW(p) · GW(p)
Much of what I cut was Prinz going on about the fine details like that; your and his distinctions did not interest me very much, and your definitions are incorrect/not commonly accepted, anyway. (How does a word coined only in 1903 pick up those 3 meanings anyway?)
Replies from: false_vacuum↑ comment by false_vacuum · 2011-10-05T18:57:09.494Z · LW(p) · GW(p)
It seems to me that billswift is accurately identifying three different meanings the word 'empathy' is taken to have. I'd never heard of metalaw before, though.
comment by PhilGoetz · 2011-10-05T23:57:13.123Z · LW(p) · GW(p)
I agree with many things Prinz says, but it dismays me that, yet again, the conclusion under moral system X that it is moral to kill one innocent person to save 5 innocent people is used as evidence against moral system X:
For example, one might judge that it is bad to kill an innocent person even if his vital organs could be used to save five others who desperately need transplants. Here, arguably, we feel cumulatively more empathy for the five people in need than for the one healthy person, but our moral judgment does not track that empathetic response.
Eventually, after enough moral systems founder on this "bug", it's time to consider the possibility that it is a feature.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-10-06T00:02:01.768Z · LW(p) · GW(p)
Your remark sort of made me go click about something. If one is conflicted about a moral or ethical dilemma, then one method of getting a not-insane answer is to look at what a bunch of different systems would say. This is like how, in electronics that need an extremely low rate of failure, they sometimes use a few slightly different systems and then go with what the majority says if there's a conflict. Similarly, there are some attempts to make predictive systems by weighing what a variety of different predictive rules say. The same approach might work for the tough moral cases.
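A minimal sketch of that redundancy idea, assuming each moral system can be boiled down to a permissible/impermissible verdict on a described case; the three verdict functions and the case fields below are hypothetical stand-ins, not serious implementations of those theories.

```python
from collections import Counter

# Hypothetical stand-ins: each "moral system" maps a described case to a verdict.
def utilitarian(case):
    return "permissible" if case["lives_saved"] > case["lives_lost"] else "impermissible"

def deontologist(case):
    return "impermissible" if case["uses_person_as_means"] else "permissible"

def contractualist(case):
    return "impermissible" if case["would_be_vetoed_behind_veil"] else "permissible"

systems = [utilitarian, deontologist, contractualist]

def majority_verdict(case):
    """Poll each system and go with the majority, as with redundant hardware."""
    votes = Counter(system(case) for system in systems)
    return votes.most_common(1)[0][0]

transplant = {"lives_saved": 5, "lives_lost": 1,
              "uses_person_as_means": True, "would_be_vetoed_behind_veil": True}
print(majority_verdict(transplant))  # two of the three systems object: "impermissible"
```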
comment by gwern · 2011-10-05T14:19:39.797Z · LW(p) · GW(p)
Comments on Reddit:
-
Note: no examples are given of philosophers who hold the (bizarre) thesis that empathy, or any other single emotion, is on its own necessary or sufficient for morality/ethics. No-one that I know of, not even Smith or Hume, holds this thesis. But, y'know, it's Prinz, so it gets published. [Prinz would disagree; see my reply]
Prinz should know better than to artificially isolate a single type of motivation in this way... it's doubtful that such an isolation is even conceptually coherent, let alone that anyone ever acts from "empathy alone", so the whole idea of it being necessary or sufficient for morality is kind of confused to begin with.
-
Right. Every time I read lesswrong I'm struck by the total lack of empathy and desire to diminish its role in the world. As far as I can tell this is part of their overall program to promote diversity in all things except values.
Personally I'm sympathetic to large parts of their philosophical program. But their insistence that everyone must follow the party line or be deemed "wrong" would make a Stalinist blush.
↑ comment by JoshuaZ · 2011-10-05T14:47:58.993Z · LW(p) · GW(p)
First comment seems useful. Second seems to be a generic attack with no actual content.
Replies from: gwern↑ comment by gwern · 2011-10-05T14:54:17.318Z · LW(p) · GW(p)
I found the second helpful in a backhanded way - for me, one of the more interesting parts of the Prinz paper was section 5, on the downsides of empathy. He's right, I'm no longer so keen on empathy and more interested in diminishing its role in the world.
Replies from: JoshuaZ, nshepperd↑ comment by JoshuaZ · 2011-10-05T14:57:04.158Z · LW(p) · GW(p)
This seems like a bad idea. If the overall level of empathy is reduced, the result won't be more efficient charity, the result will be less charity. Having money go to support the cute abused puppies is orders of magnitude less useful than having it go to, say, Village Reach, but it is still probably better than the money going to a lot of status symbols like gold necklaces, fancy cars, and the like.
Replies from: gwern, None↑ comment by gwern · 2011-10-05T15:01:42.876Z · LW(p) · GW(p)
If the overall level of empathy is reduced, the result won't be more efficient charity, the result will be less charity.
-_- It's too bad that I didn't just post most of a paper arguing the contrary, and carefully copied out every single citation to make it that much easier for LWers to follow the references.
(I don't know why I bother sometimes.)
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-10-05T15:04:38.410Z · LW(p) · GW(p)
The studies in question showed that charity could be increased by means other than empathy. They don't as far as I can tell go in the direction of showing that people will give the same amount of charity when there's no empathy.
Replies from: gwern↑ comment by gwern · 2011-10-05T15:09:07.696Z · LW(p) · GW(p)
'Reduce X and you reduce Y'.
'But Y is increased by other mechanisms like Z, and sometimes quite substantially!'
'That doesn't matter "as far as I can tell".'
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-10-05T15:12:53.215Z · LW(p) · GW(p)
That's not the argument. I agree that there are other mechanisms that can influence giving rates and that there are quite a few of them, some of which seem to swamp empathy in controlled conditions. The issue is whether empathy is a mechanism which impacts giving rates. These studies don't seem to answer that effectively.
Replies from: gwern↑ comment by gwern · 2011-10-05T15:19:09.936Z · LW(p) · GW(p)
Read what you wrote:
If the overall level of empathy is reduced, the result won't be more efficient charity, the result will be less charity.
Even if I grant you that empathy matters at all for giving, because of those other mechanisms influencing the level of charity, the net effect is still indeterminate.
Sections 4 & 5 are the relevant ones here; the net effect of empathy is unclear - if it were removed, it's not clear that the removal of the related biases etc would not compensate.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2011-10-05T15:22:06.167Z · LW(p) · GW(p)
The net is indeterminate for reducing empathy and using these other techniques to trigger more giving. Actually, in that sort of context, I suspect given this literature that the total giving will likely go up. But that didn't seem to be what you were advocating. If it is what you are advocating then I misread your remark.
↑ comment by nshepperd · 2011-10-05T15:41:08.555Z · LW(p) · GW(p)
A few downsides is enough to convince you that it's bad on net?
Replies from: gwern↑ comment by gwern · 2011-10-05T15:47:03.002Z · LW(p) · GW(p)
I was unaware of any downsides before; how do you suggest I update but to be no longer so keen on empathy? Should I be even keener? (Well, that doesn't sound right...)
Replies from: kilobug↑ comment by kilobug · 2011-10-05T16:00:04.335Z · LW(p) · GW(p)
Hum, no I think he was referring more to "more interested in diminishing its role in the world". That sounds like you'll actively try to lower the amount of empathy of the world - not just stop being as eager as before to try to increase it. Maybe we (or at least I) misunderstood you on that part.
But (some of) the drawbacks of empathy listed in the article are real, and indeed should lower the energy invested in trying to increase empathy - but they are far from enough to make empathy a "globally bad" thing, or to make me actually try to decrease empathy. I still think the world would be much better off if we had more empathy. Not as much as before, but empathy is still something to promote, not to diminish.
Replies from: gwern↑ comment by gwern · 2011-10-05T16:12:16.399Z · LW(p) · GW(p)
That sounds like you'll actively try to lower the amount of empathy of the world - not just stop being as eager as before to try to increase it. Maybe we (or at least I) misunderstood you on that part.
I am more interested in lowering it - before, I had zero interest in lowering it, and now I have some interest, which is more than I had before. (Not nearly enough to make me do anything, but then, it's not like I was actively trying to increase empathy before.)
Replies from: Jack, Normal_Anomaly↑ comment by Normal_Anomaly · 2011-10-06T12:38:10.949Z · LW(p) · GW(p)
So, if you could increase world empathy by 1% by pushing one button, decrease it by 1% by pushing a second button, or do nothing, what would you do?
Replies from: gwern
comment by [deleted] · 2011-10-08T12:52:26.642Z · LW(p) · GW(p)
…For example, one might judge that charity is good, or that wife beating is bad. According to the view under consideration these judgments depend on empathetic responses: we empathize with the positive feelings experienced by the recipients of charity and with the negative feelings of those who fall prey to domestic violence.
If I were an adherent of a heretical sect of PC, I would say that I detected microaggression here.
comment by false_vacuum · 2011-10-05T19:28:49.278Z · LW(p) · GW(p)
There seems to be a general tendency here to conflate 'empathy' with 'the particular (biased, inconsistent) ways humans tend to (attempt to) practise empathy'. The latter is obviously far less capable of constituting a basis for morality than the former, on just about any reasonable construal of 'morality' (another term the ambiguous employment of which obviates the usefulness of many an argument on such topics...).
comment by nshepperd · 2011-10-05T06:52:27.612Z · LW(p) · GW(p)
…[but] consider cases where deontological considerations overrule utilitarian principles. For example, one might judge that it is bad to kill an innocent person even if his vital organs could be used to save five others who desperately need transplants. Here, arguably, we feel cumulatively more empathy for the five people in need than for the one healthy person, but our moral judgment does not track that empathetic response.
Not sure about this one, not least because I don't believe in deontology in the first place.
Second, consider the moral judgments one might issue from behind a Rawlsian veil of ignorance; you might decide it’s good to distribute resources to the needy because you might be needy. Here there is no empathy for the needy, but rather concern for the self.
Is a moral judgement that is made for the sole reason of self-interest a moral judgement?
Third, while on the topic of the self, consider cases in which you yourself are the victim of a moral transgression. You judge that you’ve been wronged, but you don’t thereby empathize with yourself, whatever that would mean.
Presumably you don't need to empathize with yourself, since you already have direct access to your own feelings.
Fourth, consider cases in which there is no salient victim. One can judge that it would be wrong to evade taxes or steal from a department store, for instance, without dwelling first on the suffering of those who would be harmed.
Actually a worthwhile point. Although empathy might make us care about victims in the first place, once that's established you just need to know that the action will result in victims to formulate a rule against it. This is because morality is about the victims, not about one's feelings about them.
Fifth, there are victimless transgressions, such as necrophilia, consensual sibling incest, destruction of (unpopulated) places in the environment, or desecration of a grave of someone who has no surviving relative. Empathy makes no sense in these cases.
Not terribly great examples, since I see no reason to care about most of these, but it is true that we're allowed to care about beautiful or interesting natural biomes for their own sake without "empathizing" with them, as we established in the previous "Not for the sake of X alone" discussions. (Although in most cases, destroying the place would have the effect of depriving possible future visitors of it, thereby creating victims.)
comment by JonatasMueller · 2011-11-26T07:12:29.485Z · LW(p) · GW(p)
I think it's pretty clear that empathy has flaws and occasionally leads to unethical behavior, but it may help somewhat "cognitively disadvantaged" people act in a less evil way. Emotions as a whole are not necessary for morality if there is very high intelligence to really understand ethics at a conceptual level. Emotions by themselves also can never sustain ethical behavior without this conceptual understanding. Although you could argue that empathy only works well together with an understanding of ethics, since empathy leads to errors (for example in the Trolley experiment or in abortion), understanding of ethics is better off without empathy. Ethical rules or laws may serve as guidelines and threats for people to act in ways that are predicted to be favorable (no need to invoke deontology, I think, as two-level utilitarianism allows for rules), but emotional people would be all the more prone to breaking them, I suppose.
The universal ethics of the universe does happen to exist. It has a positive value, which is feeling good, and a negative value, which is feeling bad. These are physical phenomena which, in their essential form, consist (in our case) in the activation of the neural areas that produce them in the brain. It applies to all sentient creatures in the universe, since although they may not have human emotions they may have good or bad feelings, by their own classification. Other values are either reducible to these or invalid. For example, survival as a value is dependent on having good feelings, and is therefore reducible to them; the proof is that in eternal hell, for example, survival acquires a highly negative value. Another example is knowledge: it is reducible to increasing our ability to remove the causes of our feeling bad and to increase our power of feeling good. Without that, knowledge by itself is as worthless as a boring class of useless information. If we had all the knowledge in the universe but lived as an isolated paraplegic in a prison, then what? It wouldn't change anything, so knowledge too is reducible to feeling good. Also, personal identity is a Darwinian delusion, so egotism should not be accepted as reasonable, although this ethics can work in an individual framework. Rules or laws may be accepted so that humans can manage ignorance and the inability to make correct ethical decisions, on the basis that these laws increase global value.
comment by gwern · 2011-11-14T10:06:23.540Z · LW(p) · GW(p)
Increasing group solidarity is not always so awesome: http://www.overcomingbias.com/2010/04/the-dark-side-of-cooperation-2.html
comment by MileyCyrus · 2011-10-07T17:31:56.968Z · LW(p) · GW(p)
For example, one might judge that it is bad to kill an innocent person even if his vital organs could be used to save five others who desperately need transplants. Here, arguably, we feel cumulatively more empathy for the five people in need than for the one healthy person, but our moral judgment does not track that empathetic response.
On the contrary, it is easier to empathize with one person than with five:
The reason human beings seem to care so little about mass suffering and death is precisely because the suffering is happening on a mass scale. The brain is simply not very good at grasping the implications of mass suffering.
comment by Mass_Driver · 2011-10-05T03:48:44.953Z · LW(p) · GW(p)
This was useful, thank you!
comment by taw · 2011-10-16T03:10:53.012Z · LW(p) · GW(p)
This is a common issue with lesswrong posts. A good way to write a post would be:
- what the post is about
- primary argument
- various details and secondary arguments
- summary
- references
Or even better, since brevity is a virtue:
- what the post is about
- primary argument
This post contains a lot of words and a lot of minor nitpicks about imperfections of empathy (versus which alternative, exactly?), but I don't see any primary points. Posts about morality are especially prone to this: since every basis of morality, including the varieties of utilitarianism, has some severe problems, saying that empathy has them too means nothing.
comment by latanius · 2011-10-06T18:34:33.551Z · LW(p) · GW(p)
Empathy has serious shortcomings... compared to what?
For example, I consider it a feature that we are more likely to empathize with children and puppies than with cockroaches, just because they are cute and more similar to us. Or what about rocks? (Who said that rocks don't have feelings? They are just... different. We bastards constantly fail to understand them.)
As there is no Universal Ethics of the Universe, the only thing we can compare our own implementation of morality against is our (probably incomplete) models of it; then we do the evaluation... again using our built-in system on those corner cases, declaring that we've found a bug... (and in this case, it seems wise to think twice before fixing them in order to build an Even More Friendly AI...)
Nevertheless, I agree that simulating the feelings of others is not all there is to morality. Consider Schelling points, which (in my view) nicely explain "fairness" without even a little bit of feeling for your enemies. Or... of course, it is a little mixed up (empathy as an implementation of Schelling points?)...
Replies from: DSimon↑ comment by DSimon · 2011-10-08T13:11:40.337Z · LW(p) · GW(p)
who said that rocks don't have feelings?
Well, not to be snippy, but the answer to that question is: neurologists.
Replies from: Clarica↑ comment by Clarica · 2011-10-08T16:49:30.579Z · LW(p) · GW(p)
Since when have neurologists studied rocks? The whimsical suggestion that rocks might have feelings is somewhat akin to the less whimsical suggestion that there are lots of things that may have 'feelings' that we do not easily or usually detect, or can not detect without special equipment.
And some of these feelings (like bio-communication in plants), while measurable, we usually don't care that much about, and empathy for the pain of plants (and animals) may interfere with empathy for the pain of people, if you take compassion fatigue into consideration.
Replies from: DSimon↑ comment by DSimon · 2011-10-09T16:46:18.805Z · LW(p) · GW(p)
And some of these feelings (like bio-communication in plants), while measurable, we usually don't care that much about[...]
I'd argue that bio-communication in plants is probably not any actual kind of feeling or qualia. This is what I meant when I referred to neurologists; currently science has a fair (though vague) idea as to what sort of structure enables human feelings and experiences. When we don't detect this structure or anything analogous to it, we can be pretty confident that there's no consciousness/qualia/nameless-redness/etc. going on. Not extremely confident, as we don't understand consciousness well enough yet to be able to eliminate possible alternate implementations which don't resemble the known ones, but still fairly confident.
Replies from: Clarica↑ comment by Clarica · 2011-10-10T04:22:39.146Z · LW(p) · GW(p)
I think it is an important point to distinguish between feelings (aka response) and consciousness. I am not sure how to distinguish these two things. And elevation of 'consciousness' does not dismiss 'feelings'.
Replies from: DSimon↑ comment by DSimon · 2011-10-10T05:28:57.187Z · LW(p) · GW(p)
I'm not sure what you're getting at here by binding "feelings" and "response"; I think our terminology is getting confused.
I'll clarify my earlier comment by saying that when it comes to figuring out if something is an entity which I should behave morally towards, I'm only really interested in conscious feelings. Response to stimuli alone, without conscious experience, shouldn't have any moral weight. And inversely, if something can consciously experience pain but is unable to respond to it, it is immoral to hurt it.
Replies from: Clarica↑ comment by Clarica · 2011-10-10T22:46:59.472Z · LW(p) · GW(p)
Ah. I see consciousness as the ability to interrupt 'instinctive' response with a measured or planned response. And feelings as the middle stage between action and reaction, conscious or no.
I do not privilege conscious experience, just because I absolutely enjoy it. It sounds like you do.
comment by CG_Morton · 2011-10-05T15:02:25.264Z · LW(p) · GW(p)
What kind of 'morality' are we talking about here? If we're talking about actual systems of morality, deontological/utilitarian/etc, then empathy is almost certainly not required to calculate morally correct actions. But this seems to be talking about intuitive morality. It's asking: is empathy, as a cognitive faculty, necessary in order to develop an internal moral system (that is like mine)?
I'm not sure why this is an important question. If people are acting morally, do we care if it's motivated by empathy? Or put it this way: Is it possible for a psychopath to act morally? I'd say yes, of course, no matter what you mean by morality.
comment by [deleted] · 2015-09-17T12:33:30.516Z · LW(p) · GW(p)
Is it just me or is Blair's logic incomplete?
Lack of empathy is a diagnostic criterion for psychopathy (Hare, 1991), and Blair shows that psychopaths also suffer from a profound deficit in moral competence. In particular, they do not draw a distinction between moral rules (e.g., don’t hit people) and conventional rules (e.g., rules about what clothing to wear in school). Blair concludes that psychopaths’ failure to draw this distinction indicates that they do not comprehend the essence of moral rules. When they say that something is “morally wrong,” they don’t really understand what these words mean. Blair speculates that this failure is a direct result of the empathy deficit.
Some gems for behavioural mechanism designers:
Studies using economic games have shown that, when angry, people are even willing to pay significant costs to punish those who fail to cooperate (Fehr and Gächter, 2002)16.
For example, Weyant (1978)14 found that people who are made to feel good by being given an easy test to solve are almost twice as likely, when compared to neutral controls, to volunteer for a charity that requires going door to door collecting donations.
comment by gwern · 2012-10-31T15:42:13.296Z · LW(p) · GW(p)
"fMRI reveals reciprocal inhibition between social and physical cognitive domains" (media):
Two lines of evidence indicate that there exists a reciprocal inhibitory relationship between opposed brain networks. First, most attention-demanding cognitive tasks activate a stereotypical set of brain areas, known as the task-positive network and simultaneously deactivate a different set of brain regions, commonly referred to as the task negative or default mode network. Second, functional connectivity analyses show that these same opposed networks are anti-correlated in the resting state. We hypothesize that these reciprocally inhibitory effects reflect two incompatible cognitive modes, each of which is directed towards understanding the external world. Thus, engaging one mode activates one set of regions and suppresses activity in the other. We test this hypothesis by identifying two types of problem-solving task which, on the basis of prior work, have been consistently associated with the task positive and task negative regions: tasks requiring social cognition, i.e., reasoning about the mental states of other persons, and tasks requiring physical cognition, i.e., reasoning about the causal/mechanical properties of inanimate objects. Social and mechanical reasoning tasks were presented to neurologically normal participants during fMRI. Each task type was presented using both text and video clips. Regardless of presentation modality, we observed clear evidence of reciprocal suppression: social tasks deactivated regions associated with mechanical reasoning and mechanical tasks deactivated regions associated with social reasoning. These findings are not explained by self-referential processes, task engagement, mental simulation, mental time travel or external vs. internal attention, all factors previously hypothesized to explain default mode network activity. Analyses of resting state data revealed a close match between the regions our tasks identified as reciprocally inhibitory and regions of maximal anti-correlation in the resting state. These results indicate the reciprocal inhibition is not attributable to constraints inherent in the tasks, but is neural in origin. Hence, there is a physiological constraint on our ability to simultaneously engage two distinct cognitive modes. Further work is needed to more precisely characterize these opposing cognitive domains.
comment by shokwave · 2011-10-05T11:56:49.254Z · LW(p) · GW(p)
That is not to say that morality shouldn’t centrally involve emotions.
Rephrase this as the positive claim
Morality should centrally involve emotions
and it feels a whole lot different. What evidence for this claim can we think of?
Replies from: thomblake↑ comment by thomblake · 2011-10-05T14:46:44.371Z · LW(p) · GW(p)
That's not just a re-phrasing. That something is not to be avoided is not the same as that it is to be pursued - thus, "not should" is different from "should not". Also, "that is not to say that..." is not a direct endorsement of what follows (EDIT: or its negation).
Replies from: false_vacuum↑ comment by false_vacuum · 2011-10-05T19:13:18.778Z · LW(p) · GW(p)
Yes, but "not shouldn't" is the same thing as "should", unless you're an intuitionist. (I'm not not an intuitionist...)
comment by [deleted] · 2014-06-13T06:03:33.614Z · LW(p) · GW(p)
In X-Men: Days of Future Past, Charles Xavier said: 'You are not afraid of others’ pain, it's yours that you fear. Face the pain, accept it, see it, because it would make you stronger. Our greatest gift is to bear all the pain and suffering of others and not break, and still have hope.'
Replies from: gwern↑ comment by gwern · 2014-06-13T14:20:38.479Z · LW(p) · GW(p)
I think that's bunk. Instead of quoting feel-good applause lights expressing conventional sentimental made-up fiction from people who have never existed, try responding to the arguments and presenting facts.
Replies from: None↑ comment by [deleted] · 2014-06-13T14:37:42.727Z · LW(p) · GW(p)
I like it - it helps me stop thinking I can mentalise a given random person's mind. I don't know what about it you think is bunk, but that's okay. I'll respond if it will calm you down, because you sound agitated.
It's just a quote from a movie to encourage people to consider an alternative hypothesis. Movies are made by people who did exist, actually.
Does it have less significance because it wasn't said as some "feel-good applause light expressing conventional sentimental..." by an academic or something?
I wasn't engaging in argument at all, anyway, but from your ad hominem I don't think you really want a reasoned one this time.
Replies from: gwern, Jiro↑ comment by gwern · 2014-06-13T16:22:19.659Z · LW(p) · GW(p)
It's just a quote from a movie to encourage people to consider an alternative hypothesis. Movies are made by people who did exist, actually.
Yes, it was written by a Hollywood screenwriter who has led a life of no particular note. This is not helping your credibility.
Does it have less significance because it wasn't said at some ""feel-good applause light expressing conventional sentimental "" by an academic or something?
Absolutely. As terrible as academics may be, they at least occasionally are in touch with data.
I wasn't engaging in argument at all, anyway
So we're done here. Thanks for nothing.
comment by [deleted] · 2014-06-13T11:23:16.693Z · LW(p) · GW(p)
"A new study published in NeuroImage found that separate neural pathways are used alternately for empathetic and analytic problem solving. The study compares it to a see-saw. When you're busy empathizing, the neural network for analysis is repressed, and this switches according to the task at hand."