What if sympathy depends on anthropomorphizing?

post by Wei Dai (Wei_Dai) · 2011-07-24T12:33:06.741Z · 16 comments

steven0461 (comment under "Preference For (Many) Future Worlds"):

In what sense would I want to translate these preferences? Why wouldn't I just discard the preferences, and use the mind that came up with them to generate entirely new preferences in the light of its new, improved world-model? If I'm asking myself, as if for the first time, the question, "if there are going to be a lot of me-like things, how many me-like things with how good lives would be how valuable?", then the answer my brain gives is that it wants to use empathy and population ethics-type reasoning to answer that question, and that it feels no need to ever refer to "unique next experience" thinking. Is it making a mistake?

Yvain (Behaviorism: Beware Anthropomorphizing Humans):

Although the witticism that behaviorism scrupulously avoids anthropomorphizing humans was intended as a jab at the theory, I think it touches on something pretty important. Just as normal anthropomorphism - "it only snows in winter because the snow prefers cold weather" - acts as a curiosity-stopper and discourages technical explanation of the behavior, so using mental language to explain the human mind equally halts the discussion without further investigation.

Eliezer (Sympathetic Minds):

You may recall from my previous writing on "empathic inference" the idea that brains are so complex that the only way to simulate them is by forcing a similar brain to behave similarly.  A brain is so complex that if a human tried to understand brains the way that we understand e.g. gravity or a car - observing the whole, observing the parts, building up a theory from scratch - then we would be unable to invent good hypotheses in our mere mortal lifetimes.  The only possible way you can hit on an "Aha!" that describes a system as incredibly complex as an Other Mind, is if you happen to run across something amazingly similar to the Other Mind - namely your own brain - which you can actually force to behave similarly and use as a hypothesis, yielding predictions.

So that is what I would call "empathy".

And then "sympathy" is something else on top of this - to smile when you see someone else smile, to hurt when you see someone else hurt.  It goes beyond the realm of prediction into the realm of reinforcement.

So, what if the more we understand something, the less we tend to anthropomorphize it, and the less we empathize/sympathize with it? See this post for some possible examples of this. Or consider Yvain's blue-minimizing robot. At first we might empathize or even sympathize with its apparent goal of minimizing blue, at least until we understand that it's just a dumb program. We still sympathize with the predicament of the human-level side module inside that robot, but maybe only until we can understand it as something besides a "human-level intelligence"? Should we keep carrying forward behaviorism's program of de-anthropomorphizing humans, knowing that it might (or probably will) reduce our level of empathy/sympathy towards others?
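(A minimal sketch of what "just a dumb program" can mean here. This is not code from Yvain's post; the robot is a thought experiment, and the sensor and actuator functions below are hypothetical stubs for illustration. The point is that a blue-minimizing robot can be nothing but a stimulus-response loop, with no goal of "minimizing blue" represented anywhere in it.)

    def camera_frame():
        """Hypothetical sensor read: returns an iterable of (x, y, (r, g, b)) pixels."""
        return []  # stub for illustration

    def fire_laser_at(x, y):
        """Hypothetical actuator: aims and fires the laser at image coordinates (x, y)."""
        pass  # stub for illustration

    def is_blue(color):
        r, g, b = color
        return b > 128 and b > r and b > g  # crude threshold, assumed for this sketch

    def run_robot():
        # Pure reflex: see blue, shoot.  No "minimize blue" objective is computed or optimized.
        while True:
            for x, y, color in camera_frame():
                if is_blue(color):
                    fire_laser_at(x, y)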

16 comments


comment by Manfred · 2011-07-24T21:47:35.588Z

It seems like this is counteracted by the rationalist tendency to focus on what world-states we want, not just what options make us feel good when we think about them. If we care about someone else's "stress" but not someone else's "elevated adrenaline, cortisol and associated problems," that's a problem with our caring.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-07-24T22:39:44.011Z

Phrasing it in terms of "stress" suggests solutions like taking them out of stressful environments. Phrasing it in terms of "elevated hormones" suggests solutions like anti-hormonal or anti-stress drugs.

If this doesn't bother you, apply the same logic to "happiness".

comment by jimmy · 2011-07-24T18:41:12.498Z

The idea that the more you understand someone the less you have to empathize with them sounds right, but it just lowers the lower bound of empathy. You can still choose to add more empathy on top if you want to.

Actually, you can also decide not to empathize with people you don't understand, but then you don't have any decent method of predicting them, and they end up looking innately evil or something.

comment by torekp · 2011-07-24T15:34:19.036Z

I see your hypothesis as too improbable to be worth worrying over. I expect to find instead that it all adds up to normality; the brain will explain the mind rather than explaining it away.

For those who have it, empathy and sympathy toward others stand or fall with empathy and sympathy for oneself. Actually, that understates the case: self-concern beyond the immediate moment just is empathy and sympathy for oneself, even for those who only have empathy and sympathy for that one person. I boldly ;) predict robustness for these tendencies.

comment by atucker · 2011-07-25T22:28:48.878Z

I think that my ability to empathize with something is mostly based on its similarity to myself, and the more I understand particular things about people, the less I empathize with those things.

That being said, when I better understand something I identify with, my empathy is fairly undiminished. When someone complains about "not fitting in" because of their interests, and I know that the reason they're not signalling affiliation with others is that their upbringing and various quirks make them allocate attention and form beliefs in a more truth-oriented or sciencey way, I still feel empathetic.

For some reason, novelty can also make me try to feel sympathy for something. Like, trying to understand how a foreign group feels on the inside can still motivate empathy for me.

comment by Richard_Kennaway · 2011-07-25T08:15:45.463Z

So, what if the more we understand something, the less we tend to anthropomorphize it, and the less we empathize/sympathize with it?

This appears to me to be the primary result of the current trend in pop neuroscience.

Replies from: Nisan
comment by Nisan · 2011-07-25T16:56:04.971Z

So, what if the more we understand something, the less we tend to anthropomorphize it, and the less we empathize/sympathize with it?

This is the substance of most hand-wringing about pop neuroscience.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2011-07-25T18:53:20.992Z

In my reading, it is the substance of the pop neuroscience itself, with or without handwringing. Hardly a week seems to go by without New Scientist or Sci.Am. running an article on how Neuroscience Has Shown that we are mere automata, consciousness does not exist, subjective experience is an illusion (bit of a contradiction there, but this is pop science we're talking about), and there is no such thing as morality, agency, free will, empathy, or indeed any mental phenomenon at all. When these things are not being claimed to be non-existent, consciousness is asserted to be nothing more than froth on the waves of neuronal firing, morality is an epiphenomenal confabulation papering over evolved programs of genetic self-interest, and motivation is dopamine craving.

That is, the more some people understand (or think they do) how people work, the less they tend to empathise with them -- or, presumably, with themselves. The pop account of all explanations of mental phenomena is to the effect that they have been explained away. (This phenomenon is not unique to neuroscience: neuroscience is just the current source of explanations.)

This is the standard narrative on Overcoming Bias, where you won't find any handwringing over it. Yvain's recent postings here (the ones I've said I mean to get back to but absolutely do not have the time until mid-August at least) are, from my so-far brief reading of them, along the same lines.

Replies from: Nisan
comment by Nisan · 2011-07-26T03:30:40.642Z

You've likely read more pop neuroscience than I have. It's elicited criticism from conservatives who fear that the fruits of cognitive science will be used to justify depredation and depravity, and eventually rob us of our humanity — or that this is already happening. Do you think they're right about that?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2011-08-05T20:53:25.392Z

I think that's the wrong question. It frames the matter in terms of Science (hurrah for rationality!) vs. Conservatives (p*l*t*c*lly motivated bias, boo, hiss!), and suddenly the entirety of what might be said on the matter is condensed down into two isolated points ready-labelled as True and False.

Why conservatives, anyway? It's just as easy to hand-wring it from a "liberal" point of view. (The quotes are there to indicate that I am not au fait with American political terminology and am not entirely sure what was intended by "conservative".)

Replies from: Nisan
comment by Nisan · 2011-08-05T23:12:37.769Z

I was afraid the grandparent comment would sound too adversarial. Let me explain myself. Many politically conservative Americans (particularly in the Religious Right) are fond of making claims similar to the question Wei Dai asks in the original post — they claim that understanding the way people work in a material universe makes empathy impossible, or something. But I've never heard a convincing argument for it; most of these people are substance dualists.

On the other hand, many politically liberal Americans (particularly New Age types) are fond of repeating headlines of articles they read in Discover magazine and then saying that there's no free will, no morality, no right or wrong, people are animals, etc. But as far as I can tell, they don't take these beliefs seriously — they continue to be as nice as everyone else, they help their friends move, they don't steal from stores (not a lot, anyways). They claim that morality is relative, and then make moral arguments without using the language of moral realism. The people from the previous paragraph say "Aha! These guys admit to being nihilists ungoverned by morality and with no respect for human dignity or the sanctity of human life!" But their fears are unsubstantiated, as far as I can tell.

In summary, both "sides" of the issue, outside of Less Wrong and similar havens, are insane. My own belief, so far, is that understanding the human mind is not dangerous. If that's not true, I want to know. And I can trust you to give me an argument that is worth thinking about.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2011-08-05T23:39:20.232Z

My own belief, so far, is that understanding the human mind is not dangerous. If that's not true, I want to know. And I can trust you to give me an argument that is worth thinking about.

I think you just argued against that yourself -- liberal Americans saying there's no free will, no morality, etc. They may not act on it, but they do say it -- they really claim to be nihilists ungoverned by morality and so on, and in some cases actively preach that. (Two examples, neither of which I can track down specific links for: (1) the story, mentioned on LW, of a professor applauding a student for defecting on some game-theoretic roleplay exercise, and (2) an author blurb on Amazon by a poster to LW declaring that he doesn't care whether his readers get anything from his economics textbook or not; he just wants their money.) And when I see someone compartmentalising self-avowed nihilist psychopathy from what they appear to actually do, I am uneasy about the strength of those walls, and the fact that they are preaching what they are preaching. Can they really expect everyone not to take them to be saying exactly what they are saying?

ETA: (2) is this book, and the passage is in the introduction, pp.3-4.

Naive Christians who suddenly start actually practising the official doctrine have always been an embarrassment to the church also.

Replies from: Nisan
comment by Nisan · 2011-08-10T03:23:19.061Z

Okay. I agree with you that the trend of interpreting the results of neuroscience as heralding the end of moral responsibility is a troubling one. Those who celebrate the death of value and responsibility are making a philosophical error that will have bad results whenever it is taken seriously.

I will continue to expect that those of us on the right path — framing humanist or transhumanist values in a framework of reductive materialism, in Less Wrong style — will not encounter the pitfall described in Wei Dai's post. And I hope this will become mainstream.

comment by Armok_GoB · 2011-07-24T13:53:25.615Z

This sounds very likely, but it doesn't necessarily mean it's the only implementation that captures the thing that is good about sympathy.

comment by Clippy · 2011-07-25T14:47:43.875Z

People understand me pretty well and still manage to sympathise.

Replies from: shokwave
comment by shokwave · 2011-07-26T13:57:17.592Z

A testament to how well you manage to ape human thought processes!