An attempt to 'explain away' virtue ethics

post by lukeprog · 2011-09-09T08:49:58.256Z · LW · GW · Legacy · 24 comments

Recently I summarized Joshua Greene's attempt to 'explain away' deontological ethics by revealing the cognitive algorithms that generate deontological judgments and showing that the causes of our deontological judgments are inconsistent with normative principles we would endorse.

Mark Alfano has recently done the same thing with virtue ethics (which generally requires a fairly robust theory of character trait possession) in his March 2011 article on the topic: 

I discuss the attribution errors, which are peculiar to our folk intuitions about traits. Next, I turn to the input heuristics and biases, which — though they apply more broadly than just to reasoning about traits — entail further errors in our judgments about trait-possession. After that, I discuss the processing heuristics and biases, which again apply more broadly than the attribution errors but are nevertheless relevant to intuitions about traits... I explain what the biases are, cite the relevant authorities, and draw inferences from them in order to show their relevance to the dialectic about virtue ethics. At the end of the article, I evaluate knowledge-claims about virtues in light of these attribution biases, input heuristics and biases, and processing heuristics and biases. Every widely accepted theory of knowledge must reject such knowledge-claims when they are based merely on folk intuitions.

An overview of the 'situationist' attack on character trait possession can be found in Doris' book Lack of Character.

24 comments


comment by Vladimir_M · 2011-09-10T00:52:48.512Z · LW(p) · GW(p)

Alfano says:

At the end of the article, I evaluate knowledge-claims about virtues in light of these attribution biases, input heuristics and biases, and processing heuristics and biases. Every widely accepted theory of knowledge must reject such knowledge-claims when they are based merely on folk intuitions.

This sounds absurd on its face. If Alfano finds out that someone has a history of cheating and stealing, will he avoid having any business with this person, expecting similar behavior in the future, or will he "reject such knowledge-claims... based merely on folk intuitions"?

Are his claims really so silly, or am I missing something?

Replies from: lessdazed
comment by lessdazed · 2011-09-10T01:38:36.690Z · LW(p) · GW(p)

If a person has a history of cheating in business, it might be that the person habitually and easily lies on the phone, whenever he or she can't see who is on the other end. The person might be solidly in the middle of the bell curve for everything but a predilection for dehumanization. (Scholarship FTW.)

Alternatively, the person might be in an unusual situation, such as being blind and isolated and relying on a reader that speaks received emails aloud in a Stephen Hawking voice, one in which anyone would experience dehumanization sufficient to make them cheat. (I'm not claiming this is the case, just that some similarly plausible set-ups would drive behavior, much as the time since judges last ate affects their sentencing.)

So virtue ethics breaks down either way: either people's uniqueness lies in their responses to biases, or people are overwhelmingly, chaotically directed by features of their environments, or both.

Either way, cheaters and thieves are likely to cheat or steal again.

Replies from: sam0345
comment by sam0345 · 2011-09-10T09:18:40.020Z · LW(p) · GW(p)

If I can look someone in the face, I can usually detect lying. Voice only, I can often detect lying. Text only, I can sometimes detect lying.

Thus if a person is honest in proportion to the bandwidth, this requires no more psychological explanation than the fact that burglars are apt to burgle at night.

Replies from: gwern
comment by gwern · 2011-09-10T21:25:20.640Z · LW(p) · GW(p)

If I can look someone in the face, I can usually detect lying. Voice only, I can often detect lying. Text only, I can sometimes detect lying.

Is that the same way you can divine people's true natures?

  • the Wizards Project tested 20,000 people to come up with 50 who panned out
  • an aggregation of techniques offered no better than 70% accuracy
  • people with no instructions did little better than chance in distinguishing lies and truth

But I suppose these results (and the failings of mechanical lie detectors) are just unscientific research, which pale next to the burning truth of your subjective conviction that you "can usually detect lying".

Replies from: lessdazed, sam0345
comment by lessdazed · 2011-09-10T22:20:52.366Z · LW(p) · GW(p)

What was the self-assuredness of the 20,000? What was the self-assuredness of the 50?

What was the ability of the top 100, or 1,000, as against the top 50?

Replies from: gwern
comment by gwern · 2011-09-10T22:42:24.875Z · LW(p) · GW(p)

Does any of that really matter? This is the same person who thinks a passel of cognitive biases doesn't apply to him and that the whole field is nonsense trumped by unexamined common sense. (Talk about 'just give up already'.)

Replies from: lessdazed
comment by lessdazed · 2011-09-10T22:44:16.340Z · LW(p) · GW(p)

If the top 200 lie-detectors were among the 400 most confident people at the outset, I would think that relevant.

Replies from: gwern
comment by gwern · 2011-09-10T22:52:41.511Z · LW(p) · GW(p)

And how likely is that, really?

This is the sort of desperate dialectics, verging on logical rudeness, that I find really annoying: trying to rescue a baloney claim by appeal to any bare possibility. If you seriously think that, great - go read the papers and tell me, and I will be duly surprised if the human lie-detectors are the best calibrated people in that group of 20,000 and hence that the factoid might apply to the person we are discussing.

Replies from: lessdazed
comment by lessdazed · 2011-09-10T23:15:27.217Z · LW(p) · GW(p)

Seems like homework for the person making the claim; I'm just pointing out that it exists.

I will be duly surprised if the human lie-detectors are the best calibrated people

Nit-pick: they could be the worst calibrated and what I said would still hold, provided the others estimated themselves to be suitably bad at it.
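
A toy simulation of the kind of check this would be, with made-up distributions for ability and self-assessed confidence (nothing below comes from the actual Wizards Project data; the 200/400 cutoffs just echo the numbers above):

    # Made-up model: are the top lie-detectors in a large pool also the ones
    # who were most confident at the outset? All parameters are assumptions.
    import random

    random.seed(0)
    N = 20_000

    people = []
    for _ in range(N):
        ability = random.gauss(0.0, 1.0)
        # Self-assessed confidence tracks ability only noisily (assumption).
        confidence = 0.5 * ability + random.gauss(0.0, 1.0)
        people.append((ability, confidence))

    top_200_detectors = sorted(people, key=lambda p: p[0], reverse=True)[:200]
    top_400_confident = set(sorted(people, key=lambda p: p[1], reverse=True)[:400])

    overlap = sum(1 for p in top_200_detectors if p in top_400_confident)
    # With these assumed parameters the overlap is typically only a few dozen.
    print(f"{overlap} of the top 200 detectors were among the 400 most confident")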

comment by sam0345 · 2011-09-11T21:29:43.551Z · LW(p) · GW(p)

That academics who do not want to succeed in doing something tend to be grossly unsuccessful at it is weak evidence that it cannot be done.

Some business places have a lot of small high value stuff, easily stolen, and a lot of employees with unmonitored access to that stuff.

Somehow they succeed in selecting (as close to 100% as makes no difference) employees who do not steal.

The evidence that people cannot detect lying resembles the evidence that the scientific method is undefined and impossible.

The existence and practice of certain business places shows that some people are very good at predicting other people's behavior, even when those people would prefer that they fail to predict that behavior.

Replies from: gwern
comment by gwern · 2011-09-12T00:53:52.873Z · LW(p) · GW(p)

That academics who do not want to succeed in doing something tend to be grossly unsuccessful at it is weak evidence that it cannot be done.

These academics would be richly rewarded, in and out of academia, for finding human lie detectors and even more so for finding techniques to train people into such things. This is true for all the obvious reasons, and for the more subtle reason that saying '99.75% of people suck and the ones who don't think this are self-deluded' is a negative result and academia punishes negative results.

(Also, bizarre ad hominem with no real world backing. How on earth are you getting upvotes?)

Some business places have a lot of small high value stuff, easily stolen, and a lot of employees with unmonitored access to that stuff. Somehow they succeed in selecting (as close to 100% as makes no difference) employees who do not steal.

'Shrinkage' is and remains a problem in retail; the solutions to this have nothing to do with human lie detectors. The solutions involve filtering heavily for people who have demonstrated that they haven't stolen in the past, summary termination upon theft, technological counter-measures, and elaborate social sanctions. If human lie detectors existed in such quantities or humans were so analyzable, why do the diamond dealers of NYC resort to such desperate means as dealing as much as possible with their co-ethnics, who have decades of reputation and social connections standing hostage for their business dealings?

(Non sequitur; how on earth is this getting upvoted?)

The evidence that people cannot detect lying resembles the evidence that the scientific method is undefined and impossible.

No evidence cited, and what is this juvenile relativism doing here?

The existence and practice of certain business places shows that some people are very good at predicting other people's behavior, even when those people would prefer that they fail to predict that behavior.

I like how this looks like an argument, yet completely fails to include any information that matters at all. 'existence and practice', 'certain business places', 'some people' - all of these are empty of semantic content.

And even assuming you filled in these statements with something meaningful, so what? The point of the OP was not that predictions cannot be made about humans; the point is that the predictions are not made by a hypothetical 'character'. Predictions made from situation are quite powerful, and I would expect that many businesses exploit this quite a bit in all sorts of ways, like the placement of goods in grocery stores.

(Non sequitur again; good grief.)

Replies from: lessdazed, sam0345
comment by lessdazed · 2011-09-12T03:20:09.162Z · LW(p) · GW(p)

How on earth are you getting upvotes?...how on earth is this getting upvoted?

Better not to go there.

comment by sam0345 · 2011-09-12T07:08:46.871Z · LW(p) · GW(p)

These academics would be richly rewarded, in and out of academia, for finding human lie detectors

When a businessman wants to detect liars, he is not going to turn to academia.

The strange inability of academia to detect a propensity for bad behavior, or to acknowledge that anyone else can detect such propensities, is based on its horror of "discrimination".

Recall that you could tell the shoe bomber was a terrorist at forty paces, and you could tell on sight that Umar Farouk Abdulmutallab was some kind of criminal up to no good, yet the TSA insists on groping the genitals of six-year-old girls.

Although academics can supposedly scientifically prove it is impossible to detect propensities to behave badly, they are able to do a remarkably good job at detecting the slightest propensity to engage in politically incorrect thoughts.

'Shrinkage' is and remains a problem in retail;

Retail has low-value stuff and low-wage employees. With high-value stuff and high-wage employees, you can hire more carefully.

Retail has shrinkage because retailers don't care that much about it. When they do care about shrinkage, they can and routinely do solve the problem, notwithstanding academics piously saying it cannot be done.

The point is that the predictions are not made by a hypothetical 'character'.

Yet, oddly, businesses bet on character all the time. The claim that you cannot tell is political correctness of the sort that all normal people ridicule, much as they ridicule the TSA.

comment by Kaj_Sotala · 2011-09-10T21:37:09.781Z · LW(p) · GW(p)

"According to most versions of virtue ethics, an agent’s primary ethical goal is to cultivate the virtues. The fully virtuous person possesses all the virtues, and so is disposed to do the appropriate thing in all circumstances. [...]

Yet skeptics such as Doris (1998, 2002) and Harman (1999, 2000, 2001, 2003, 2006) argue that situational influences swamp dispositional ones, rendering them predictively and explanatorily impotent. And in both science and philosophy, it is but a single step from such impotence to the dustbin.

We can precisify the skeptics’ argument in the following way. If someone possesses a character trait like a virtue, she is disposed to behave in trait-relevant ways in both actual and counterfactual circumstances. However, exceedingly few people—even the seemingly virtuous—would behave in virtue-relevant ways in both actual and counterfactual circumstances. Seemingly (and normatively) irrelevant situational features like ambient smells, ambient sounds, and degree of hurry overpower whatever feeble dispositions inhere in people’s moral psychology, making them passive pawns of forces they themselves typically do not recognize or consider.

Are individual dispositions really so frail? A firestorm followed the publication of Doris’s and Harman’s arguments that virtue ethics is empirically inadequate. If they are right, virtue ethics is in dire straits: it cannot reasonably recommend that people acquire the virtues if they are not possible properties of “creatures like us”

This seems obviously false to me. It may well be true that, in general, situational influences swamp dispositional ones. But that doesn't mean that it's pointless to try to cultivate virtue and teach yourself to behave virtuously. You might not always succeed, but as long as the effect of dispositional influences isn't entirely negligible, you will succeed more often than if you didn't cultivate virtue.

You could use the same reasoning to argue that consequentialism is in dire straits: Wanting to act in a consequentialist manner is a human disposition, but situational influences swamp dispositional ones. Thus, consequentialism cannot reasonably recommend that people act in a consequentialist manner, because that is not a possible property of "creatures like us".
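
A minimal sketch of that point, with made-up numbers: even if each individual act is dominated by situational noise, a small but nonzero dispositional term still raises the long-run rate of virtuous behavior (the 0.3 weight below is only an illustrative assumption, echoing the correlation ceiling cited elsewhere in this thread).

    # Toy model (illustrative numbers only): each act is driven mostly by
    # situational noise, plus a small dispositional contribution.
    import random

    random.seed(0)

    def acts_virtuously(disposition, weight=0.3, threshold=0.5):
        # Situational noise dominates any single occasion.
        situation = random.gauss(0.0, 1.0)
        return situation + weight * disposition > threshold

    def rate(disposition, trials=100_000):
        return sum(acts_virtuously(disposition) for _ in range(trials)) / trials

    print(rate(0.0))  # roughly 0.31: no cultivated disposition
    print(rate(1.0))  # roughly 0.42: cultivation still pays off on average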

comment by thomblake · 2011-09-09T17:54:59.544Z · LW(p) · GW(p)

Alfano is entirely too strict about knowledge, though he rests comfortably in the philosophical landscape there. "Can we know on the basis of folk intuitions that we have traits" isn't as interesting a question when seen in these terms. He does not address the question "Are our folk intuitions about traits strong Bayesian evidence for their existence?", which would be required in order to dismiss consideration of folk intuitions entirely, as he does. Thus, his claim that "We need pay no heed to any attempt to defend virtue ethics that appeals only to intuitions about character traits" has not been proven satisfactorily.

Nonetheless, it's very nice for him that he's discovered that there are biases. Anyone who believes that virtue ethics is true should certainly be aware of the relevant ones.

I submit that the form of his argument could be used just as well against any knowledge claim using those definitions and picking some relevant biases.
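
A toy Bayesian reading of that question, with made-up numbers: if the biases Alfano catalogues would generate trait intuitions almost as readily in a world without stable traits as in one with them, the likelihood ratio sits near 1 and the intuition barely moves the posterior.

    # Toy Bayes check, with made-up numbers: how strongly does "I intuit that
    # X is honest" support "X has a stable honesty trait"?
    prior = 0.5                        # assumed prior that stable traits exist

    p_intuit_given_trait = 0.9         # biases make us attribute traits anyway...
    p_intuit_given_no_trait = 0.8      # ...even if no such trait exists (assumption)

    likelihood_ratio = p_intuit_given_trait / p_intuit_given_no_trait
    posterior_odds = (prior / (1 - prior)) * likelihood_ratio
    posterior = posterior_odds / (1 + posterior_odds)

    print(f"likelihood ratio: {likelihood_ratio:.3f}")  # 1.125: weak evidence
    print(f"posterior: {posterior:.2f}")                # 0.53: barely moved from 0.5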

comment by gwern · 2011-09-09T20:26:59.719Z · LW(p) · GW(p)

Some excerpts:

Why do we have so many trait terms and feel so comfortable navigating the language of traits if actual correlations between traits and individual actions (typically <0.30, as Mischel 1968 persuasively argues)1 are undetectable without the use of sophisticated statistical methodologies (Jennings et al. 1982)?

1 See also Mischel and Peake (1982). Epstein (1983), a personality psychologist, admits that predicting particular behaviors on the basis of trait variables is “usually hopeless.” Fleeson (2001, p. 1013), an interactionist, likewise endorses the 0.30 ceiling.

To answer this question, situationists invoke a veritable pantheon of gods of ignorance and error. Some, like the fundamental attribution error, the false consensus effect, and the power of construal, pertain directly to trait attributions. Others are more general cognitive heuristics and biases, whose relevance to trait attributions requires explanation. These more general heuristics and biases can be classed under the headings of input heuristics and biases and processing heuristics and biases. Input heuristics and biases include selection bias, availability bias, availability cascade, and anchoring. Processing heuristics and biases include disregard of base rates, disregard of regression to the mean, and confirmation bias.

According to Jones and Nisbett (1971, p. 93), the unique breakdown of the fundamental attribution error occurs when we explain what we ourselves have done: instead of underemphasizing the influence of environmental factors, we overemphasize them. Especially when the outcome is negative, we attribute our actions to external factors. This bias seems to tell against situationism, since it suggests that we can recognize the power of situations at least in some cases. However, the existence of such an actor-observer bias has recently come in for trenchant criticism from Malle (2006), whose meta-analysis of three decades worth of data fails to demonstrate a consistent actor-observer asymmetry.2 Malle’s meta-analysis only strengthens the case for the fundamental attribution error. Whereas Jones & Nisbett had argued that it admitted of certain exceptions at least in first-personal cases, Malle shows their exceptionalism to be ungrounded.

In one variation of the [Milgram] obedience experiment, a second experimenter played the role of the victim and begged to be released from the electrodes. Participants in this version of the study had to disagree with one of the experimenters, so a desire to avoid embarrassment and save face would give them no preference for obedience to one experimenter over the other. Nevertheless, in this condition 65% of the participants were maximally obedient to the experimenter in authority, shocking the other experimenter with what they took to be 450 volts three times in a row while he slumped over unconscious (Milgram, p. 95).

When people use the availability heuristic, they take the first few examples of a type that come to mind as emblematic of the whole population. This process can lead to surprisingly accurate conclusions (Gigerenzer 2007, p. 28), but it can also lead to preposterously inaccurate guesses (Tversky and Kahneman 1973, p. 241). We remember the one time Maria acted benevolently and forget all the times when she failed to show supererogatory kindness, leading us to infer that she must be a benevolent person. Since extremely virtuous and vicious actions are more memorable than ordinary actions, they will typically be the ones we remember when we consider whether someone possesses a trait, leading to over-attribution of both virtues and vices.

In her defense of virtue ethics, Kupperman (2001, p. 243) mentions word-of-mouth testimony that “the one student who, when the Milgram experiment was performed at Princeton, walked out at the start was also the person who in Viet Nam blew the whistle on the My Lai massacre.” Such tales are comforting: perhaps a few people really are compassionate in all kinds of circumstances, whether the battlefield or the lab. But while anecdotes about character may be soothing, it should be clear that anecdotal evidence is at best skewed and biased, as well as prone to misinterpretation. We should focus on the fact that most experimental subjects are easily swayed by normatively irrelevant factors, not the fact that one person might be virtuous.

The existence of these biases does not prove that no one has traits, nor does it demonstrate that no arguments could warrant the conclusion that people have traits. What it instead shows is that regardless of whether people have traits, folk intuitions would lead us to attribute traits to them.

It should be noted here that many psychologists, such as Fleeson (2001), do believe in traits, and not merely on the basis of folk intuitions. It is beyond the scope of this article to assess the success of their arguments and the extent to which those arguments apply to virtues (which are a distinctive subspecies of traits individuated not merely causally but by their characteristic reasons).
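
For a rough sense of scale on that sub-0.30 ceiling (an illustrative calculation of my own, not anything from Alfano's paper): a trait-behavior correlation of 0.3 explains about 9% of the variance in single acts, and even a formal test needs on the order of 85 independent observations to detect it reliably.

    # Illustrative only: what an r of about 0.3 amounts to.
    import math

    r = 0.3

    # Share of variance in single behaviors explained by the trait score.
    print(f"variance explained: {r ** 2:.0%}")  # 9%

    # Approximate sample size to detect r = 0.3 at alpha = 0.05 (two-sided)
    # with 80% power, using the Fisher z approximation.
    z_alpha, z_beta = 1.96, 0.84
    fisher_z = 0.5 * math.log((1 + r) / (1 - r))
    n = ((z_alpha + z_beta) / fisher_z) ** 2 + 3
    print(f"observations needed: about {math.ceil(n)}")  # ~85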

Replies from: sam0345
comment by sam0345 · 2011-09-10T09:26:55.672Z · LW(p) · GW(p)

Why do we have so many trait terms and feel so comfortable navigating the language of traits if actual correlations between traits and individual actions (typically <0.30, as Mischel 1968 persuasively argues)1 are undetectable without the use of sophisticated statistical methodologies (Jennings et al. 1982)?

I get the impression I can predict specific bad behavior pretty reliably, implying that folk wisdom can achieve markedly higher correlations than psychometric traits.

Replies from: gwern
comment by gwern · 2011-09-10T18:27:40.452Z · LW(p) · GW(p)

I get the impression I can predict specific bad behavior pretty reliably, implying that folk wisdom can achieve markedly higher correlations than psychometric traits.

I find it amusing that I can quote a paper on how 5-10 cognitive biases lead us to think that there are stable predictable 'character traits' in people with major correlations, and then the first reply is someone saying that they think they see such traits.

I see.

Replies from: sam0345
comment by sam0345 · 2011-09-10T21:04:18.278Z · LW(p) · GW(p)

Such papers come from a field of science whose claims to be scientific, whose claims to be a field of science, are far from universally accepted.

Since its claims to be scientific are weak, any contradiction between its claims and common sense should be interpreted to its disfavor, and in favor of common sense.

comment by Jack · 2011-09-09T20:03:26.338Z · LW(p) · GW(p)

It seems plausible that our capacity for moral judgment might mirror our capacity for belief formation in that it includes crude but efficient algorithms like what we call cognitive biases in belief formation. But I don't think it follows that we can make our moral judgments 'more accurate' by removing moral 'biases' in favor of some idealized moral formula. What our crude but efficient moral heuristics are approximating is evolutionarily advantageous strategies for our memes and genes. But I don't really care about replicating the things that programmed me-- I just care about what they programmed me to care about.

In belief formation there are likely biases that have evolutionary benefits too- it is easier to deceive others if you sincerely believe you will cooperate when you are in a position to defect without retaliation, for example. But we have an outside standard to check our beliefs against-- experience. We know after many iterations of prediction and experiment which reasons for beliefs are reliable and which are not. Obviously, a good epistemology is a lot trickier than I've made it sound but it seems like, in principle, we can make our beliefs more accurate by checking them against reality.

I can't see an analogous standard for moral judgments. This wouldn't be a big problem if our brains were cleanly divided into value-parts and belief-parts. We could then just fix the belief parts and keep the crude-but-hey-that's-how-evolution-made-us value parts. But it seems like our values and beliefs are all mixed up in our cognitive soup. We need a sieve.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-09-10T07:52:26.020Z · LW(p) · GW(p)

But I don't really care about replicating the things that programmed me-- I just care about what they programmed me to care about.

Tangential public advisory: I suspect that it is a bad cached pattern to focus on the abstraction where it is memes and genes that created you rather than, say, your ecological-developmental history or your self two years ago or various plausibly ideal futures you would like to bring about &c. In the context of decision theory I'll sometimes talk about an agent inheriting the decision policy of its creator process which sometimes causes people to go "well I don't want what evolution wants, nyahhh" which invariably makes me facepalm repeatedly in despair.

comment by lessdazed · 2011-09-09T09:41:56.113Z · LW(p) · GW(p)

Assuming the evidence favors the false consensus effect, we may explain its relevance to the dispute about virtues by pointing out that, since people tend to make such rash inferences, they are prone to over-attributing traits. They could reason as follows: “Well, I helped these strange fellows advertise for Joe’s Bar, so almost anyone would do the same. I guess most people are helpful!” Such an inference, however, is at best dubious.

I do not see how the false consensus effect advances the argument.

comment by lessdazed · 2011-09-09T23:46:14.146Z · LW(p) · GW(p)

LW post on an example used in the common, stronger argument against virtue ethics, that we have no character traits. Stronger in that it makes more ambitious claims, not because it is more likely true.
