Do we underuse the genetic heuristic?

post by Stefan_Schubert · 2014-01-22T17:37:26.608Z · LW · GW · Legacy · 18 comments

Contents

  Direct arguments for the genetic heuristic
  Genetic arguments for the genetic heuristic
  Pragmatic considerations
18 comments

Someone, say Anna, has uttered a certain proposition P, say "Betty is stupid", and we want to evaluate whether it is true or not. We can do this by investigating P directly - i.e. we disregard the fact that Anna has said that Betty is stupid, but look only at what we know about Betty's behaviour (and possibly, we try to find out more about it). Alternatively, we can do this indirectly, by evaluating Anna's credibility with respect to P. If we know, for instance, that Anna is in general very reliable, then we are likely to infer that Betty is indeed stupid, but if we know that Anna hates Betty and that she frequently bases her beliefs on emotion, we are not.

The latter kind of argument is called an ad hominem argument or, in Hal Finney's apt phrase, the genetic heuristic (I'm going to use these terms interchangeably here). Such arguments are often criticized, not least within analytical philosophy, where the traditional view is that they are more often than not fallacious. Certainly the genetic heuristic is often applied in fallacious ways, some of which are pointed out in Yudkowsky's article on the topic. Moreover, it seems reasonable to assume that such fallacies would be much more common if they weren't so frequently pointed out by people (accusations of ad hominem fallacies are common in all sorts of debates). No doubt we are biologically disposed to attack the person on irrelevant grounds rather than what he is saying.

The genetic heuristic is not always fallacious, though. If a reputable scientist tells us that P is true, where P falls under her domain, then we have reason to believe that P is true. Similarly, if we know that Liza is a compulsive liar, then we have reason to believe that P is false if Liza has said P.

We see that genetic reasoning can be both positive and negative - i.e. it can be used both to confirm and to disconfirm P. It should also be noted that negative genetic arguments typically only make sense if we assume that we generally put trust in what other people say - i.e. that we use a genetic argument to the effect that the fact that S has said P makes P more likely to be true. If people don't use such arguments, but only look at P directly to evaluate whether it is true or not, it is unclear what importance arguments that throw doubt on the reliability of S have, since in that case knowing whether S is reliable or not shouldn't affect our belief in P.
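To make the positive/negative distinction concrete, here is a minimal Bayesian sketch of how a speaker's reliability changes what their assertion of P should do to our credence in P. This is my own illustration rather than part of the original argument, and all numbers are made-up assumptions:

```python
def posterior_given_assertion(prior_p, p_assert_if_true, p_assert_if_false):
    """P(P is true | S asserted P), computed via Bayes' theorem.

    prior_p:           P(P) before hearing anyone
    p_assert_if_true:  P(S asserts P | P is true)
    p_assert_if_false: P(S asserts P | P is false)
    """
    numerator = p_assert_if_true * prior_p
    return numerator / (numerator + p_assert_if_false * (1 - prior_p))

prior = 0.5  # we start out agnostic about P ("Betty is stupid")

# A reliable speaker asserts P mostly when it is true: positive genetic argument.
print(posterior_given_assertion(prior, 0.9, 0.1))    # ~0.90

# A compulsive liar asserts P mostly when it is false: negative genetic argument.
print(posterior_given_assertion(prior, 0.2, 0.8))    # ~0.20

# A speaker driven by a grudge asserts P almost regardless of the truth:
# her testimony carries very little weight either way.
print(posterior_given_assertion(prior, 0.6, 0.55))   # ~0.52
```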

Three kinds of genetic arguments

We can differentiate between three kinds of genetic arguments (this list is not intended to be exhaustive):

1) Caren is unreliable. Hence we disregard anything she says (e.g. since Caren is three years old).

2) David says P, and given what we know about P and about David (especially David's knowledge of and attitude to P), we have reason to believe that David is not reliable with respect to P. (For instance, P might be some complicated idea in theoretical physics, and we know that David greatly overestimates his knowledge of theoretical physics.)

3) Eric's beliefs on a certain topic have a certain pattern. Given what we know of Eric's beliefs and preferences, this pattern is best explained by the hypothesis that he uses some non-rational heuristic (e.g. wishful thinking). Hence we infer that Eric's beliefs on this topic are not justified. (E.g. Eric is asked to order different people with respect to friendliness, beauty and intelligence. Eric orders people very similarly on all these criteria - a striking pattern that, given what we now know of human psychology, is best explained by the halo effect. A toy numerical sketch of this kind of pattern follows after this list.)

(Possibly 3) could be reduced to 2), but the prototypical instances of these categories are sufficiently different to justify listing them separately.)
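Here is the promised toy sketch. It is purely hypothetical - the ratings are invented - but it shows the kind of statistical regularity the third type of genetic argument trades on:

```python
# Hypothetical data: Eric rates ten acquaintances (1-10) on three traits.
# If his orderings of friendliness, beauty and intelligence are almost
# identical, that is - given what we know of human psychology - some
# evidence that a halo effect, rather than accurate tracking of three
# separate traits, is producing the ratings.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

friendliness = [9, 3, 7, 5, 8, 2, 6, 4, 10, 1]
beauty       = [9, 2, 7, 5, 8, 3, 6, 4, 10, 1]
intelligence = [8, 3, 7, 5, 9, 2, 6, 4, 10, 1]

print(pearson(friendliness, beauty))        # close to 1.0
print(pearson(friendliness, intelligence))  # close to 1.0
# Correlations this high across logically unrelated traits are the
# "striking pattern" that invites a genetic argument of type 3.
```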

Now I would like to put forward the hypothesis that we underuse the genetic heuristic, possibly to quite a great degree. I'm not completely sure of this, though, which is part of the reason why I'm writing this post: I'm curious to see what you think. In any case, here is how I'm thinking.

Direct arguments for the genetic heuristic

My first three arguments are direct arguments purporting to show that genetic arguments are extremely useful.

a) The differences in reliability between different people are vast (as I discuss here; Kaj Sotala gave some interesting data which backed up my speculations). Not only is the difference between, e.g., Steven Pinker and uneducated people vast, but so - more interestingly - is the difference between Steven Pinker and an average academic. If this is true, it makes sense to think that P is more probable conditional on Pinker having said it than conditional on some average academic in his field having said it. But also, and more importantly, it makes sense to read whatever Pinker has written. The main difference between Pinker and the average academic does not concern the probability that what they say is true, but the strikingness of what they are saying. Smart academics say interesting things, and hence it makes sense to read whatever they write, whereas not-so-smart academics generally say dull things. If this is true, then it definitely makes sense to keep good track of who's reliable and interesting (within a certain area or all-in-all), and who is not.

b) Psychologists have, during the last decades, amassed a lot of knowledge of different psychological mechanisms such as the halo effect, the IKEA effect, the just-world hypothesis, etc. This knowledge was not previously available (even though people did have a hunch of some of these mechanisms, as pointed out, e.g., by Daniel Kahneman in Thinking, Fast and Slow). This knowledge gives us a formidable tool for hypothesizing that others' (and, indeed, our own) beliefs are the result of unreliable processes. For instance, there are, I'd say, lots of patterns of beliefs which are suspicious in the same way Eric's are, and which also are best explained by reference to some non-rational psychological mechanism. (I think a lot of the posts on this site could be seen in these terms - as genetic arguments against certain beliefs or patterns of beliefs, based on our knowledge of different psychological mechanisms. I haven't seen anyone phrase this in terms of the genetic heuristic, though.)

c) As mentioned in the first paragraph, those who only evaluate P directly disregard some information - namely the information that Anna has uttered P. It's a general principle in the philosophy of science and Bayesian reasoning that you should use all the available evidence and not disregard anything unless you have special reasons for doing so. Of course, there might be such reasons, but the burden of proof seems to be on those arguing that we should disregard it.
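One way to make this point explicit (a standard Bayesian identity rather than anything argued in the post; the assumption that the direct evidence D and the testimony T are conditionally independent given P is mine, added purely for illustration):

\[
P(P \mid D, T) \;=\; \frac{P(D \mid P)\,P(T \mid P)\,P(P)}{P(D \mid P)\,P(T \mid P)\,P(P) \;+\; P(D \mid \neg P)\,P(T \mid \neg P)\,P(\neg P)}
\]

Conditioning only on D amounts to dropping the likelihood ratio P(T | P) / P(T | ¬P) - which is exactly the piece of evidence the genetic heuristic is supposed to supply.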

Genetic arguments for the genetic heuristic

My next arguments are genetic arguments (well, I should use genetic arguments when arguing for the usefulness of genetic arguments, shouldn't I?) intended to show why we fail to see how useful they are. Now, it should be pointed out that I think we do use them on a massive scale - even though that's too seldom pointed out (and hence it is important to do so). My main point is, however, that we don't do it enough.

d) There are several psychological mechanisms that block us from seeing the scale of the usefulness of the genetic heuristic. For instance, we have a tendency to "believe everything we read/are told". Hence it would seem that we do not disregard what poor reasoners (whose statements we shouldn't believe) say to a sufficient degree. Also, there is, as pointed out in my previous post, the Dunning-Kruger effect, according to which incompetent people massively overestimate their level of competence while competent people underestimate theirs. This makes levels of competence look more similar than they actually are. Also, it is just generally hard to assess reasoning skills, as frequently pointed out here, and in the absence of reliable knowledge people often go for the simple and egalitarian hypothesis that people are roughly equal (I think the Dunning-Kruger effect is partly due to something like this).

It could be argued that there is at least one other important mechanism that plays in the other direction, namely the fundamental attribution error (i.e. we explain others' actions by reference to their character rather than to situational factors). This could lead us to explain poor reasoning by lack of capability, even though the true cause is some situational factor such as fatigue. Now even though you sometimes do see this, my experience is that it is not as common as one would think. It would be interesting to see your take on this.

Of course, people do often classify people who actually are quite reliable and interesting as stupid based on some irrelevant factor, and then use the genetic heuristic to disregard whatever they say. This does not imply that the genetic heuristic is generally useless, though - if you really are good at tracking down reliable and interesting people, it is, to my mind, a wonderful weapon. It does imply that we should be really careful when we classify people. Also, it's of course true that if you are absolutely useless at picking out strong reasoners, then you'd better not use the genetic heuristic but stick to direct arguments.

e) Many social institutions are set up in a way which hides the extreme differences in capability between different people (this is also pointed out in my previous post). Professors are paid roughly the same, are given roughly the same speaking time in seminars, etc., regardless of their competence. This is partly due to the psychological mechanisms that make us believe people are more cognitively equal than they are, but it also reinforces this idea. How could the differences between different academics be so vast, given that they are treated in roughly the same way by society? We are, as always, impressed by what is immediately visible, and have difficulty understanding that huge differences in capability are hidden under the surface.

f) Another reason why these social institutions are set up in this way is egalitarianism: we have a political belief that people should be treated roughly equally, and letting the best professors talk all the time is not compatible with that. This egalitarianism is also, I think, an obstacle to our seeing the vast differences in capability. We engage in wishful thinking to the effect that talent is more equally distributed than it is.

g) There are strong social norms against giving ad hominem arguments to someone else's face. These norms are not entirely unjustified: ad hominem arguments do have a tendency to make debates derail into quarrels. In any case, this makes the genetic heuristic invisible, and, again, people tend to go by what they see and hear, so if they don't hear any ad hominem arguments, they'll use them less. I use the genetic heuristic much more often when I think than when I speak, and since I suspect that others do likewise, its visibility matches neither its use nor its usefulness. (More on this below.)

These social norms are also partly due to the history of analytic philosophy. Analytical philosophers were traditionally strongly opposed to ad hominem arguments. This had partly to do with their strong opposition to "psychologism" - a rather vague term which refers to different uses of psychology in philosophy and logic. Genetic arguments typically speculate that this or that belief was due to some non-rational psychological mechanism, and hence it is easy to see how someone who'd like to banish psychology from philosophy (under which argumentation theory was supposed to fall) would be opposed to such arguments.* 

h) Unlike direct arguments, genetic arguments can be seen as "embarrassing", in a sense. Starting to question why others, or I myself, came to have a certain belief is a rather personal business. (This is of course an important reason why people get upset when someone gives an ad hominem argument against them.) Most people don't want to start questioning whether they believe in this or that simply because it's in their material interest, for if that turned out to be true, they'd come out as selfish. It seems to me that people who underuse genetic reasoning are generally poor not only at metacognition (thinking about one's own thinking) on a narrow construal - i.e. at thinking about what biases they suffer from - but also at analyzing their own personalities as a whole. If that speculation is true, it indicates that genetic reasoning has an empathic and emotional component that direct reasoning typically lacks. I think I've observed many people who are really smart at direct reasoning, but who completely fail at genetic reasoning (e.g. they treat arguments coming from incompetent people as on par with those from competent people). These people tend to lack empathy (i.e. they don't understand other people - or themselves, I would guess).

i) Another important and related reason why we underuse ad hominem arguments is, I think, that we wish to avoid negative emotions, and ad hominem reasoning often does give rise to negative feelings (we think we're being judgy). This goes especially for the kind of ad hominem reasoning that classifies people into smart and dumb people in general. Most people have rather egalitarian views and don't like thinking those kinds of thoughts. Indeed, when I discuss this idea with people they are visibly uncomfortable with it, even though they admit that there is some truth to it. We often avoid thinking about ideas that we're not emotionally comfortable with.

j) Another reason is mostly relevant to the third kind of genetic argument and has to do with the fact that many of these patterns might be so complex as to be hard to spot. This is definitely so, but I'm convinced that with training you could become much better at spotting these patterns than most people are today. As stated, ad hominem arguments aren't held in high regard today, which makes people not so inclined to look for them. In groups where such arguments are seen as important - such as among Marxists and Freudians - people come up with intricate ad hominem arguments all the time. True, these are generally invalid, as they postulate psychological mechanisms that simply aren't there, but there's no reason to believe that you couldn't come up with equally complex ad hominem arguments that track real psychological mechanisms.

Pragmatic considerations

It is true, as many have pointed out, that since genetic reasoning is bound to upset people, we need to proceed cautiously if we're going to use it against someone we're discussing with. However, there are many situations where the object of our genetic reasoning doesn't know that we're using it, and hence can't get upset. For instance, I'm using it all the time when I'm thinking for myself, and this obviously doesn't upset anyone. Likewise, if I'm discussing someone's views - say Karl Popper's - with a friend and I use genetic arguments against Popper's views, that's unlikely to upset my friend.

Also, given the ubiquity of wishful thinking, the halo effect, etc., it seems to me that reasonable people shouldn't get too upset if others hypothesize that they have fallen prey to these biases when the patterns of their beliefs suggest this might be so (as they do in the case of Eric). Indeed, ideally they should anticipate such hypotheses, or objections, by explicitly showing that the patterns that seem to indicate that they have fallen prey to some bias actually do not do so. At the very least, they should acknowledge that these patterns are bound to raise their discussion partners' suspicion. I think it would be a great step forward if our debating culture changed so that this became standard practice.

In general, it seems to me that we pay too much heed to arguments given by people who are not actually persuaded by those arguments, but rather have decided what to believe beforehand and then simply pick whatever arguments support their view (e.g. doctors' arguments for why doctors should be better paid). It is true that such people might sometimes come up with good arguments or evidence for their position, but in general their arguments tend to be poor. I certainly often just turn off when I hear that someone is arguing in this way: I have a limited amount of time, and prioritize listening to people who are genuinely interested in the truth for its own sake.

Another factor that should be considered is that genetic reasoning is, to a certain extent, judgy, elitist and negative. This is not unproblematic: I consider it important to be generally optimistic and positive, not least for your own sake. I'm not really sure what to conclude from this, other than that I think genetic reasoning is an indispensable tool in the rationalist's toolbox, and that you thus have to use it frequently even if it has an emotional cost attached to it.

In genetic reasoning, you treat what is being said - P - more or less as a "black box": you don't try to analyze P or look at how justified P is directly. Instead, you look at the process by which someone came to believe P. This is obviously especially useful when it's hard or time-consuming to assess P directly, while comparatively easy to assess the reliability of the process that gave rise to the belief in P. I'd say there are many such situations. To take but one example, consider a certain academic discipline - call it "modernpostism". We don't know much about the content of modernpostism, since modernpostists use terminology that is hard for outsiders to penetrate. We do know, however, how the bigshots of modernpostism tend to behave and think in other areas, and on that basis we have inferred that they're intellectually dishonest, prone to all sorts of irrational thinking, and simply not very smart. From this, we infer that they probably have no justification for what they're saying in their professional life either. (More examples of useful ad hominem arguments are very welcome.)

Psychology is constantly uncovering new data relevant to ad hominem reasoning - data not only on cognitive biases but also on thought-styles, personality psychology, etc. Indeed, it might even be that brain scanning could be used for these purposes in the future. In principle it should be possible to do a brain scan on the likes of Zizek, Derrida or Foucault, observe that there is nothing much going on in the relevant areas of the brain, and conclude that what they say is indeed rubbish. That would be a glorious victory of cold science over empty bullshit indeed...

I clearly need to learn to write shorter.

* "Anti-psychologism" is a rather absurd position, to my mind. Even though there have of course been misapplications of psychological knowledge in philosophy, a blanket prohibition of the use of psychological knowledge - knowledge of how people typically do reason - in philosophy - which is, at least in part, the study of how we ought to reason - seems to me to be quiet absurd. For an interesting sociological explanation of why this idea became so widespread, see Martin Kusch's Psychologism: A Case Study in the Sociology of Philosophical Knowledge - in effect a genetic argument against anti-psychologism...

Another reason was that analytical philosophers revolted against the rather crude genetic arguments often given by Marxists ("you only say so because you're bourgeois") and Freudians ("you only say so because you're sexually repressed"). Popper's name especially comes to mind here. The problem with their ad hominem arguments was not so much that they were ad hominem, though, but that they were based on flawed theories of how our mind works. We now know much better - the psychological mechanisms discussed here have been validated in countless experiments - and should make use of that knowledge.

There are also other reasons, such as early analytic philosophy's much too "individualistic" picture of human knowledge (a picture which I think comes naturally to us for biological reasons, but which also is an important aspect of Enlightenment thought, starting perhaps with Descartes). They simply underestimated the degree to which we rely on trusting other people in modern society (something discussed, e.g. by Hilary Putnam). I will come back to this theme in a later post but will not go into it further now.

18 comments

Comments sorted by top scores.

comment by passive_fist · 2014-01-22T21:15:33.546Z · LW(p) · GW(p)

I don't mean to be rude or anything, but I'd suggest condensing your argument to its core concepts. I'm pretty sure the inferential distance between what you're trying to say and the mind-frame of the average LW user is not as high as you may think.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-01-23T11:18:18.384Z · LW(p) · GW(p)

Yes... it is a bit rambling. I think one idea might be to split it into two posts, where one has to do with the reliability or unreliability of certain people, and the other has to do with inferring that some particular person has used an irrational psychological mechanism, based on what he has said. I take it that Less Wrong has discussed the latter form of genetic argument (though not in so many words - and I was a bit surprised that Yudkowsky was so critical of genetic reasoning in his post http://lesswrong.com/lw/s3/the_genetic_fallacy/, given that many of his posts can be interpreted as elucidations of such arguments). I haven't seen much discussion of the former idea here, though.

Possibly I'll split the post into two and make them more accessible later.

"I'm pretty sure the inferential distance between what you're trying to say and the mind-frame of the average LW user is not as high as you may think."

Could you elaborate on this please? I'm not quite sure I follow.

comment by ChristianKl · 2014-01-22T23:23:41.873Z · LW(p) · GW(p)

I find the term genetic heuristic very confusing because if I didn't already know the term I would assume that it has something to do with genetics.

Is that just because I'm not a native English speaker?

Why not simply call it the origin heuristic? I think that would lead to significantly less confusion.

Replies from: Nornagest
comment by Nornagest · 2014-01-22T23:42:15.108Z · LW(p) · GW(p)

It's confusing in English too.

The word derives from the Greek γένεσις or genesis, i.e. origin; Wikipedia informs me that it dates from the mid-1930s, not long after genetics in the sense of "study of inheritance" was established as a field (and well before Watson and Crick). Almost certainly the coiners of "genetic fallacy", "genetics", and "gene" were all gesturing toward the same concept; we can hardly blame them for failing to anticipate the changes that bioinformatic ideas would lead to.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-23T00:08:50.454Z · LW(p) · GW(p)

History might explain how the name came about, but it doesn't prevent us from changing it to be more accessible. Especially when we argue that it might be underused, it might need a better name.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-01-23T11:19:57.735Z · LW(p) · GW(p)

I think catchy and intuitive terms are important, so I'd be perfectly willing to change terminology. The only problem is that people usually do use terms such as "genetic fallacy" or "ad hominem arguments", and there is a certain value to sticking to conventions, too.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-23T15:03:52.436Z · LW(p) · GW(p)

Your article doesn't use the term "genetic fallacy" a single time, so to the extent that there is a convention that suggests using the term, you are already breaking it.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-01-23T16:04:07.925Z · LW(p) · GW(p)

There is such a convention: http://en.wikipedia.org/wiki/Genetic_fallacy

I didn't break that convention, since I didn't use any other term for "genetic fallacy" (I didn't use the concept directly, though I did speak of it indirectly by pointing out that genetic arguments are often thought to be fallacious).

But like I said, I'd consider using another term if a catchy one were invented.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-23T23:46:44.736Z · LW(p) · GW(p)

As far as I can see you did invent the term "genetic heuristic". If you google it with quotes, the first 10 search results are your article and a bunch of articles on genetic algorithms. If what you are arguing has something to do with the way the term is used in talking about genetic algorithms, that connection isn't apparent to me.

As I said above, if you say origin heuristic, I think it would be clear what's meant and it would be harder to get false ideas about what you mean. Wikipedia even lists "fallacy of origins" as a synonym for "genetic fallacy", so it's not that you would be inventing more new vocabulary than you already are.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-01-24T12:09:04.196Z · LW(p) · GW(p)

Hal Finney invented the term "genetic heuristic" here at Less Wrong...but it is true that it isn't a standard term (like "the genetic fallacy" is).

I'm not a native English speaker either, so my linguistic sensitivity isn't the best. Is "origin heuristic" optimal? I'm thinking it might be good if the term included something about "person" or "speaker", since that makes it clear that you're attacking or supporting the speaker's reliability (rather than the proposition itself). Of course "ad hominem" does this, but then again there is a case against using Latin terms that people don't understand.

Replies from: ChristianKl
comment by ChristianKl · 2014-01-24T12:22:18.414Z · LW(p) · GW(p)

Hal Finney invented the term "genetic heuristic" here at Less Wrong...but it is true that it isn't a standard term (like "the genetic fallacy" is).

Sorry that I accused the wrong person on LW ;)

If I say something is wrong because it's the party line of the Republican party, I'm not addressing a single person or speaker. I think "origin heuristic" covers that claim quite well.

Do you have a motivation for why you would want to be more specific and not include groups, movements and other sources from which an idea can originate but which are not persons?

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-01-24T13:04:25.271Z · LW(p) · GW(p)

My only objection to "the origin heuristic" is that it might not be sufficiently catchy and intuitive, since it's pretty abstract. That's why I thought something to do with "person" might be preferable. Something to do with "source" is another alternative.

comment by fubarobfusco · 2014-01-23T10:11:18.615Z · LW(p) · GW(p)

This post seems rather long. Here's what I'm hearing here:

Some people's beliefs or assertions (at least, on topics outside the commonplace) do not constitute very good evidence.

If Uncle Eric says our cat looks hungry, don't pay him any mind; he always says that — and you know how overfed his cat is.

Some people are more well-calibrated than others. But it's not instrumentally effective social behavior to go around putting calibration stickers on everyone's forehead.

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-01-23T11:24:53.775Z · LW(p) · GW(p)

"This post seems rather long. Here's what I'm hearing here:

Some people's beliefs or assertions (at least, on topics outside the commonplace) do not constitute very good evidence."

Yes... but my point is also that we underrate the differences between people. Some people's assertions constitute very much stronger evidence than others'.

"Some people are more well-calibrated than others. But it's not instrumentally effective social behavior to go around putting calibration stickers on everyone's forehead."

We do of course do that to a certain degree - give some people titles and such, intended to show that they are well-calibrated. But yes, in many situations genetic arguments have negative side-effects. There are also many situations where they don't, though, as I point out.

comment by ThrustVectoring · 2014-01-23T17:29:42.760Z · LW(p) · GW(p)

People don't only tell us things because they are true. They also tell us things because they want us to believe it. Believing things that others want us to believe is often a very, very bad plan. Especially if we advertise that we believe things because people tell us that we ought to.

Even if you can tell when people are lying, it's still a bad plan. If Alice tells you people's IQ scores only when it will adjust your perception of that person downward and Alice dislikes that person, you're going to reliably treat people that Alice hates worse.

comment by CronoDAS · 2014-01-25T09:54:10.386Z · LW(p) · GW(p)

http://lesswrong.com/lw/he/knowing_about_biases_can_hurt_people/

Replies from: Stefan_Schubert
comment by Stefan_Schubert · 2014-01-25T15:36:53.897Z · LW(p) · GW(p)

Yes... as per usual with him, a good post. One thing, though, is that it is not entirely clear why knowledge of other people's biases would lead to more motivated reasoning than ordinary knowledge, which is used in what I call "direct" argumentation. For instance, say that Peter believes that the stocks he just bought are a great buy, and that he wants to believe that. Then he might use his knowledge of the biases to explain away any argument to the effect that these stocks will lose value - that's true. But likewise, he might use his knowledge of the stock market and the economy to explain away any evidence he gets that seems to indicate that the stocks will go down. The situations seem to be quite analogous.

Perhaps there are reasons to believe that knowledge of cognitive biases might hurt you more than other kinds of knowledge, but the fact that we often engage in motivated reasoning does not in itself show that - you need additional arguments to establish it. Perhaps such arguments can be provided - I don't know - but I'm quite certain that in the long run knowledge about biases will nevertheless make sensible people better reasoners. Indeed, that seems to be an implicit assumption of most LWers (why else would you spend so much time discussing these biases?).

comment by itaibn0 · 2014-01-23T12:58:16.926Z · LW(p) · GW(p)

Yes, but the term 'genetic heuristic' is derived as a sugarcoating of 'genetic fallacy', and we don't want to use fallacies in our reasoning.