The ecological rationality of the bad old fallacies

post by velisar · 2014-03-19T11:39:36.813Z · LW · GW · Legacy · 13 comments

I think that this community may have some of the most qualified people to judge a new framework for studying the fallacies of argumentation with some of the instruments that psychologists use. My friend Dan Ungureanu, a linguist at Charles University in Prague, and I could use some help!

I’ll write a brief introduction to the state of argumentation theory first, for context:

There is such a thing as modern argumentation theory. It can be traced back to the fifties, when Perelman and Olbrechts-Tyteca published their New Rhetoric and Toulmin published his The Uses of Argument. The fallacies of argumentation, now somewhat popular in folk argumentation culture, reached a turning point when the book Fallacies (Hamblin, 1970) argued that most fallacies are not fallacies at all; most of the time they are the reasonable option. Since then, some argumentation schools have taken up Hamblin’s challenge and tried to come up with a theory of fallacies. Of these, the informal logic school and pragma-dialectics are the best known. They have even run empirical experiments to test their philosophies.

Another normative approach, summarized here by Kaj Sotala in Fallacies as weak Bayesian evidence, is to compare fallacious arguments against the Bayesian norm (Hahn & Oaksford, 2007; see also, e.g., Harris, Hsu & Madsen, 2012; Oaksford & Hahn, 2013).
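
To make the Bayesian framing concrete, here is a minimal sketch in the spirit of Hahn & Oaksford's treatment of the argument from ignorance ("no one has found X to be harmful, so X is safe"). The numbers are illustrative assumptions of mine, not values from the cited papers:

```python
# A minimal sketch of the Bayesian treatment of the argument from
# ignorance, in the spirit of Hahn & Oaksford (2007). All numbers are
# illustrative assumptions, not values from the cited papers.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H | e) for hypothesis H and evidence e."""
    joint_h = prior * p_e_given_h
    joint_not_h = (1 - prior) * p_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# H = "the drug is safe"; e = "a test found no evidence of harm".
# A sensitive test rarely misses harm; a weak test misses it often.
for name, pass_rate_if_unsafe in [("weak test", 0.80), ("sensitive test", 0.10)]:
    p = posterior(prior=0.5,
                  p_e_given_h=0.95,                     # safe drugs usually pass
                  p_e_given_not_h=pass_rate_if_unsafe)  # unsafe drugs also pass at this rate
    print(f"{name}: P(safe | no harm found) = {p:.2f}")

# weak test:      0.54  -- "absence of evidence" is nearly worthless
# sensitive test: 0.90  -- the same argument form is now strong
```

The point of the exercise: the same argument form ranges from weak to strong depending on how diligent the search for harm was, which is exactly what a norm of argument strength should capture.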

We cherry-pick a discourse to spot the fallacies. We realized that a couple of years ago, when we had to teach the informal fallacies to journalism master's students: we would pick a text we disagreed with, and then search it for fallacies. Dan and I would often come up with different ones for the same paragraph; the categories are vague. Then we switched to cognitive biases as possible explanations for some fallacies, but we were still in 'privileging the hypothesis' territory, I would say now, with the benefit of hindsight.

Maybe the word "heuristic" has already sprung to mind for some of you. I've seen the idea here and elsewhere on the net: fallacies as heuristics. Argumentation theorists only stumbled on it recently (Walton, 2010).

Now here’s what this whole intro was for: Less Wrong, and Overcoming Bias before it, are sites built on the idea that we can improve our rationality by doing certain things in relation to the now famous Heuristics & Biases program. The heuristics as defined by Tversky and Kahneman are only marginally useful for assessing the heuristic value of a type of argument that we are used to calling a fallacy. The heuristic elicitation design is maybe a first step: we can see whether we have some form of attribute substitution (we always do, if we take a Bayesian daemon as the benchmark).

We started from the observation that if people generally fall back on some particular activity when they are "lazy", that activity could be a precious hint about human nature. We believe that it is far easier to spot a fallacy (a) when you are looking for it, and (b) you usually look for it when the topic is interesting, complex, and grey: theology, law, politics, health, and the like. If the fallacies of argumentation are indeed stable and universal behaviors across (at least some) historical time and across cultures, we can see those "fallacies" as rules of thumb that use other, lower-level fast-and-frugal heuristics as solid inference rules in the right ecology. Ecological rationality is a match between the environment and the (boundedly rational) agent's decision mechanisms (Gigerenzer, 1999; V. Smith, 2003).
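
To illustrate what such a match means, here is a toy simulation of Gigerenzer's recognition heuristic (a sketch of my own, with made-up sizes and recognition rates; this is not code from our paper). "Pick the option you recognize" is a solid inference rule only in an ecology where recognition actually tracks the criterion:

```python
# A toy simulation of ecological rationality (my own sketch): the
# recognition heuristic -- "pick the city you recognize" -- judged on
# the task "which of two cities is larger?".

import random

def accuracy(recognition_tracks_size, n_objects=100, trials=20_000):
    """In the matched ecology, bigger cities are more likely to be
    recognized; in the mismatched one, recognition is random noise."""
    sizes = [random.lognormvariate(0, 1) for _ in range(n_objects)]
    biggest = max(sizes)
    recognized = [
        random.random() < (s / biggest if recognition_tracks_size else 0.5)
        for s in sizes
    ]
    correct = 0
    for _ in range(trials):
        a, b = random.sample(range(n_objects), 2)
        if recognized[a] == recognized[b]:   # both or neither known: guess
            choice = random.choice([a, b])
        else:                                # recognition heuristic applies
            choice = a if recognized[a] else b
        other = b if choice == a else a
        correct += sizes[choice] > sizes[other]
    return correct / trials

print("matched ecology:   ", accuracy(True))    # well above 0.5
print("mismatched ecology:", accuracy(False))   # about 0.5 (chance)
```

In the matched ecology the heuristic scores well above chance; cut the recognition-criterion link and the very same rule drops to a coin flip.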

You can’t just invent a norm and then compare the behavior of organisms or artifacts against it. Not even Bayes' rule: the decisions of an organism only have to be Bayesian in its natural environment (E. T. Jaynes observed this). That is why we need a computational theory of people even when we study arguments: there is no psychology that isn't evolutionary psychology. We need to know the function; but the word "fallacy" carries a valence, so people have traditionally asked why we are so narrow or stupid, or, more recently, when the fallacies are irrational and when they are not. (No, we don't want to restart the 1996 polemic between Gigerenzer and Tversky & Kahneman!)

Well, that is what we think, anyway. If you spot a big flaw, please point it out to us before we send our paper to a journal.

Here’s the draft of our paper:

https://www.academia.edu/6271737/The_Ecological_Rationality_of_Argumentation_Fallacies


Thanks

13 comments

Comments sorted by top scores.

comment by Viliam_Bur · 2014-03-19T13:34:14.733Z · LW(p) · GW(p)

It is true that there are reasons for our biases; human behavior was shaped by evolution and optimized for the natural environment. Many of the mistakes we make are the result of behavior that contributes to survival in nature.

But I think that "contributes to survival" does not always lead to "solid inference rules". For example, imagine that a majority of the tribe is wrong about some factual question. (A question where being right or wrong is not immediately relevant to survival.) It contributes to survival if an individual joins this majority, because it gets them allies. -- This could be excused by saying that in an ancient tribe without much specialization, a majority is more likely to be correct than an individual, therefore "follow the majority opinion" actually is a good truth-finding heuristic. But that ignores the fact that people sometimes lie for a purpose, e.g. calumniate their opponents, or fabricate religious experiences. So there is more to joining the majority than merely a decent truth-finding heuristic.
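
To put a number on that intuition (a quick sketch with made-up probabilities): if each tribe member independently answers a factual question correctly with probability p > 1/2, the majority is right more often than any individual (Condorcet's jury theorem). And the heuristic fails exactly when independence fails -- for example, when some members lie and the others copy them.

```python
# Condorcet's jury theorem, with illustrative numbers: each of n tribe
# members is independently right with probability p; what is the
# chance that the majority is right?

from math import comb

def majority_correct(p, n):
    """P(majority of n independent voters is right); n odd, so no ties."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

for n in (1, 11, 51):
    print(f"n = {n:2d}: {majority_correct(0.6, n):.2f}")
# n =  1: 0.60
# n = 11: ~0.75
# n = 51: ~0.93
```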

(EDIT: It's not that humans in the past lived in harmony with nature using their heuristics, and only today have exploitable biases. People had exploitable biases even in the ancestral environment -- their heuristics were correct often, but not always -- and people exploited each other's biases even then. Not only did we have adaptations for making mostly correct decisions, but also adaptations for exploiting other people's flaws in the former adaptations.)

Also, no species is perfectly tuned to its environment. Some useful mutations simply haven't happened yet. And there are various trade-offs, so even if a species as a whole is optimized for a given environment, some of its individual features may be suboptimal, as the price of improving other, conflicting features. Therefore, assuming that every human bias is the result of perfect behavior in the natural environment would be assuming too much.

But otherwise, I like this.

Replies from: velisar
comment by velisar · 2014-03-19T14:55:19.820Z · LW(p) · GW(p)

I have to admit that the text is a bit long! We did, more or less, say everything you are saying, which means that the way I summarized the text here was a bit misleading.

There must be conditions under which a heuristic like "follow the majority opinion" is triggered in our heads: perhaps something is recognized. There is selection pressure to detect social-exchange violations, but also to be ingenious in persuasion. Some of this already has experimental support. Anyway, we think that what we today call fallacies are not accidents, like the blind spot. They are good inference rules for a relatively stable environment, but they cannot predict far into the future and cannot judge new, complex problems. That may be why we don't spot the fallacies of small talk, of experts in domains where genuine expertise exists, or in domains for which we already have intuitions.

That would imply that a bad decision today is not necessarily the product of a cognitive illusion, but of the bad interfaces we have built for the actual human mind in the modern world (a car would be lighter and faster if it didn't have to accommodate humans). Reference class forecasting or presenting probabilities as frequencies are just technologies, interfaces. The science is about the function, and the fallacies are interesting precisely because, presumably, they are a repetitive behavior. They may help in our effort to reverse-engineer ourselves.
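
As an illustration of the "interface" point (a sketch using the textbook mammography numbers, not data of ours): the same Bayesian computation that most people get wrong in probability format becomes almost transparent as natural frequencies.

```python
# The "frequencies as interface" point, using the textbook mammography
# numbers (an illustration, not our data). The math is identical; only
# the representation changes.

# Probability format: prevalence 1%, sensitivity 80%, false-positive rate 9.6%.
prior, sensitivity, false_pos = 0.01, 0.80, 0.096
p = (prior * sensitivity) / (prior * sensitivity + (1 - prior) * false_pos)
print(f"probability format: P(cancer | positive) = {p:.3f}")   # 0.078

# Natural-frequency format: of 1000 women, 10 have cancer and 8 of them
# test positive; of the 990 without cancer, about 95 test positive.
# So a positive test means: 8 out of (8 + 95) women.
print("frequency format:", 8 / (8 + 95))   # ~0.078, readable at a glance
```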

comment by Gunnar_Zarncke · 2014-03-19T12:03:51.402Z · LW(p) · GW(p)

A quote from that paper:

If a style of argumentation has survived critics for millennia, we can ask several questions: Could it be that there are evolutionary programs running in our heads that systematically push us to do the same things? Are those based on inferences that correlate with good fitness? Where does epistemic value differ from ecologic utility? Do the fallacists have some observation bias; do we suffer from the Focusing illusion (Schkade & Kahneman, 1998) when observing a bad argument?

I have heard this called the fallacy fallacy (though RationalWiki sees that differently).

Replies from: velisar
comment by velisar · 2014-03-19T12:14:02.804Z · LW(p) · GW(p)

You are correct; but the Argument from fallacy is still pretty uninformative.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2014-03-19T12:46:14.296Z · LW(p) · GW(p)

Agreed. But it kind of means that some evolution of fallacies, trending toward more complex argumentation patterns, is taking place. Doesn't it? I'm not versed in the classics, but I take it that they didn't have this large an (anti-)toolset.

Replies from: velisar
comment by velisar · 2014-03-19T15:09:06.804Z · LW(p) · GW(p)

I think any preoccupation, if it exists long enough, results in great refinements. There are people good at rare African languages, mineral water, all sorts of (noble!) sports, torture -- why shouldn't people get better at something as common as argumentation?

But we're advocating a look in the other direction, at the more basic processes; they may say something about how humans work. And indeed, it would be easier with less sophisticated arguers.

comment by Armok_GoB · 2014-03-22T18:33:33.639Z · LW(p) · GW(p)

Conversely, any common and overused or commonly misused heuristic can also be used as a fallacy: the absurdity fallacy, the affect fallacy, the availability fallacy. I probably use these far more than the original as-good-heuristic concepts.

comment by John_Maxwell (John_Maxwell_IV) · 2014-03-20T03:20:44.209Z · LW(p) · GW(p)

My impression was that Kaj's essay was not original to him but rather inspired by the paper he linked to at the bottom.

Replies from: velisar
comment by velisar · 2014-03-20T13:50:38.312Z · LW(p) · GW(p)

I edited for clarity, thanks.

comment by Richard_Kennaway · 2014-03-19T15:58:38.027Z · LW(p) · GW(p)

Can you comment on how the concept of "ecological rationality" relates to this imaginary conversation?

Replies from: velisar
comment by velisar · 2014-03-19T16:28:05.500Z · LW(p) · GW(p)

It seems to me that this is the discussion about optimizing versus satisficing.

If Intel builds a CPU to do division, but finds a way to approximate the results so that the CPU can simulate, I don't know, a nuclear explosion, it should say so. But in our case, we would need God to tell us that the nerves in the skin are thermometers, the eyes height-measuring tools, and so on. The only utility function of organisms that we know for sure is that the code that builds them has to make it into the next generation; we can argue about different strategies, but they depend, sometimes, on too many other things.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2014-03-19T17:01:56.972Z · LW(p) · GW(p)

But in our case, we would need God to tell us that the nerves in the skin are thermometers, the eyes height-measuring tools, and so on.

Historically, it has been the other way round. We can recognise, without the hypothesis of God, that legs are good for walking, eyes for seeing, and so on, and these observable facts were taken as proof of the existence of a Designer.

Having dispensed with the Designer, we are left with the problem of explaining why living organisms appear to be made of distinct parts serving clear functions, and how we are able to say that these functions are sometimes performed well and sometimes badly, how we can describe some processes as pathological and some as healthy.

ETA: The answer isn't "evolution!", because when we use evolutionary techniques to solve computational problems, the result is typically something that works but we can't see how. When we look at living organisms we see things that work that we largely can see how. (The brain is a notable exception. Also, protein folding. But a heart is clearly a pump.)

Replies from: velisar
comment by velisar · 2014-03-19T19:54:45.128Z · LW(p) · GW(p)

True.

But is jealousy pathological? Or anger? Or fear?

I was arguing that the nerves in the skin are only an approximation of thermometers, and likewise the eyes are only a poor measuring tool. By the way, there are 'evolutionary' biases: we perceive a ravine as deeper when we look down into it and, conversely, looking up from the bottom, it doesn't seem as tall (see also auditory looming). Their function is quite transparent once you think about organisms rather than measuring tools.