Improving Human Rationality Through Cognitive Change (intro)
post by lukeprog · 2013-02-24T04:49:48.976Z
This is the introduction to a paper I started writing long ago, but have since given up on. The paper was going to be an overview of methods for improving human rationality through cognitive change. Since it contains lots of handy references on rationality, I figured I'd publish it, in case it's helpful to others.
1. Introduction
During the last half-century, cognitive scientists have catalogued dozens of common errors in human judgment and decision-making (Griffin et al. 2012; Gilovich et al. 2002). Stanovich (1999) provides a sobering introduction:
For example, people assess probabilities incorrectly, they display confirmation bias, they test hypotheses inefficiently, they violate the axioms of utility theory, they do not properly calibrate degrees of belief, they overproject their own opinions onto others, they allow prior knowledge to become implicated in deductive reasoning, they systematically underweight information about nonoccurrence when evaluating covariation, and they display numerous other information-processing biases...
The good news is that researchers have also begun to understand the cognitive mechanisms which produce these errors (Kahneman 2011; Stanovich 2010), they have found several "debiasing" techniques that groups or individuals may use to partially avoid or correct these errors (Larrick 2004), and they have discovered that environmental factors can be used to help people to exhibit fewer errors (Thaler and Sunstein 2009; Trout 2009).
This "heuristics and biases" research program teaches us many lessons that, if put into practice, could improve human welfare. Debiasing techniques that improve human rationality may be able to decrease rates of violence caused by ideological extremism (Lilienfeld et al. 2009). Knowledge of human bias can help executives make more profitable decisions (Kahneman et al. 2011). Scientists with improved judgment and decision-making skills ("rationality skills") may be more apt to avoid experimenter bias (Sackett 1979). Understanding the nature of human reasoning can also improve the practice of philosophy (Knobe et al. 2012; Talbot 2009; Bishop and Trout 2004; Muehlhauser 2012), which has too often made false assumptions about how the mind reasons (Weinberg et al. 2001; Lakoff and Johnson 1999; De Paul and Ramsey 1999). Finally, improved rationality could help decision makers to choose better policies, especially in domains likely by their very nature to trigger biased thinking, such as investing (Burnham 2008), military command (Lang 2011; Williams 2010; Janser 2007), intelligence analysis (Heuer 1999), or the study of global catastrophic risks (Yudkowsky 2008a).
But is it possible to improve human rationality? The answer, it seems, is "Yes." Lovallo and Sibony (2010) showed that when organizations worked to reduce the effect of bias on their investment decisions, they achieved returns up to 7% higher. Multiple studies suggest that a simple instruction to "think about alternative hypotheses" can counteract overconfidence, confirmation bias, and anchoring effects, leading to more accurate judgments (Mussweiler et al. 2000; Koehler 1994; Koriat et al. 1980). Merely warning people about biases can decrease their prevalence, at least with regard to framing effects (Cheng and Wu 2010), hindsight bias (Hasher et al. 1981; Reimers and Butler 1992), and the outcome effect (Clarkson et al. 2002). Several other methods have been shown to ameliorate the effects of common human biases (Larrick 2004). Judgment and decision-making appear to be skills that can be learned and improved with practice (Dhami et al. 2012).
In this article, I first explain what I mean by "rationality" as a normative concept. I then review the state of our knowledge concerning the causes of human errors in judgment and decision-making (JDM). The largest section of the article summarizes what we currently know about how to improve human rationality through cognitive change (e.g. "rationality training"). I conclude by assessing the prospects for improving human rationality through cognitive change, and by recommending particular avenues for future research.
2. Normative Rationality
In cognitive science, rationality is a normative concept (Stanovich 2011). As Stanovich (2012) explains, "When a cognitive scientist terms a behavior irrational he/she means that the behavior departs from the optimum prescribed by a particular normative model."
This normative model of rationality consists in logic, probability theory, and rational choice theory. In their opening chapter for The Oxford Handbook of Thinking and Reasoning, Chater and Oaksford (2012) explain:
Is it meaningful to attempt to develop a general theory of rationality at all? We might tentatively suggest that it is a prima facie sign of irrationality to believe in alien abduction, or to will a sports team to win in order to increase their chance of victory. But these views or actions might be entirely rational, given suitably nonstandard background beliefs about other alien activity and the general efficacy of psychic powers. Irrationality may, though, be ascribed if there is a clash between a particular belief or behavior and such background assumptions. Thus, a thorough-going physicalist may, perhaps, be accused of irrationality if she simultaneously believes in psychic powers. A theory of rationality cannot, therefore, be viewed as clarifying either what people should believe or how people should act—but it can determine whether beliefs and behaviors are compatible. Similarly, a theory of rational choice cannot determine whether it is rational to smoke or to exercise daily; but it might clarify whether a particular choice is compatible with other beliefs and choices.
From this viewpoint, normative theories can be viewed as clarifying conditions of consistency… Logic can be viewed as studying the notion of consistency over beliefs. Probability… studies consistency over degrees of belief. Rational choice theory studies the consistency of beliefs and values with choices.
There are many good tutorials on logic (Schechter 2005), probability theory (Koller and Friedman 2009), and rational choice theory (Allingham 2002; Parmigiani and Inoue 2009), so I will make only two quick points here. First, by "probability" I mean the subjective or Bayesian interpretation of probability, because that is the interpretation which captures degrees of belief (Oaksford and Chater 2007; Jaynes 2003; Cox 1946). Second, within rational choice theory I endorse the normative principle of expected utility maximization (Grant & Van Zandt 2009).
According to this concept of rationality, then, an agent is rational if its beliefs are consistent with the laws of logic and probability theory and its decisions are consistent with the laws of rational choice theory. An agent is irrational to the degree that its beliefs violate the laws of logic or probability theory, or its decisions violate the laws of rational choice theory.1
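To make this definition concrete, here is a minimal sketch in Python (my own toy example; the hypotheses, probabilities, and utilities are invented for illustration) of an agent whose degrees of belief follow probability theory and whose decision follows expected utility maximization:

```python
# Toy sketch: Bayesian belief updating plus expected utility maximization.
# All numbers are invented for illustration.

# Prior degrees of belief over two hypotheses about tomorrow's weather.
prior = {"rain": 0.3, "no_rain": 0.7}

# Likelihood of the observed evidence ("dark clouds") under each hypothesis.
likelihood = {"rain": 0.8, "no_rain": 0.2}

# Bayes' rule: posterior(h) is proportional to prior(h) * likelihood(h).
unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}

# Utilities of each act under each hypothesis (also invented).
utility = {
    ("umbrella", "rain"): 5, ("umbrella", "no_rain"): 3,
    ("no_umbrella", "rain"): -10, ("no_umbrella", "no_rain"): 6,
}

def expected_utility(act):
    """Expected utility of an act under the agent's posterior beliefs."""
    return sum(posterior[h] * utility[(act, h)] for h in posterior)

# Rational choice theory: choose the act with the highest expected utility.
best_act = max(("umbrella", "no_umbrella"), key=expected_utility)
print(posterior)   # {'rain': 0.631..., 'no_rain': 0.368...}
print(best_act)    # umbrella
```

An agent counts as irrational, on this definition, to the degree that its beliefs or choices cannot be reconciled with computations of this form.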
Researchers working in the heuristics and biases tradition have shown that humans regularly violate the norms of rationality (Manktelow 2012; Pohl 2005). These researchers tend to assume that human reasoning could be improved, and thus they have been called "Meliorists" (Stanovich 1999, 2004), and their program of using psychological findings to make recommendations for improving human reasoning has been called "ameliorative psychology" (Bishop and Trout 2004).
Another group of researchers, termed the "Panglossians,"2 argue that human performance is generally "rational" because it manifests an evolutionary adaptation for optimal information processing (Gigerenzer et al. 1999).
I disagree with the Panglossian view for reasons detailed elsewhere (Griffiths et al. 2012:27; Stanovich 2010, ch. 1; Stanovich and West 2003; Stein 1996), though I also believe the original dispute between Meliorists and Panglossians has been exaggerated (Samuels et al. 2002). In any case, a verbal dispute over what counts as "normative" for human JDM need not detain us here.3 I have stipulated my definition of normative rationality (for the purposes of cognitive psychology) above. My concern is with the question of whether cognitive change can improve human JDM in ways that enable humans to achieve their goals more effectively than without cognitive change, and it seems (as I demonstrate below) that the answer is "yes."
My view of normative rationality does not imply, however, that humans ought to explicitly use the laws of rational choice theory to make every decision. Neither humans nor machines have the knowledge and resources to do so (Van Rooij 2008; Wang 2011). Thus, in order to approximate normative rationality as best we can, we often (rationally) engage in a "bounded rationality" (Simon 1957) or "ecological rationality" (Todd and Gigerenzer 2012) or "grounded rationality" (Elqayam 2011) that employs simple heuristics to imperfectly achieve our goals with the limited knowledge and resources at our disposal (Vul 2010; Vul et al. 2009; Kahneman and Frederick 2005). The best prescription for human reasoning, then, is not necessarily to always use the normative model to govern one's thinking (Grant & Van Zandt 2009; Stanovich 1999; Baron 1985). Baron (2008, ch. 2) explains:
In short, normative models tell us how to evaluate judgments and decisions in terms of their departure from an ideal standard. Descriptive models specify what people in a particular culture actually do and how they deviate from the normative models. Prescriptive models are designs or inventions, whose purpose is to bring the results of actual thinking into closer conformity to the normative model. If prescriptive recommendations derived in this way are successful, the study of thinking can help people to become better thinkers.
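As a concrete illustration of the bounded-rationality point above, here is a minimal sketch (my own toy example; the cues and city data are invented) of a fast-and-frugal heuristic in the spirit of Gigerenzer and colleagues' "take-the-best": rather than integrating all available evidence, the agent checks cues in order of validity and decides on the first cue that discriminates:

```python
# Toy sketch of the "take-the-best" heuristic: which of two cities is larger?
# Cues are binary (1 = present, 0 = absent), ordered from most to least valid.
# All cue data are invented for illustration.

CUES = ["has_major_airport", "is_state_capital", "has_university"]

cities = {
    "A": {"has_major_airport": 1, "is_state_capital": 0, "has_university": 1},
    "B": {"has_major_airport": 1, "is_state_capital": 1, "has_university": 0},
}

def take_the_best(x, y):
    """Guess the larger city by the first cue that discriminates between them."""
    for cue in CUES:
        if cities[x][cue] != cities[y][cue]:
            return x if cities[x][cue] else y
    return None  # no cue discriminates; in practice, guess at random

print(take_the_best("A", "B"))  # B (the second cue decides; the rest are ignored)
```

Such a heuristic ignores most of the available information, yet in environments whose structure matches the cue ordering it can perform nearly as well as a full normative computation, at a fraction of the cost.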
[next, I was going to discuss the probable causes of JDM errors, tested methods for amelioration, and promising avenues for further research]
Notes
1 For a survey of other conceptions of rationality, see Nickerson (2007). Note also that my concept of rationality is personal, not subpersonal (Frankish 2009; Davies 2000; Stanovich 2010:5).
2 The adjective "Panglossian" was originally applied by Stephen Jay Gould and Richard Lewontin (1979), who used it to describe knee-jerk appeals to natural selection as the force that explains every trait. The term comes from Voltaire's character Dr. Pangloss, who said that "our noses were made to carry spectacles" (Voltaire 1759).
3 To resolve such verbal disputes we can employ the "method of elimination" (Chalmers 2011) or, as Yudkowsky (2008) put it, we can "replace the symbol with the substance."
5 comments
comment by Kawoomba · 2013-02-24T07:32:13.191Z
The hard question to me seems to be when to employ simple heuristics, and when to expend more resources on the decision process. That's the crux, because if there's a sort of bias in dedicating resources, it's easy to achieve arbitrary results (just as if you only take apart your opponent's arguments, and don't apply the same rigor to your own).
comment by chemotaxis101 · 2013-02-24T14:58:32.312Z
Incidentally, the blog InDecision has just published, as part of its "Research Heroes" series, a brief interview with the above-cited experimental psychologist Jonathan Baron (one of his books was described as "a more focused and balanced introduction to the subject of rationality than the Sequences" in one of Vaniver's very useful summaries), and he stressed again his position that "rational thinking is both learnable and part of intelligence itself."
comment by timtyler · 2013-02-24T18:06:13.493Z
Another group of researchers, termed the "Panglossians,"2 argue that human performance is itself normatively rational because it manifests an evolutionary adaptation for optimal information processing (Gigerenzer et al. 1999).
I don't think anyone actually thinks that. The idea is more that many supposed biases (e.g. overconfidence, self-serving bias) are actually adaptive. For example, see The Folly of Fools: The Logic of Deceit and Self-Deception in Human Life.
comment by lukeprog · 2013-02-24T20:30:09.383Z
I guess it would be safer to say that while the Panglossians (not what they call themselves) may not think human performance itself is normative, they do think that human performance is usually rational. I've edited the OP with this change.
But, to illustrate the difference between the Meliorists and the Panglossians, and why I side with the Meliorists, let me quote from the Panglossians themselves (Todd & Gigerenzer 2012):
We use the term logical rationality for theories that evaluate behavior against the laws of logic or probability rather than success in the world... Logical rationality is determined a priori... instead of by testing behavior in natural environments... [In contrast,] the study of ecological rationality investigates the fit between [the structure of task environments and the computational capabilities of the actor].
But this seems to miss the point made by Baron since the 1980s about the difference between normative, descriptive, and prescriptive rationality. The Meliorists never said that the way to achieve success in the world was to explicitly use Bayes' Theorem 100 times a day. What they said was that human behavior falls short of the normative models (from logic, probability theory, and rational choice theory) in measurable ways, and that (just as the Panglossians maintain) we can look at the structure of human task environments and the way the human brain works in order to offer prescriptions for how people can perform more effectively, e.g. by changing the environment (Thaler & Sunstein 2009) or by training the human with a new cognitive heuristic like "think of the opposite" (Larrick 2004).
So do the Meliorists and Panglossians actually disagree, or are they merely talking past each other? Starting on page 10 of Rationality and the Reflective Mind, Stanovich argues that the two groups substantively disagree. I'll let you read that here. Another good overview of the substantive disagreements is Vranas (2000), which reviews the debate over normative rationality between Gigerenzer (a Panglossian) and Kahneman & Tversky (both Meliorists).