Elqayam & Evans (2011) argue against a certain kind of normativism about rationality

post by lukeprog · 2011-09-04T05:57:08.205Z · LW · GW · Legacy · 11 comments


A forthcoming edition of Behavioral and Brain Sciences will be devoted to Elqayam & Evans' (2011) critique of normativism about rationality and brief responses to it.

Abstract:

We propose a critique of normativism, defined as the idea that human thinking reflects a normative system against which it should be measured and judged. We analyze the methodological problems associated with normativism, proposing that it invites the controversial is-ought inference, much contested in the philosophical literature. This problem is triggered when there are competing normative accounts (the arbitration problem), as empirical evidence can help arbitrate between descriptive theories, but not between normative systems. Drawing on linguistics as a model, we propose that a clear distinction between normative systems and competence theories is essential, arguing that equating them invites an ‘is-ought’ inference; to wit, supporting normative ‘ought’ theories with empirical ‘is’ evidence. We analyze in detail two research programs with normativist features, Oaksford and Chater’s rational analysis, and Stanovich and West’s individual differences approach, demonstrating how in each case equating norm and competence leads to an is-ought inference. Normativism triggers a host of research biases in the psychology of reasoning and decision making: focusing on untrained participants and novel problems, analyzing psychological processes in terms of their normative correlates, and neglecting philosophically significant paradigms when they do not supply clear standards for normative judgment. For example, in a dual-process framework, normativism can lead to a fallacious ‘ought-is’ inference, in which normative responses are taken as diagnostic of analytic reasoning. We propose that little can be gained from normativism that cannot be achieved by descriptivist computational-level analysis, illustrating our position with Hypothetical Thinking Theory and the theory of the suppositional conditional. We conclude that descriptivism is a viable option, and that theories of higher mental processing would be better off freed from normative considerations.

11 comments

Comments sorted by top scores.

comment by Wei Dai (Wei_Dai) · 2011-09-04T06:15:39.658Z · LW(p) · GW(p)

Your title, "LessWrong usually assumes normativism about rationality; Elqayam & Evans (2011) argue against it" makes it sound like the authors disagree with LW. I don't think they do. They're pointing out some methodological problems with psychological research that involves measuring people's actual cognitive processes against norms of rationality, which have little to do with our use of normativism (i.e., using normative rationality to improve people's reasoning and decision making).

In the conclusion they specifically disclaim that they're arguing against our kind of normativism:

It is not our purpose to exclude normativism entirely from scientific endeavor. There is a need for research in education, planning, policy development and so on, in all of which norms play a crucial role. The Meliorist position is a strong case in point, both the version advocated so powerfully by the individual differences research program of Stanovich and West (2000; Stanovich, 1999; 2004; 2009b), and the version put forward by Baron (e.g., 2008). Such authors wish to find ways to improve people’s reasoning and decision-making and therefore require some standard definition of what it means to be rational.

I think they are also not saying that human thinking and decisions can't be measured against normative models. My understanding is that they are suggesting that doing so makes it easy for several fallacies and biases to sneak into one's research, so it's a bad idea in practice for someone trying to find out how humans actually think.

Replies from: ciphergoth, lukeprog
comment by Paul Crowley (ciphergoth) · 2011-09-04T07:32:02.987Z · LW(p) · GW(p)

From this description, they are cautioning against treating the is-brain as the should-brain plus a diff.

comment by lukeprog · 2011-09-04T09:14:55.808Z · LW(p) · GW(p)

Critique accepted, post title and body edited.

comment by Manfred · 2011-09-04T06:34:59.113Z · LW(p) · GW(p)

The paper was better than I expected. Part of that is that I misunderstood what was meant by "normativism" - they actually excluded instrumental rationality, defined as "Behaving in such a way as to achieve one’s personal goals."

If we pull the now-possibly-standard LW trick of defining "ought" as "me::ought," suddenly we're all talking about instrumental rationality. There is some trouble because how we extract preferences from human brains is not fully determined, but that is at least a more "meta" level of normativism.

comment by shokwave · 2011-09-04T02:25:02.393Z · LW(p) · GW(p)

I really like the distinction they draw between 'empirical logicism' (believing that thinking reflects some internalised form of classical logic) and 'prescriptive logicism' (believing that thinking should be measured against logic and evaluated on how closely it conforms). That's not to say they have a point (I haven't read far enough to decide), but that distinction is going to be really useful in explaining parts of rationality: "I don't think human brains work this way; I think they should, though".

comment by Hyena · 2011-09-04T04:06:26.502Z · LW(p) · GW(p)

I can't finish this paper since it seems fairly confused. I'd just point out that the paper's arguments, being motivated by "what's necessary for research", are irrelevant. It doesn't particularly matter that researchers have a normative system, so long as they don't have a preconception about the depth of adherence. For example, if my normative system damns witches and harlots, I don't really have a problem for research: I know how many witches and harlots there are, and I also think they're bad people. In fact, I might think society is very much in trouble because of their number; so long as I don't engage in wishful thinking about social composition, this fact changes nothing.

Secondly, the argument that we can't arbitrate normative standards is silly. Since all of them have to be implemented physically and have calculable products, we can always guarantee that rationality is at least a meta-standard via a simulation argument.

comment by Ed_Stupple · 2014-12-08T11:59:40.339Z · LW(p) · GW(p)

Frontiers in Psychology: Cognitive Science has a special issue which extends some of these arguments: http://journal.frontiersin.org/ResearchTopic/1185

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-09-04T02:14:56.720Z · LW(p) · GW(p)

Was this meant to be a Discussion post?

Replies from: wedrifid, lukeprog
comment by wedrifid · 2011-09-04T05:56:49.966Z · LW(p) · GW(p)

It's a shame you don't have moderator powers which could have allowed you to make this a "moved to discussion" comment!

comment by lukeprog · 2011-09-04T05:57:19.762Z · LW(p) · GW(p)

Oops, yes! Moved.

comment by lessdazed · 2011-09-04T04:51:40.506Z · LW(p) · GW(p)

We propose a critique of normativism, defined as the idea that human thinking reflects a normative system against which it should be measured and judged.

The average human brain has a measurable ability to turn nutrients into heat, dependent on circumstances it is in. The average human brain has a measurable ability to turn evidence into true conclusions, dependent on circumstances it is in.

There is an infinity of possible utility functions, with all sorts of goals. Among those are some with ultimate or instrumental goals of producing heat from nutrients and/or true conclusions from evidence. Also among those are some with ultimate or instrumental goals of having ultimate and/or instrumental goals of producing heat from nutrients and/or true conclusions from evidence, and so on.

The laws of the universe are not such that every being with a utility function has one of the utility functions with a relationship to producing heat from nutrients and/or true conclusions from evidence. I agree with the paper's authors on this, though I had thought this was part of normativism.

I happen to have producing true conclusions from evidence as a goal. Many other beings near me seem to too, a fact I think is important - this justifies acting like a normativist. Do you think reading this paper will help me reach that goal, or would that only do things I don't care about, like convert nutrients to heat?