Ideas for heuristics and biases research topic?
post by Tesseract · 2011-09-25T18:20:53.170Z · LW · GW · Legacy · 25 comments
Hey Less Wrong,
I'm currently taking a cognitive psychology class, and will be designing and conducting a research project in the field — and I'd like to do it on human judgment, specifically heuristics and biases. I'm currently doing preliminary research to come up with a more specific topic to base my project on, and I figured Less Wrong would be the place to come to find questions about flawed human judgment. So: any ideas?
(I'll probably be using these ideas mostly as guidelines for forming my research question, since I doubt it would be academically honest to take them outright. The study will probably take the form of a questionnaire or online survey, but experimental manipulation is certainly possible and it might be possible to make use of other psych department resources.)
Comments sorted by top scores.
comment by lukeprog · 2011-09-25T20:36:41.914Z · LW(p) · GW(p)
Rationality drugs. Many nootropics can increase cognitive capacity, which, according to Stanovich's picture of the cognitive science of rationality, should help with performance on some rationality measures. However, good performance on many rationality measures requires not just cognitive capacity but also cognitive reflectiveness: the disposition to choose to think carefully about something and avoid bias. So: Are there drugs that increase cognitive reflectiveness / "need for cognition"?
Debiasing. I'm developing a huge, fully-referenced table of (1) thinking errors, (2) the normative models they violate, (3) their suspected causes, (4) rationality skills that can meliorate them, and (5) rationality exercises that can be used to develop those rationality skills. Filling out the whole thing is of course taking a while, and any help would be appreciated. A few places where I know there's literature but I haven't had time to summarize it yet include: how to debias framing effects, how to debias base rate neglect, and how to debias confirmation bias. (But I have, for example, already summarized everything on how to debias the planning fallacy.)
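To make the shape of that table concrete, here is a minimal sketch (in Python) of what a single row might look like, using the planning fallacy since that entry is described above as already summarized. This is not the actual table; every field value below is an illustrative assumption, except that reference class forecasting is a standard debiasing technique for the planning fallacy (Buehler, Griffin & Ross 1994 is the classic study).

```python
# Illustrative sketch of one row of a five-column debiasing table (not the real table).
planning_fallacy_row = {
    "thinking_error": "Planning fallacy (underestimating task completion times)",
    "normative_model_violated": "Forecasting from the distribution of similar past cases",
    "suspected_causes": [
        "Inside view: building a best-case scenario from the plan itself",
        "Selective recall of past task durations",
    ],
    "rationality_skills": ["Reference class forecasting (taking the outside view)"],
    "rationality_exercises": [
        "Predict completion times for small weekly tasks and score the predictions",
    ],
    "references": ["Buehler, Griffin & Ross (1994)"],
}
```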
Replies from: lessdazed, steven0461, lessdazed, lessdazed, lessdazed, Tesseract, lessdazed
↑ comment by lessdazed · 2011-09-25T21:28:54.535Z · LW(p) · GW(p)
I did my high school science experiment on nootropics in rats, to see if they affected the time it took the rats to learn to navigate a maze, compared to a control group that didn't take them.
The test subjects I gave the drugs to all died. The control group eventually learned how to go through the maze without making any wrong turns.
I was given a B+ and an admonishment to never let anyone know the teacher had pre-approved my experimenting on mammals.
Replies from: fubarobfusco
↑ comment by fubarobfusco · 2011-09-25T22:25:40.389Z · LW(p) · GW(p)
What drugs did you give them?
Replies from: lessdazed
↑ comment by lessdazed · 2011-09-25T23:11:31.836Z · LW(p) · GW(p)
Ginkgo biloba extract.
Replies from: wedrifid, fubarobfusco
↑ comment by fubarobfusco · 2011-09-26T00:44:05.624Z · LW(p) · GW(p)
Ah, that explains my confusion; I saw "nootropics" and expected a racetam or something along those lines.
↑ comment by steven0461 · 2011-09-25T22:36:31.704Z · LW(p) · GW(p)
cognitive reflectiveness: the disposition to choose to think carefully about something and avoid bias
I sometimes worry that this disposition may be more important than everything we typically think of as "rationality skills" and more important than all the specific named biases that can be isolated and published, but that it's underemphasized on LW because "I'll teach you these cool thinking skills and you'll be a strictly more awesome person" makes for a better website sales pitch than "please be cognitively reflective to the point of near-neuroticism, I guess one thing that helps is to have the relevant genes".
Replies from: None
↑ comment by lessdazed · 2011-09-30T20:54:39.995Z · LW(p) · GW(p)
Do you know of research supporting debiasing scope insensitivity by introducing differences in kind that approximately preserve the subjective quantitative relationship? If not I will look for it, but I don't want to if you already have it at hand.
I am thinking in particular of Project Steve. Rather than counter a list of many scientists who "Dissent from Darwinism" with a list of many scientists who believe evolution works, they made a list of hundreds of scientists named Steve who believe evolution works.
In the mind, "many people" registers as roughly the same amount whether it's hundreds or thousands, but "many Steves" reads as more people than "many people" does. That's the theory, anyway.
Intuitively it sounds like it should work, but I don't know if there are studies supporting this.
Replies from: steven0461
↑ comment by steven0461 · 2011-09-30T22:24:41.461Z · LW(p) · GW(p)
There's our solution to scope insensitivity about existential risks. "If unfriendly AI undergoes an intelligence explosion, millions of Steves will die. Won't somebody please think of the Steves?"
↑ comment by lessdazed · 2011-09-25T22:21:15.679Z · LW(p) · GW(p)
how to debias base rate neglect
Convert numbers and rates into equivalent traits or dispositions: Convert "85% of the taxis in the city are green" to "85% of previous accidents involved drivers of green cabs". (Recent Kahneman interview)
Requisition social thinking: Convert "85%" to "85 out of 100", or "Which cards must you turn over" to "which people must you check further" (Wason test).
how to debias framing effects
Have people been trained to automatically think of "mortality rates" as "survival rates" and such? A good dojo game would be practicing thinking in terms of the opposite framing as quickly as possible, until it becomes pre-conscious and one is consciously aware of both what one heard and its opposite at the same time.
Fresh off the presses in the American Political Science Review (August), from Yale's John Bullock: http://bullock.research.yale.edu/papers/elite/elite.pdf
An enduring concern about democracies is that citizens conform too readily to the policy views of elites in their own parties, even to the point of ignoring other information about the policies in question. This article presents two experiments that undermine this concern, at least under one important condition. People rarely possess even a modicum of information about policies; but when they do, their attitudes seem to be affected at least as much by that information as by cues from party elites. The experiments also measure the extent to which people think about policy. Contrary to many accounts, they suggest that party cues do not inhibit such thinking. This is not cause for unbridled optimism about citizens’ ability to make good decisions, but it is reason to be more sanguine about their ability to use information about policy when they have it.
(Emphasis mine.)
If one knew the extent one was biased by cues, and one knew one's opinion based on cues and facts, it would be possible to calculate what one's views would be without cues.
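Here's a minimal sketch of that calculation under a deliberately simple linear mixing model; the 0-to-1 cue weight, the attitude scale, and the function name are illustrative assumptions, not anything measured in the paper.

```python
def decue(observed_view, cue_view, cue_weight):
    """Back out the cue-free view assuming a linear mixing model:
    observed = cue_weight * cue_view + (1 - cue_weight) * fact_view.
    Views are on a common scale, e.g. -1 (oppose) to +1 (support)."""
    if not 0 <= cue_weight < 1:
        raise ValueError("cue_weight must be in [0, 1)")
    return (observed_view - cue_weight * cue_view) / (1 - cue_weight)

# Example: I support the policy at +0.6, my party's elites are at +1.0,
# and I estimate 40% of my attitude is driven by the party cue.
print(decue(0.6, 1.0, 0.4))  # -> ~0.33, the estimated cue-free view
```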
Replies from: lukeprog
↑ comment by lukeprog · 2011-09-27T00:46:48.118Z · LW(p) · GW(p)
Thanks! I knew some of that stuff, but not all. But for the table of thinking errors and debiasing techniques I need the references, too.
Replies from: lessdazed
↑ comment by lessdazed · 2011-09-27T06:22:34.963Z · LW(p) · GW(p)
http://edge.org/conversation/the-marvels-and-flaws-of-intuitive-thinking
Now have a look at a very small variation that changes everything. There are two companies in the city; they're equally large. Eighty-five percent of cab accidents involve blue cabs. Now this is not ignored. Not at all ignored. It's combined almost accurately with a base rate. You have the witness who says the opposite. What's the difference between those two cases? The difference is that when you read this one, you immediately reach the conclusion that the drivers of the blue cabs are insane, they're reckless drivers. That is true for every driver. It's a stereotype that you have formed instantly, but it's a stereotype about individuals, it is no longer a statement about the ensemble. It is a statement about individual blue drivers. We operate on that completely differently from the way that we operate on merely statistical information that that cab is drawn from that ensemble.
...
A health survey was conducted in a sample of adult males in British Columbia of all ages and occupations. "Please give your best estimate of the following values: What percentage of the men surveyed have had one or more heart attacks? The average is 18 percent. What percentage of men surveyed both are over 55 years old, and have had one or more heart attacks? And the average is 30 percent." A large majority says that the second is more probable than the first.
Here is an alternative version of that which we proposed, a health survey, same story. It was conducted in a sample of 100 adult males, so you have a number. "How many of the 100 participants have had one or more heart attacks, and how many of the 100 participants both are over 55 years old and have had one or more heart attacks?" This is radically easier. From a large majority of people making mistakes, you get to a minority of people making mistakes. Percentages are terrible; the number of people out of 100 is easy.
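A minimal sketch of the arithmetic that the frequency framing makes salient: in a fixed sample of 100 people, the count for a conjunction can never exceed the count for either of its parts. The counts below simply rescale the averages quoted above; the function name is mine.

```python
def coherent_counts(n_sample, count_a, count_a_and_b):
    """In a fixed sample, 'A and B' can never outnumber 'A' (or the sample itself).
    Counts out of 100 make this obvious in a way that percentages do not."""
    return 0 <= count_a_and_b <= count_a <= n_sample

# Kahneman's reported averages, read as counts out of 100 men:
print(coherent_counts(100, count_a=18, count_a_and_b=30))  # False -> conjunction error
```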
Regarding framing effects, one could write a computer program into which one could plug in numbers and have a decision converted into an Allais paradox.
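A minimal sketch of what such a converter could look like, using the classic common-consequence construction behind the Allais paradox; the dollar amounts and probabilities are the standard textbook numbers, and the function and variable names are mine.

```python
# A gamble is a list of (probability, outcome) pairs whose probabilities sum to 1.
def add_common_consequence(gamble, p_common, outcome_common):
    """With probability p_common you get outcome_common; otherwise you play the
    original gamble. The independence axiom says this mixing should not change
    which of two gambles you prefer."""
    return [(p_common, outcome_common)] + [(p * (1 - p_common), x) for p, x in gamble]

def expected_value(gamble):
    return sum(p * x for p, x in gamble)

# "Reduced" choice: a sure $1M versus a ~91% shot at $5M.
safe = [(1.0, 1_000_000)]
risky = [(10 / 11, 5_000_000), (1 / 11, 0)]

# Framing 1 (Allais 1A vs 1B): mix both options with an 89% chance of $1M.
pair1 = (add_common_consequence(safe, 0.89, 1_000_000),
         add_common_consequence(risky, 0.89, 1_000_000))

# Framing 2 (Allais 2A vs 2B): mix both options with an 89% chance of $0.
pair2 = (add_common_consequence(safe, 0.89, 0),
         add_common_consequence(risky, 0.89, 0))

# The expected-value gap between the two options is identical in both framings,
# so a preference that flips between them is being driven by the framing.
for label, (a, b) in (("pair1", pair1), ("pair2", pair2)):
    print(label, expected_value(a), expected_value(b))
```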
One could commit to donating an amount of money to charity any time a free thing is acquired. (Ariely Lindt/Hershey's experiment)
↑ comment by lessdazed · 2011-09-30T04:48:25.809Z · LW(p) · GW(p)
Are you including inducing biases as part of "debiasing"? For example, if people are generally too impulsive in spending money, a mechanism that merely made people more restrained could counteract that, but would be vulnerable to overshooting or undershooting. Here is the relevant study:
In Studies 2 and 3, we found that higher levels of bladder pressure resulted in an increased ability to resist impulsive choices in monetary decision making.
Replies from: lukeprog, wedrifid
↑ comment by lukeprog · 2011-09-30T05:43:54.190Z · LW(p) · GW(p)
Are you including inducing biases as part of "debiasing"?
I probably should. This is usually called "rebiasing."
Replies from: lessdazed
↑ comment by lessdazed · 2011-09-30T06:00:15.196Z · LW(p) · GW(p)
I suggest making it a separate category, at least to start with. It will be easier to recombine them into debiasing later if it turns out the distinction makes little sense and there is a range of anti-biasing from debiasing to rebiasing, than it would be to separate them after everything is filled in.
↑ comment by Tesseract · 2011-09-27T22:13:42.488Z · LW(p) · GW(p)
Ah, I think you misunderstood me (on reflection, I wasn't very clear) — I'm doing an experiment, not a research project in the sense of looking over the existing literature.
(For the record, I decided on conducting something along the lines of the studies mentioned in this post to look at how distraction influences retention of false information.)
↑ comment by lessdazed · 2011-09-28T05:03:56.530Z · LW(p) · GW(p)
Have you included racism or its sub-components as fallacies? If so, what are the sub-components the fixing of which would ameliorate racism?
Replies from: lukeprog
↑ comment by lukeprog · 2011-09-28T06:37:35.168Z · LW(p) · GW(p)
I have not. I'm not familiar with that literature, but Google is. Lemme know if you find anything especially interesting!
Replies from: lukeprog
↑ comment by lukeprog · 2011-09-28T19:33:59.147Z · LW(p) · GW(p)
Uh... was I downvoted for replying with helpful links to a comment that was already below the 0 threshold?
Or perhaps I was downvoted for not including racism as a cognitive bias on my developing table of biases?
Replies from: dlthomas, lessdazed
↑ comment by lessdazed · 2011-09-28T20:05:18.635Z · LW(p) · GW(p)
Probably the latter. I'm reading through links from the links from the links of what you linked to; perhaps you could list all the biases you could use help on? I think my Ariely Lindt/Hershey's solution of imposing a self-penalty whenever accepting free things was a clever way of debiasing that bias (though I would think so, wouldn't I?), and in the course of reading through all kinds of these articles (on a topic I am interested in) I could provide similar things.
I really do go through a lot of this stuff independently; I had read the Bullock paper and the Kahneman interview before you asked for help, and only after you asked did I know I had information you wanted.
In any case, my above comment was probably downvoted for being perceived as posturing rather than because it isn't a common concern. That interpretation best explains my getting downvoted for raising the issue and you being downvoted for not taking it maximally seriously.
comment by lessdazed · 2011-09-25T20:13:57.233Z · LW(p) · GW(p)
In any situation where people make decisions and have a theory of how they make those decisions, they can be systematically wrong. This includes people predicting what they would do in a set of future circumstances.
This is an easier form of irrationality to break ground in than intransitive preferences (A>B, B>C, C>A).
Another trope in such experiments is having subjects predict things about average people and also about themselves.
One research technique to be aware of is the one where questionnaires are handed out with a die, and respondents are instructed to roll it for every answer, and always answer "yes" on a roll of six and "no" on a roll of one, regardless of the true answer to that question. I forget what it and its variants are called.
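This sounds like what is usually called the randomized response technique: the die gives respondents deniability on sensitive questions, and the noise it adds can be subtracted out afterwards. Here is a minimal simulation sketch, assuming the exact die rule described above; the true rate and sample size are made-up illustrations.

```python
import random

def randomized_response_estimate(true_rate, n, seed=0):
    """Simulate the die-roll questionnaire: answer 'yes' on a 6, 'no' on a 1,
    truthfully otherwise. Since P(yes observed) = 1/6 + (4/6) * true_rate,
    the true rate can be recovered as (p_obs - 1/6) / (4/6)."""
    rng = random.Random(seed)
    yes_count = 0
    for _ in range(n):
        roll = rng.randint(1, 6)
        if roll == 6:
            answer = True
        elif roll == 1:
            answer = False
        else:
            answer = rng.random() < true_rate  # truthful answer
        yes_count += answer
    p_obs = yes_count / n
    return (p_obs - 1 / 6) / (4 / 6)

print(randomized_response_estimate(true_rate=0.30, n=100_000))  # close to 0.30
```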
comment by endoself · 2011-09-26T01:24:08.041Z · LW(p) · GW(p)
I recently watched the talk referenced in this comment and the speaker mentioned an ongoing effort to find out which biases people correct for in their models of others and which they do not, which sounds like a promising area of research.