What's wrong with psychology, anyway?
post by Anatoly_Vorobey · 2012-04-28T23:36:53.833Z · LW · GW · Legacy · 15 comments
Came across this article, published in 1991 but hardly dated:
David T. Lykken, What's Wrong With Psychology, Anyway? (PDF, 39 pages)
Anyone who's interested in psychology as a science might, I think, find it fascinating. Lots of stuff there about rationality-related failures of academic psychology. Several wonderful anecdotes, of which I'll quote one in full that had me laughing out loud --
In the 1940s and ’50s, there was a torrent of interest and research surrounding the debate between the S-R [Stimulus-Response] reinforcement theorists at Yale and Iowa City and the S-S [Stimulus-Stimulus] expectancy theorists headquartered at Berkeley. As is usual in these affairs, the two sides produced not only differing theoretical interpretations but also different empirical findings from their rat laboratories, differences that ultimately led Marshall Jones to wonder if the researchers in Iowa and California might not be working with genetically different animals. Jones obtained samples of rats from the two colonies and tested them in the simple runway situation. Sure enough, when running time was plotted against trial number, the two strains showed little overlap in performance. The Iowa rats put their heads down and streaked for the goal box, while the Berkeley animals dawdled, retraced, investigated, appeared to be making “cognitive maps” just as Tolman always said. But by 1965 the torrent of interest in latent learning had become a backwater and Jones's paper was published obscurely (Jones & Fennel, 1965).
(I came across the reference to the article in the HN discussion of a project, of independent interest, that aims to replicate a sample of articles from three reputable psychology journals in a given year.)
15 comments
comment by Normal_Anomaly · 2012-04-29T17:02:25.622Z · LW(p) · GW(p)
I am confused by part of this. Under "Most Grant Applications are Bad," the primary piece of evidence for that assertion is that only about 25% of grant applications get funded, therefore 75% must be bad. But the 25% figure could reflect the government splitting its pool of money among all the "good" applications, or it could simply mean there's only enough money to fund 25% of the projects, regardless of their quality. If the government decided to double its budget for psych research, would it then appear that only 50% of grant proposals are bad?
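To make the confound concrete, here is a toy calculation (a minimal sketch; all numbers are hypothetical) showing that the funding rate is fixed by the budget and, by itself, says nothing about how many proposals are good:

```python
# Toy model: the observed funding rate depends only on the budget,
# not on proposal quality. All numbers here are hypothetical.
n_applications = 400
n_good = 240          # suppose 60% of proposals are genuinely good
budget_slots = 100    # but the budget only covers 100 grants

funding_rate = budget_slots / n_applications
print(f"funded: {funding_rate:.0%}")  # 25%, whatever n_good happens to be

# Double the budget, and the apparently "good" fraction doubles with it:
funding_rate = (2 * budget_slots) / n_applications
print(f"funded after doubling: {funding_rate:.0%}")  # 50%
```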
Replies from: Swimmy, John_Maxwell_IV
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-04-30T01:27:39.057Z · LW(p) · GW(p)
And can we even trust the government to choose the best grant proposals?
comment by rejuvyesh · 2012-04-29T06:49:37.899Z · LW(p) · GW(p)
Same here. I had to downgrade all the beliefs I'd based on hard psychology textbooks. I was hardly aware of how prevalent scientific fraud is in the field of psychology. On a side note, what about the psychological studies oft quoted in the Sequences? How much can they be trusted?
Replies from: TimS
comment by see · 2012-05-01T23:46:24.781Z · LW(p) · GW(p)
Cutting away everything else, the important symptom given in the paper is I.F.: if you're not doing experiments that replicate, then you aren't finding out anything. All the other symptoms are basically irrelevant, or consequences of I.F. And the central cause of I.F. seems to be given in III.B.: apparently the only standard for psychological research is that you can mathematically torture at least one correlation of p < .05 out of the data.
Well, if you're measuring enough factors, and are willing to go through enough orders of analysis, you can almost certainly find a correlation that is "significant". And finding it won't actually teach you anything.
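A minimal simulation (assuming numpy and scipy are available; the sample sizes are illustrative) makes the point: measure enough unrelated factors and some pairwise correlations will clear p < .05 on pure noise.

```python
# Simulate a study measuring 20 unrelated "factors" on 30 subjects,
# then test every pairwise correlation at the conventional p < .05 level.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects, n_factors = 30, 20
data = rng.standard_normal((n_subjects, n_factors))  # pure noise, no real effects

hits = []
for i in range(n_factors):
    for j in range(i + 1, n_factors):
        r, p = pearsonr(data[:, i], data[:, j])
        if p < 0.05:
            hits.append((i, j, r))

n_pairs = n_factors * (n_factors - 1) // 2  # 190 pairwise tests in all
print(f"{len(hits)} of {n_pairs} noise-only correlations reached p < .05")
# At a 5% false-positive rate, roughly 9-10 spurious "findings" are expected.
```

Which pairs come out "significant" changes with the random seed, which is exactly the point: such a finding carries no information.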
So, assuming the paper is correct on those points, the problem with psychology-as-a-science is that it collects random noise and assigns meaning to it, and teaches its students to do the same.
comment by John_Maxwell (John_Maxwell_IV) · 2012-04-30T01:32:15.277Z · LW(p) · GW(p)
Within the narrow circles of our particular fields of interest, many of us learn that there are certain investigators who stand out from the herd because their findings can be trusted.
I wonder if it would be possible for psychology to "bootstrap itself" by studying these folks who produce trustworthy findings and figuring out what they're doing right.
On a related note, have there been any psychological studies of bias in researchers who study biases for their careers? I know there have been attempts at "debiasing" interventions, which have had only limited success at knocking out well-known biases. But if even the researchers who study biases fall prey to them, then things really are hopeless.
Coming up with interesting ideas for psychology experiments seems pretty easy. Maybe the smart/intellectually curious folk are going into hard science?
comment by beoShaffer · 2012-04-29T06:33:46.438Z · LW(p) · GW(p)
The Hacker News link is broken.
Replies from: Anatoly_Vorobey
↑ comment by Anatoly_Vorobey · 2012-04-29T07:07:27.203Z · LW(p) · GW(p)
Fixed, thank you.
comment by RomeoStevens · 2012-04-29T09:11:52.279Z · LW(p) · GW(p)
AFAIK psychology doesn't hold the empirical findings of cogsci and econ in any particularly favorable light, so I ignore it.
Replies from: Swimmy
↑ comment by Swimmy · 2012-04-30T01:43:22.886Z · LW(p) · GW(p)
Do you mean the average psychologist, the average elite academic psychologist, or what? Experimental econ is psychology, and lots of psychologists study it. I have no idea what the average psychologist thinks about supply and demand or eye tracking, though.
Replies from: RomeoStevens
↑ comment by RomeoStevens · 2012-04-30T02:12:29.006Z · LW(p) · GW(p)
Spoke with several average psychologists, became concerned, then read widely cited psychology papers. I didn't see any evidence of high-quality analysis. All of it struck me as a severe case of déformation professionnelle.