Statistical error in half of neuroscience papers
post by Paul Crowley (ciphergoth) · 2011-09-09T23:07:33.743Z
"The statistical error that just keeps on coming", Ben Goldacre, The Guardian, Friday 9 September 2011 20.59 BST
We all like to laugh at quacks when they misuse basic statistics. But what if academics, en masse, deploy errors that are equally foolish? This week Sander Nieuwenhuis and colleagues publish a mighty torpedo in the journal Nature Neuroscience.
They've identified one direct, stark statistical error so widespread it appears in about half of all the published papers surveyed from the academic neuroscience research literature.
[...]
How often? Nieuwenhuis looked at 513 papers published in five prestigious neuroscience journals over two years. In half the 157 studies where this error could have been made, it was. They broadened their search to 120 cellular and molecular articles in Nature Neuroscience, during 2009 and 2010: they found 25 studies committing this fallacy, and not one single paper analysed differences in effect sizes correctly.

These errors are appearing throughout the most prestigious journals for the field of neuroscience.
Update: Erroneous analyses of interactions in neuroscience: a problem of significance (PDF)
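The fallacy is comparing significance levels instead of testing the difference between effects directly. Here is a minimal sketch with made-up numbers (the effect sizes and standard errors below are illustrative, not from the paper): effect A clears p < 0.05 and effect B doesn't, yet the difference between them is nowhere near significant.

```python
# Sketch of the fallacy Nieuwenhuis et al. describe, with invented numbers.
import numpy as np
from scipy import stats

effect_a, se_a = 25.0, 10.0   # hypothetical effect in condition A
effect_b, se_b = 10.0, 10.0   # hypothetical effect in condition B

# The erroneous approach: test each effect against zero separately.
p_a = 2 * stats.norm.sf(abs(effect_a / se_a))  # ~0.012, "significant"
p_b = 2 * stats.norm.sf(abs(effect_b / se_b))  # ~0.32, "not significant"

# The correct approach: test the *difference* between the two effects.
diff = effect_a - effect_b
se_diff = np.hypot(se_a, se_b)                   # sqrt(se_a^2 + se_b^2)
p_diff = 2 * stats.norm.sf(abs(diff / se_diff))  # ~0.29: no evidence the
                                                 # two effects differ
print(f"p(A) = {p_a:.3f}, p(B) = {p_b:.3f}, p(A - B) = {p_diff:.3f}")
```

"Significant" and "not significant" is not the same as "significantly different": here the erroneous reading declares a difference between A and B that the direct test does not support.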
7 comments
comment by lessdazed · 2011-09-09T23:58:49.124Z
This provides an excellent way for readers to infer the competence of the experimenter.
What surprises me is that the abstract doesn't mention how many of the incorrectly analysed papers had a statistically significant result that depended on making the error. That would give us some information about how much of this is due to fraud. If all of the incorrect papers depended on the misinterpretation to have publishable p values, that would be very disturbing.
comment by satt · 2011-09-10T02:51:36.412Z
Andrew Gelman has been quite rightly beating this drum for a while.
comment by BillyOblivion · 2011-09-11T08:18:48.525Z
I suspect that if you were to offer 100 dollars for every statistical error found in published scientific and medical papers, a lot of stats majors could get their student loans paid off.
comment by falenas108 · 2011-09-10T15:24:56.749Z
A coworker in my lab had this problem: she was trying to say which of several related measures changed the most, and refused to listen when I said she couldn't support that claim statistically.
Incidentally, she had no formal training in statistics (or her training was so long ago that she didn't remember the connection between standard deviation and variance).
comment by RobertLumley · 2011-09-10T00:21:08.868Z
Well, as purely anecdotal evidence, the (neuroscience) lab I was working in this summer analyzed our data correctly (with regard to this). Although, to be fair, I didn't notice it until my PI pointed it out to me...
↑ comment by DanielLC · 2011-09-10T05:48:16.530Z
You didn't notice that your data was analyzed correctly?
↑ comment by RobertLumley · 2011-09-10T13:18:22.286Z
I was trying to say more than we actually could say. I was the one analyzing the data.
We were studying the growth of neurons and patterns of gene regulation in the presence of various inhibitors of the depolarization-regulation mechanism. Say we had inhibitors A and B.
We would have done expression profiles for cultures like these:

- 5 mM KCl (physiological conditions)
- 25 mM KCl (depolarizing conditions)
- 25 mM KCl + inhibitor A
- 25 mM KCl + inhibitor B
- 25 mM KCl + inhibitors A and B
I wanted to make comparisons between the A+B 25 mM cultures and the plain 25 mM culture (and the 5 mM one), but you can't do that. You can only compare A+B to its controls: A alone and B alone.
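For what it's worth, a sketch of the comparison that is licensed here, on simulated data (the expression values, effect sizes, and the n of 8 cultures per condition below are all invented for illustration): a two-way ANOVA among the 25 mM conditions, where the A:B interaction term asks whether the combined inhibitors do anything beyond what A alone and B alone predict.

```python
# Hedged sketch: test the A x B interaction instead of eyeballing which
# individual conditions are "significant". All data below are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 8  # hypothetical cultures per condition

rows = []
for a in (0, 1):        # inhibitor A absent/present
    for b in (0, 1):    # inhibitor B absent/present
        # Invented ground truth: each inhibitor alone lowers expression
        # by 2 units, with no extra effect in combination (no interaction).
        mean = 10.0 - 2.0 * a - 2.0 * b
        for _ in range(n):
            rows.append({"A": a, "B": b,
                         "expression": rng.normal(mean, 1.0)})
df = pd.DataFrame(rows)

# Two-way ANOVA: the C(A):C(B) term is the test of whether A+B together
# differs from what the A-alone and B-alone effects would predict.
model = smf.ols("expression ~ C(A) * C(B)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```

If the interaction term isn't significant, you have no statistical license to claim the combined treatment behaves differently from its single-inhibitor controls, however the individual p values happen to fall.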