She Blinded Me With Science
post by Jonathan_Graehl · 2009-08-04T19:10:49.712Z · LW · GW · Legacy · 38 comments
Scrutinize claims of scientific fact in support of opinion journalism.
Even with honest intent, it's difficult to apply science correctly, and it's rare that dishonest uses are punished. Citing a scientific result gives an easy patina of authority, which is rarely scratched by a casual reader. Without actually lying, the arguer may select from dozens of studies only the few with the strongest effect in their favor, when the overall body of evidence may point at no effect or even in the opposite direction. The reader only sees "statistically significant evidence for X". In some fields, the majority of published studies claim unjustified significance in order to gain publication, inciting these abuses.
Here are two recent examples:
Women are often better communicators because their brains are more networked for language. The majority of women are better at "mind-reading" than most men; they can read the emotions written on people's faces more quickly and easily, a talent jump-started by the vast swaths of neural real estate dedicated to processing emotions in the female brain.
- Susan Pinker, a psychologist, in the NYT's "Do Women Make Better Bosses?"
Twin studies and adoptive studies show that the overwhelming determinant of your weight is not your willpower; it's your genes. The heritability of weight is between .75 and .85. The heritability of height is between .9 and .95. And the older you are, the more heritable weight is.
- Megan McArdle, linked from the LW article The Obesity Myth
Mike, a biologist, gives an exasperated explanation of what heritability actually means:
Quantitative geneticists use [heritability] to calculate the changes to be expected from artificial or natural selection in a statistically steady environment. It says nothing about how much the over-all level of the trait is under genetic control, and it says nothing about how much the trait can change under environmental interventions.
Susan Pinker's female-boss-brain cheerleading is refuted by Gabriel Arana. A specific scientific claim Pinker makes ("the thicker corpus callosum connecting women's two hemispheres provides a swifter superhighway for processing social messages") is contradicted by a meta-analysis (Sex Differences in the Human Corpus Callosum: Myth or Reality?), and without that claim, you are left with only a just-so evolutionary-psychology argument.
The Bishop and Wahlsten meta-analysis claims that the only consistent finding is for slightly larger average whole brain size and a very slightly larger corpus callosum in adult males. Here are some highlights:
Given that the CC interconnects so many functionally different regions of cerebral cortex, there is no reason to believe that a small difference in overall CC size will pertain to any specific psychological construct. Total absence of the corpus callosum tends to be associated with a ten-point or greater reduction in full-scale IQ, but more specific functional differences from IQ-matched controls are difficult to identify.
In one recent study, a modest correlation between cerebrum size and IQ within a sex was detected. At the same time, males and females differ substantially in brain size but not IQ. There could easily be some third factor or array of processes that acts to increase both brain size and IQ score for people of the same sex, even though brain size per se does not mediate the effect of the other factor on IQ.
The journal Science has refused to publish failures to replicate the 1982 claims of de Lacoste-Utamsing and Holloway (Byne, personal communication).
Obviously, if journals won't publish negative results, then this weakens the effective statistical significance of the positive results we do read. The authors don't consider publication bias a significant factor for this particular topic (the complaint above isn't typical).
When many small-scale studies of small effects are published, the chances are good that a few will report a statistically significant sex difference. ... One of our local newspapers has indeed printed claims promulgated over wire services about new studies finding a sex difference in the corpus callosum but has yet to print a word about contrary findings which, as we have shown, far outnumber the statistically significant differences.
This effect is especially notable in media coverage of health and diet research.
The gold-standard in the medical literature is a cumulative meta-analysis conducted using the raw data. We urge investigators to make their raw data or, better yet, the actual tracings available for cumulative meta-analysis. We attempted to collect the raw data from studies of sex differences in the CC cited in an earlier version of this paper by writing to the authors. The level of response was astoundingly poor. In several studies that used MRI, the authors even stated that the original observations were no longer available.
This is disturbing. I suspect that many authors are hesitant to subject themselves to the sort of scrutiny they ought to welcome.
By convention, we are taught that the null hypothesis of no sex difference should be rejected if the probability of erroneously rejecting the null on the basis of a set of data is 5% or less. If 10 independent measures are analysed in one study, each with the α = 0.05 criterion, the probability of finding at least one ‘significant’ sex difference by chance alone is 1 − (1 − 0.05)^10 = 0.40 or 40%. Consequently, when J tests involving the same object, e.g. the corpus callosum, are done in one study, the criterion for significance of each test might better be adjusted to α/J, the Dunn or Bonferroni criterion that is described in many textbooks. All but two of 49 studies of the CC adopted α = 0.05 or even 0.10, and for 45 of these studies, an average of 10.2 measures were assessed with independent tests.
This is either rank incompetence or, even worse, yielding to the temptation to get some positive result out of the costly data collection.
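To make the arithmetic concrete, here is a minimal sketch (plain Python, using the numbers from the quote above) of how the 40% figure arises and what the Bonferroni-corrected threshold looks like:

```python
# Minimal sketch: family-wise false-positive risk from multiple comparisons,
# and the Bonferroni (Dunn) correction the authors recommend.
alpha = 0.05   # per-test significance criterion
J = 10         # roughly the average number of CC measures tested per study

# Chance of at least one spurious "significant" sex difference
# when J independent tests are each run at the uncorrected threshold.
p_any_false_positive = 1 - (1 - alpha) ** J
print(f"P(at least one false positive) = {p_any_false_positive:.2f}")  # ~0.40

# Bonferroni correction: test each measure at alpha / J instead.
bonferroni_alpha = alpha / J
print(f"Per-test threshold after correction = {bonferroni_alpha:.3f}")  # 0.005
```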
38 comments
Comments sorted by top scores.
comment by Douglas_Knight · 2009-08-04T20:47:23.539Z · LW(p) · GW(p)
I suspect that many authors are hesitant to subject themselves to the sort of scrutiny they ought to welcome.
Normative language ("ought") is not helpful here. Journals that nominally require publication of data or calculations don't enforce it, either.
One way to deal with selection bias and fraud that I have occasionally seen, and only in economics and parapsychology ("the control group for science"), is to compare the effect size to the study size. If it's a real effect, it will not depend on the study size. But if it's fake, it will always just barely be statistically significant and thus it will decline with study size.
This kind of meta-analysis comes from not trusting one's peers. That is rude, hence rare. But it's a lot more useful than pooling the data, the usual kind of meta-analysis.
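(For concreteness, here is a rough sketch of the kind of check being described, with made-up study numbers rather than anything from the literature discussed here:)

```python
# Rough sketch: does the reported effect size shrink as studies get bigger?
# A real effect should be roughly flat across study sizes; a selection or
# p-hacking artifact tends to hover at "just barely significant" and
# therefore declines with N.  All numbers below are hypothetical.
import numpy as np

sample_sizes = np.array([20, 35, 50, 80, 120, 200, 400])              # hypothetical N per study
effect_sizes = np.array([0.55, 0.48, 0.35, 0.30, 0.20, 0.12, 0.05])   # hypothetical Cohen's d

# Correlate effect size with the approximate standard error (~1/sqrt(N)).
# A strong positive correlation is the classic small-study signature.
standard_error = 1.0 / np.sqrt(sample_sizes)
r = np.corrcoef(effect_sizes, standard_error)[0, 1]
print(f"correlation(effect size, standard error) = {r:.2f}")  # near +1 here: suspicious
```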
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-04T22:04:13.380Z · LW(p) · GW(p)
The obvious solution, IMO, is to have journals approve study designs for publication in advance, including all statistical tools to be used; and then you do the study and run the preselected analysis and publish the results, regardless of whether positive or negative.
But just like many other obvious improvements we can all think of to the process of science, this one will not be carried out.
parapsychology ("the control group for science")
Did you get that off me? I was planning a post on it at some point or another.
↑ comment by bentarm · 2009-08-05T13:46:31.031Z · LW(p) · GW(p)
That's the obvious brute force solution, but a possibly more elegant route is just to have an international trials register. This suggestion has been around for a while, and should be significantly less costly (and controversial) than the pre-commit to publishing route while still giving some useful tools for checking on things like publication bias, double publication, etc.
↑ comment by Douglas_Knight · 2009-08-05T05:02:40.571Z · LW(p) · GW(p)
But just like many other obvious improvements we can all think of to the process of science, this one will not be carried out.
To a certain extent, it is being carried out for drug studies, but it requires centralization. At least, various central authorities have promised to require some pre-registration, but they may fail, as in the data availability story. Individuals can do meta-analyses that are skeptical of the original publications, and they do, on special occasions.
I think I've heard the line about parapsychology as a joke in a number of places, but I heard it seriously from Vassar.
↑ comment by Jonathan_Graehl · 2009-08-04T22:52:57.782Z · LW(p) · GW(p)
have journals approve study designs for publication in advance, including all statistical tools to be used; and then you do the study and run the preselected analysis and publish the results, regardless of whether positive or negative
Brilliant.
Maybe a notary service for such plans would become popular from the ground up. Of course, to get voluntary adoption, you'd have to implement a guarantee of secrecy for a desired time period (even though the interests of science would be best served by early publicity, those scientists want their priority).
Let's see, just the right protocol for signing/encrypting, and ... never mind, it will never be used until some high status scientists want to show off ;)
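(A minimal sketch of the commitment idea, assuming nothing fancier than a hash function; this is not a description of any existing notary service:)

```python
# Sketch of committing to an analysis plan without revealing it: publish only
# a hash digest now, reveal the plan later.  This preserves secrecy while
# still letting the author prove priority and prove the plan wasn't changed
# after seeing the data.
import hashlib
import secrets

plan = b"Study plan: primary outcome, sample size, exact statistical tests..."
nonce = secrets.token_bytes(16)          # keeps short or guessable plans secret

commitment = hashlib.sha256(nonce + plan).hexdigest()
print("publish this digest now:", commitment)

# Later, reveal (nonce, plan); anyone can verify the digest matches.
assert hashlib.sha256(nonce + plan).hexdigest() == commitment
```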
↑ comment by CronoDAS · 2009-08-04T21:22:08.716Z · LW(p) · GW(p)
Parapsychology: The control group for science.
Excellent quote. May I steal it?
↑ comment by Tyrrell_McAllister · 2009-08-04T21:26:28.566Z · LW(p) · GW(p)
It's too good to ask permission for. I'll wait to get forgiveness ;).
comment by teageegeepea · 2009-08-04T20:04:59.398Z · LW(p) · GW(p)
Michelle Malkin is not Megan McArdle. All of their names begin with the letter "M" though.
McArdle also presents a false dichotomy between willpower and genes. Your genes presumably have a lot to do with your willpower (which is not to say that it's highly heritable, though in fact it may be). Perhaps she has not fully chucked the "ghost in the machine" and thinks willpower is the real, internal you as opposed to that genetic stuff from your parents.
↑ comment by Jonathan_Graehl · 2009-08-04T20:19:33.208Z · LW(p) · GW(p)
Corrected. Whoops.
comment by cousin_it · 2009-08-04T21:18:18.634Z · LW(p) · GW(p)
Not to defend dishonest interpretations of science here, but... "heritability" sounds like an unfortunate choice of word for the concept described. It invites inadvertent misrepresentations.
I'm reminded of an old OB comment by Anatoly Vorobey that made the reasonable point that Kolmogorov complexity captures the human notion of "complexity" very lousily at best. (WTF, the whole universe is less complex than one planet within it?) So too it seems with "heritability". People clearly want a number that would describe "how much the over-all level of the trait is under genetic control, and... how much the trait can change under environmental interventions" - why can't the biologists just give them that?
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-04T22:02:15.813Z · LW(p) · GW(p)
Because there is no such number. More variance in the environment will mean "less heritability".
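(A toy simulation of this point, with made-up numbers: hold the genetic variance fixed and watch the heritability ratio fall as environmental variance grows.)

```python
# Toy simulation: with the same genes, the heritability statistic falls as
# environmental variance rises, because heritability is a ratio of variances,
# not a fixed property of the trait.  Illustrative numbers only.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
genetic_value = rng.normal(0.0, 1.0, n)           # Var(G) held fixed at 1

for env_sd in (0.5, 1.0, 2.0):
    environment = rng.normal(0.0, env_sd, n)
    phenotype = genetic_value + environment        # simple additive model
    heritability = genetic_value.var() / phenotype.var()   # Var(G) / Var(P)
    print(f"env sd = {env_sd}: heritability ~ {heritability:.2f}")
# Output: roughly 0.80, 0.50, 0.20 -- same genes, different "heritability".
```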
↑ comment by cousin_it · 2009-08-05T08:54:43.209Z · LW(p) · GW(p)
Fix: "...under the strongest environmental interventions known today".
↑ comment by Kaj_Sotala · 2009-08-05T12:46:02.546Z · LW(p) · GW(p)
Have fun trying to define what is accepted as an "environmental intervention" and what isn't.
(Getting your head smashed in with a hammer will end up reducing your body weight rather quickly, so going by your suggestion obesity is 0% heritable.)
↑ comment by cousin_it · 2009-08-05T12:56:24.665Z · LW(p) · GW(p)
How about "malleability"? Obesity is malleable either way (overeating, liposuction). IQ is highly malleable downwards (hammer to head), not so much upwards (a year of schooling gives +2 points). Eye color, 0% malleable. Maybe take a derivative in effort/time/money to change a trait in the desired direction.
This will be both more useful socially and vastly easier to estimate than "heritability", if we accept Shalizi's proof that "heritability" is almost impossible to measure. By the way, the original post relies upon that proof.
comment by lavalamp · 2009-08-04T20:10:38.689Z · LW(p) · GW(p)
This would be easier to follow and more enjoyable to read if you stated what point you're trying to make, preferably before the large quotations.
↑ comment by Jonathan_Graehl · 2009-08-04T20:46:39.740Z · LW(p) · GW(p)
You're right. I've made an attempt at editing the intro in that direction, which may be somewhat lame, as I had no point to make originally :)
I originally intended only to share the two examples without commentary, but as I examined the meta-analysis, it occurred to me that not everyone would have access to the full text.
comment by Kaj_Sotala · 2009-08-05T12:58:45.317Z · LW(p) · GW(p)
Note that McArdle responds in the comments:
Reread the post. I did not say that environment wasn't interacting with genes--indeed, that was the entire purpose of the height comparison. I was responding to people who claim that individual outcomes can't be rooted in genetics because after all, there were no fat people in Auschwitz, plus we're all getting fatter. I understand heritability quite well, thanks--or at least, I already knew everything you wrote before you wrote it, and nothing in my post contradicts it.
(Not saying anything about whether her defence is right or wrong, just pointing it out.)
↑ comment by Jonathan_Graehl · 2009-08-06T01:21:44.604Z · LW(p) · GW(p)
Interesting. I don't believe her. I think her purpose was to suggest that weight is nearly as immutable under changes in diet and exercise as height.
Well, average height is also increasing in the population. Does that mean that you could be as tall as me, if you weren't too lazy to grow?
Twin studies and adoptive studies show that the overwhelming determinant of your weight is not your willpower; it's your genes. The heritability of weight is between .75 and .85. The heritability of height is between .9 and .95.
On the other hand, I do think there's far more to appetite and obesity than willpower[1].
Are there people with genetics such that, had they been given a diabetes+obesity inducing diet as children, they would still be rail thin and fidgety, burning tons of calories without explicit exercise? I think there are.
But I expect interaction between environment and genes to be very high in obesity[2], so heritability can't be used on its own to draw that conclusion.
[1] "studies of monozygotic and dizygotic twins have unambiguously shown that there is a much greater resemblance in the degree of obesity between genetically identical monozygotic twins" - http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=1119832 - this is evidence for genetic variation in obesity with nearly constant environmental factors (clearly the availability of calories is a prerequisite for obesity).
[2] Extensive fictional evidence exists in the rotund mother who tries to fatten up her offspring when they return for the holidays: "put some meat on your bones!". Seriously,
comment by taw · 2009-08-04T20:46:19.295Z · LW(p) · GW(p)
This misuse of statistical significance is a standard practice in science. At least they have the decency to tell you about insignificant results, as opposed to sweeping them under the carpet.
This pretend significance, while mathematically false, isn't all that bad. Given the bias journals have against publishing research in support of null hypotheses, very little would otherwise get published - or, more likely, other tricks just as bad as this one would be invented. And no matter how much you abused statistical significance by doing multiple independent tests, if someone repeats your study and finds the same result, it is honestly statistically significant then.
comment by Jonathan_Graehl · 2009-08-06T01:25:37.137Z · LW(p) · GW(p)
An article in Nature appears to contradict McArdle's claim that "the older you are, the more heritable weight is":
These findings suggest that adult body size, shape, and composition are highly heritable in both women and men, although a decreasing tendency is seen with advancing age.
I don't think this is McArdle's fault; there happen to be contradictory findings which have probably reported excessive confidence. An increase in heritability means that the genes really expressed themselves more strongly, or variation in environment decreased (or both, obviously). Looking at the reported research myself, I can only think "it's not safe to say whether age will result in more (or less) similar weights amongst genetically similar individuals".
comment by JamesAndrix · 2009-08-05T06:07:23.172Z · LW(p) · GW(p)
This makes me wonder how much science will advance when it can be done by amateurs with better policies. (scrutiny)
Amateur astronomers make discoveries because telescopes are cheap, and findings can be easily corroborated online.
I'm sure people will find interesting things with cheap ultrasounds.
↑ comment by Vladimir_Nesov · 2009-08-05T10:31:43.127Z · LW(p) · GW(p)
Why isn't there more amateur computer science and math then?
↑ comment by JamesAndrix · 2009-08-05T17:47:23.858Z · LW(p) · GW(p)
I think there's plenty of it. There are people making their own rendering engines, playing with game AIs, experimenting with novel modes of human interaction (Wiimotes, multitouch, iPhone, OpenCV), evolving compressed paintings in JavaScript, and working on distributed/parallelized computing.
It's not all formal but a lot of it is open, and it occasionally explodes into something useful.
↑ comment by Vladimir_Nesov · 2009-08-05T18:39:08.059Z · LW(p) · GW(p)
But people who are good at programming/research have a tendency to get hired to do it professionally, so there is little point in the status of productive amateur (in the large scheme of things), except, perhaps, easier entry to these fields.
comment by PhilGoetz · 2009-08-05T00:02:15.975Z · LW(p) · GW(p)
Quantitative geneticists use [heritability] to calculate the changes to be expected from artificial or natural selection in a statistically steady environment. It says nothing about how much the over-all level of the trait is under genetic control, and it says nothing about how much the trait can change under environmental interventions.
I don't think that's right. The term "heritability" is used in twin studies, which do not involve a steady environment, and which are all about how much the trait is under genetic control.
↑ comment by barrkel · 2009-08-05T01:29:40.106Z · LW(p) · GW(p)
Have you actually read the linked-to article? Heritability != genetic control. The textbook example:
The textbook example is that (essentially) all of the variance in the number of eyes, hearts, hands, kidneys, heads, etc. people have is environmental. (There are very, very few mutations which alter how many eyes people have, because those are strongly selected against, but people do lose eyes to environmental causes, such as accident, disease, torture, etc.) The heritability of these numbers is about as close to zero as possible, but the genetic control of them is about as absolute as possible.
That text is actually a quote from here, and that article is even more interesting and explicit on this point.
↑ comment by PhilGoetz · 2009-08-05T04:05:15.914Z · LW(p) · GW(p)
Okay, I'm reading the article now. I am no expert in this area, but it seems to just be wrong.
First, it is patently false that "heritability says nothing about how much the over-all level of the trait is under genetic control." Heritability is defined in a way that is designed to tell you how much of the trait is under genetic control. That's its purpose. It's not a perfect measure, but it's wrong to say that it tells you nothing about what it's designed to tell you something about.
I expect the textbook example of the heritability of the number of arms being misleading is a textbook example of when heritability measurements go wrong, not of what heritability is supposed to measure.
The author's argument is that heritability is variance associated with different genotypes over total variance; all members of the population have different genes; therefore, everything has 100% heritability. Furthermore, the author goes on to say, there are interactions between genetics and environment, and other factors that are correlated with genetics, and so your heritability measurement isn't meaningful anyway.
This is wrong, for several reasons:
It would require psychologists to sequence the DNA of their subjects.
If it were correct, psychologists would eventually notice that everything had 100% heritability.
Psychologists design experiments measuring heritability so that some pairs in the population share more genes than other pairs.
Psychologists design experiments to try to control for those other factors correlated with genetics. If they don't, that's a design flaw.
I don't think the author is really saying that people are misunderstanding the technical definition of 'heritability'. He is saying that all of the studies of IQ have been poorly designed, and so didn't measure actual heritability.
The web page linked to seems to be politically-motivated, to show that IQ is not genetic. I also note that I read half of the book he refers to, which was written in response to The Bell Curve, and as science it was a lousy book. My recollection is that it was long on moralizing and attempts to create associations between The Bell Curve and Bad Things; but was not good at finding errors in the book it condemned so vigorously. It was also motivated by the same politics. It reminded me of what Einstein said when 30 Nazi scientists wrote a book against Relativity: "If they had been right, it would have taken only one scientist."
Godwin's Law! I win!
I think I can even call "large group of eminent scientists write a politically-motivated but scientifically weak book refuting another book" a trope, since the same thing happened with the "Against Sociobiology" letter of Gould etc.
↑ comment by PhilGoetz · 2009-08-05T03:37:42.144Z · LW(p) · GW(p)
I don't need to read the linked-to article, as I've read other articles using the term "heritability".
Wikipedia says: "In genetics, Heritability is the proportion of phenotypic variation in a population that is attributable to genetic variation among individuals." It defines it as
heritability (h²) = variance due to genes / total variance in the population
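(For concreteness, here is a minimal sketch of how twin studies turn that definition into a number, via Falconer's classic estimator; the twin correlations below are hypothetical, not drawn from any study cited in this thread:)

```python
# Minimal sketch of a twin-study heritability estimate: Falconer's formula,
# h^2 = 2 * (r_MZ - r_DZ), where r_MZ and r_DZ are trait correlations for
# identical and fraternal twin pairs.  Numbers are made up for illustration.
def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Narrow-sense heritability estimate from twin-pair correlations."""
    return 2.0 * (r_mz - r_dz)

r_mz = 0.74   # hypothetical correlation between identical twins
r_dz = 0.36   # hypothetical correlation between fraternal twins
print(f"h^2 ~ {falconer_heritability(r_mz, r_dz):.2f}")   # ~0.76
```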
↑ comment by Jonathan_Graehl · 2009-08-05T05:24:02.848Z · LW(p) · GW(p)
heritability : genetic control :: correlation : causation
↑ comment by PhilGoetz · 2009-08-05T05:41:05.382Z · LW(p) · GW(p)
That's a partly-valid analogy, because things other than genetic control can cause high heritability measurements. But I don't think it's a strong analogy. You can't say, "Well, I might have the interpretation in the completely wrong direction here; the phenotypes might be controlling the genes."
↑ comment by Jonathan_Graehl · 2009-08-05T18:43:23.157Z · LW(p) · GW(p)
Heritability is unary. Correlation is binary (I'm talking about arity, not domain). You shouldn't "wrong direction" on a unary relation, but I guess that's just another reason I shouldn't have put that in the form of an analogy. I see that you're taking "heritability(trait) X" as "causes(gene-variance,trait-variance) X". That's definitely not what I intended.
I certainly wasn't trying to convince anyone of "heritability is nonsense!". According to Wikipedia, it seems that narrow-sense heritability, with gene-environment correlation removed, would be a direct indication of "genetic variation causes phenotypic variation" (within a framework of simple linear combination of each gene, and environment). I don't know how to tell if someone has actually obtained this number properly, though.
comment by timtyler · 2009-08-04T22:46:29.820Z · LW(p) · GW(p)
I preferred the examples to their critiques.
↑ comment by Jonathan_Graehl · 2009-08-04T22:55:26.091Z · LW(p) · GW(p)
The critiques are just two unrelated complaints I stumbled upon today. They're not exceptional. But I do think the two examples of science in service of opinion journalism deserve criticism.