The null model of science

post by Johnicholas · 2011-03-26T13:53:12.303Z · LW · GW · Legacy · 17 comments

Jonah Lehrer wrote about the (surprising?) power of publication bias.

http://m.newyorker.com/reporting/2010/12/13/101213fa_fact_lehrer?currentPage=all

Cosma Shalizi (I think) said something, or pointed to something, about the null model of science - what science would look like if there were no actual effects, just statistical anomalies that look good at first. I can't find the reference, though.

 


17 comments

Comments sorted by top scores.

comment by cousin_it · 2011-03-26T14:58:15.959Z · LW(p) · GW(p)

Here's the Shalizi link: The Neutral Model of Inquiry. Good stuff, I remember enjoying it a lot. Choice quote: "...the first published p-value for any phenomenon is uniformly distributed between 0 and 0.05."
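Shalizi's quip can be checked with a quick simulation (a hypothetical setup, not from his essay): run many studies of effects that are truly zero, and "publish" only the ones that clear p < 0.05. Since null p-values are uniform on [0, 1], the filtered ones are uniform on [0, 0.05].

```python
import math
import random

random.seed(42)

def null_p_value(n=30):
    # One study of a nonexistent effect: n draws from N(0, 1),
    # two-sided p-value of the sample mean (exact, since the mean is normal).
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    z = (sum(xs) / n) * math.sqrt(n)
    return math.erfc(abs(z) / math.sqrt(2))

# The literature: only "significant" null results get published.
published = [p for p in (null_p_value() for _ in range(20000)) if p < 0.05]

# About 5% of studies pass the filter, every one looks like a discovery,
# and their p-values are uniform on [0, 0.05] (mean ~0.025).
print(len(published), sum(published) / len(published))
```

Roughly 1,000 of the 20,000 null studies get published, all of them spurious, which is the "null model of science" the OP is asking about.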

Replies from: Johnicholas, AlephNeil
comment by Johnicholas · 2011-03-26T17:45:38.949Z · LW(p) · GW(p)

Thank you very much!

comment by AlephNeil · 2011-03-26T17:12:47.973Z · LW(p) · GW(p)

I love that he called it "The Neutral Model".

(To the OP: I think it's a reference to Motoo Kimura's theory that at the molecular level, most evolutionary change is neutral rather than adaptive.)

comment by NancyLebovitz · 2011-03-26T14:35:41.892Z · LW(p) · GW(p)

I wish I could vote this up several times because of, well, confirmation bias. It's seemed to me that evolutionary psych makes a lot of stew from very little meat, and it looks as though there's even less meat than I thought-- the article says that there's much less evidence of a female preference for male symmetry than was previously believed.

Meanwhile, I wonder if some of the fading of results isn't just publication bias but that biological details change faster than we realize. Drugs that work for schizophrenia might stop working as well because people are eating different additives or somesuch.

A simulation hypothesis is fun, of course-- we're being toyed with and/or the program is slightly unstable.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-03-26T14:43:34.228Z · LW(p) · GW(p)

the article says that there's much less evidence of a female preference for male symmetry than was previously believed.

I totally predicted that one. Hmph. Is there a discussion post somewhere for people to post predictions? Ideally it'd be near the top of Top, if people voted it up enough. I like the idea of prediction markets but they seem cumbersome and many sorts of predictions need to be made super precise before you can bet on them, even if they wouldn't have to be that precise to be socially acknowledged as sticking their necks out.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2011-03-26T15:04:48.005Z · LW(p) · GW(p)

I don't know of a best place on LW, but Predictionbook.com is a handy site for publishing your predictions. IIRC, gwern is an aficionado.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-03-26T15:09:40.211Z · LW(p) · GW(p)

Thanks! I'd wrongly assumed that all such sites were based on some sort of economic transaction.

Replies from: DanielLC
comment by DanielLC · 2011-03-26T21:42:32.037Z · LW(p) · GW(p)

I've read that prediction markets that use money tend to violate gambling laws.

comment by Dreaded_Anomaly · 2011-03-26T19:24:26.870Z · LW(p) · GW(p)

This article was previously discussed here.

comment by [deleted] · 2011-03-29T22:43:16.570Z · LW(p) · GW(p)

Is it possible that there is too much science today?

I mean, in the raw-numbers sense of number of professional scientists and number of papers published. You could, conceivably, increase the volume of "science" without increasing its accuracy. How do we know we're not doing that?

Replies from: Vladimir_M, drethelin
comment by Vladimir_M · 2011-03-30T00:01:01.322Z · LW(p) · GW(p)

You could, conceivably, increase the volume of "science" without increasing its accuracy. How do we know we're not doing that?

To me it seems pretty obvious that we are doing that, and have been for many decades. But I suppose spelling out an argument for this conclusion suitable for a general audience would require bridging some significant inferential distances.

comment by drethelin · 2011-03-29T23:43:17.643Z · LW(p) · GW(p)

I would say it's possible, but not in a way that's easily findable or fixable. Among all the scientific research projects that haven't found anything useful yet, how do you tell which will go nowhere and waste time and money, and which will lead to small but useful discoveries?

It's probably easier to usefully think about this in terms of specific fields rather than science in general. I could easily imagine that for example, there are way more people with anthropology degrees than useful anthropology going on.

comment by Eneasz · 2011-03-28T16:12:11.199Z · LW(p) · GW(p)

I'm a bit late - the dangers of not checking Less Wrong over the weekend. :/ But in rebuttal: The Decline Effect Is Stupid

Jonah Lehrer is the Decline Effect. ... The trouble for the Earth is he writes for The New Yorker. ... If they didn't, I, and those who are real scientists, wouldn't have to explain why the Decline Effect doesn't exist

Replies from: byrnema
comment by byrnema · 2011-03-29T22:48:27.998Z · LW(p) · GW(p)

I read that article on the Last Psychiatrist... the way he described the article and the way Johnicholas did, I never would have guessed they were the same.

Guess I need to read the original.

Replies from: byrnema
comment by byrnema · 2011-03-30T21:38:19.667Z · LW(p) · GW(p)

At my first reading, I agreed with Alone's interpretation in 'The decline effect is stupid'. The article seems to describe anti-science, spooky, the-world-is-"connected"-and-affected-by-our-perception metaphysics.

For example, this doesn't sound like it wants to describe publication bias:

It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology.

And certainly not this:

The next year, the size of the effect shrank another thirty per cent. When other labs repeated Schooler’s experiments, they got a similar spread of data, with a distinct downward trend. “This was profoundly frustrating,” he says. “It was as if nature gave me this great result and then tried to take it back.” In private, Schooler began referring to the problem as “cosmic habituation,” by analogy to the decrease in response that occurs when individuals habituate to particular stimuli.

[Consider, what work is "cosmic" doing in the last sentence?]

The article nods at scientific explanations, but then says they're not sufficient to explain what's going on. What is the article trying to imply? That something can be true at first, for a while, and then the truth value wears off? Because the scientist was getting too successful, the people were too confident, the cosmos was feeling weary of being consistent? This idea tugs familiar grooves -- it's the superstition we're all programmed with.

But the article is somewhat long, and as I meander through, I consider that perhaps it intends that there should be a scientific explanation for "the effect" after all. Maybe the language and supernatural insinuations within the article are playfully meant as bait to goad scientists into thinking about it and dissolving it. (If it reflects a "real" trend, what is the scientific explanation then?).

I appreciate other things that Dr. Lehrer has written -- he seems to have a scientific worldview through and through -- so this latter interpretation is the one I finally settle on.

comment by Normal_Anomaly · 2011-03-26T16:59:26.649Z · LW(p) · GW(p)

Good article, but scary.

One possible explanation is that for any given thing being investigated, there's some chance of an effect size initially larger than it should be, and some chance of it initially being smaller than it should be. If by chance the effect size starts out being too small, the investigation of the thing never takes off. If by chance the effect size starts out too large, a ton more studies are done and regression to the mean happens.
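That selection mechanism is easy to simulate (a toy setup with made-up numbers, not from the article): give every phenomenon the same modest true effect, measure it with noise, and only follow up on the studies whose first estimate looks striking. The selected initial estimates overshoot, and independent replications regress to the mean, producing a "decline" with no spooky metaphysics.

```python
import random

random.seed(1)

TRUE_EFFECT = 0.3   # the real effect size, identical for every phenomenon
NOISE = 0.2         # measurement noise per study
THRESHOLD = 0.5     # only estimates this striking attract follow-up studies

first, followup = [], []
for _ in range(100000):
    initial = random.gauss(TRUE_EFFECT, NOISE)
    if initial > THRESHOLD:
        first.append(initial)
        # An independent replication of the same true effect.
        followup.append(random.gauss(TRUE_EFFECT, NOISE))

# The selected initial estimates average well above 0.3;
# the replications average right around the true 0.3.
print(sum(first) / len(first), sum(followup) / len(followup))
```

The "effect" appears to shrink by about half between first report and replication, even though nothing about the underlying phenomenon changed.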

Alternately, it's possible that lots of initial studies have flawed methodology. Then, as more studies are done on a topic, the methodology becomes slowly more refined and the effect slowly goes away.