"Trials and Errors: Why Science Is Failing Us"

post by gwern · 2011-12-19T18:48:08.015Z · LW · GW · Legacy · 20 comments

Jonah Lehrer has up another of his contrarian science articles: "Trials and Errors: Why Science Is Failing Us".

Main topics: the failure of drugs in clinical trials, diminishing returns to pharmaceutical research, doctors over-treating, and the Humean distinction between causation and correlation, with some Ioannidis mixed throughout.

See also "Why epidemiology will not correct itself"


In completely unrelated news, Nick Bostrom is stepping down as IEET's Chairman of the Board.

20 comments

Comments sorted by top scores.

comment by Manfred · 2011-12-19T19:44:02.927Z · LW(p) · GW(p)

Huh, I was expecting Science to mean "science as a social institution," but he really is making the strong claim that science as a way of learning things is "failing us" because the human body is complicated. Where of course the problem is, "failing us relative to what?"

Replies from: FAWS, Pfft
comment by FAWS · 2011-12-19T20:03:35.264Z · LW(p) · GW(p)

Failing us relative to our expectations.

Replies from: Manfred, DanArmak
comment by Manfred · 2011-12-19T20:20:31.986Z · LW(p) · GW(p)

It's not particularly failing me relative to my expectations. And why does he use, say, the Pfizer executive's expectations as an example of something that science is failing by? "Our expectations" seems suspiciously similar to "all expectations ever." Or, more likely, "expectations the author thought it would be a good idea to have had of science when writing the article."

Replies from: FAWS
comment by FAWS · 2011-12-19T20:35:02.555Z · LW(p) · GW(p)

Well, most people seem to be surprised that the majority of medical science results (or at least a high percentage) turns out to be bogus.

Replies from: Manfred, None
comment by Manfred · 2011-12-19T23:26:09.889Z · LW(p) · GW(p)

see: social institution vs. way of learning things.

comment by [deleted] · 2011-12-19T21:20:13.248Z · LW(p) · GW(p)

the majority of medical science results (or at least a high percentage) turns out to be bogus.

I assume that you really mean "the majority of drug results (or at least a high percentage) turns out to be ineffective"? A claim that is still far from uncontroversial.

Edit: Change "drug" results to "epidemiology".

Replies from: FAWS
comment by FAWS · 2011-12-19T21:35:53.286Z · LW(p) · GW(p)

Drug results and correlation studies, both environmental and genetic, mostly. Which should be high enough volume that the "at least a high percentage" part should be true even if you add more reliable types of research, no? Or is medical science the wrong word for the category that includes both?

Replies from: None
comment by [deleted] · 2011-12-19T22:10:49.480Z · LW(p) · GW(p)

How much is a high percentage?

Or is medical science the wrong word for the category that includes both?

I do think so. A lot of pre-clinical medical science is more about understanding specific mechanisms, not looking at correlations and mapping out risk factors.

Drug results and correlation studies, both environmental and genetic, mostly.

Do you have some data? I do agree that it's hard to actually learn something solid from epidemiology; biology is complicated, and factors do not usually add in any intuitive way. But then there are categories where epidemiology is invaluable: take, for example, people with hereditary colon cancer, where the majority (with a specific set of mutations) get colon cancer. But you might be right that a lot is not really useful information...

Replies from: FAWS, gwern, Kaj_Sotala
comment by FAWS · 2011-12-19T22:27:48.389Z · LW(p) · GW(p)

How much is a high percentage?

Let's say more than 20%.

I do think so. A lot of pre-clinical medical science is more about understanding specific mechanism, not looking at correlations and mapping out risk factors.

I didn't necessarily mean to exclude things like that, just to include both of the categories mentioned.

Do you have some data?

http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0020124
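(The linked paper's core argument is essentially a positive-predictive-value calculation. Here is a minimal sketch in Python, using hypothetical prior and power numbers rather than figures from the paper, of how a low prior probability of true hypotheses plus modest power makes most "significant" findings false:)

```python
# Positive predictive value (PPV) of a statistically significant finding,
# following the framing in Ioannidis (2005), "Why Most Published Research
# Findings Are False". All input numbers below are illustrative assumptions.

def ppv(prior, power, alpha=0.05):
    """Probability that a 'significant' result reflects a true relationship."""
    true_positives = power * prior          # real effects that reach significance
    false_positives = alpha * (1 - prior)   # null effects that reach significance anyway
    return true_positives / (true_positives + false_positives)

# If only 1 in 10 tested associations is real and studies have 50% power,
# barely over half of significant findings are true (~0.53); with a 1-in-50
# prior, only ~17% of "discoveries" are true.
print(ppv(prior=0.10, power=0.5))
print(ppv(prior=0.02, power=0.5))
```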

comment by gwern · 2011-12-19T22:28:36.639Z · LW(p) · GW(p)

Of course there is data. Besides the Ioannidis citations in the linked article, I also linked my previous post on the topic which, among other things, links to my section in the DNB FAQ on this topic with dozens of links/citations.

Replies from: None
comment by [deleted] · 2011-12-19T23:59:21.914Z · LW(p) · GW(p)

My bad, only browsed through "Why Science Is Failing Us", behaved kind of like a politician, will do my homework before opening my mouth next time.

But I still think that one should use medical epidemiology instead of the cluster word medical science.

comment by Kaj_Sotala · 2011-12-20T18:43:10.788Z · LW(p) · GW(p)

How much is a high percentage?

From the article:

One study, for instance, analyzed 432 different claims of genetic links for various health risks that vary between men and women. Only one of these claims proved to be consistently replicable. Another meta review, meanwhile, looked at the 49 most-cited clinical research studies published between 1990 and 2003. Most of these were the culmination of years of careful work. Nevertheless, more than 40 percent of them were later shown to be either totally wrong or significantly incorrect.

Those didn't analyze all of medicine, of course, but it does sound pretty bad for the overall percentage.

comment by DanArmak · 2011-12-20T12:07:56.501Z · LW(p) · GW(p)

Then our expectations are wrong. The effectiveness of science should add up to normality.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2011-12-21T05:37:15.571Z · LW(p) · GW(p)

Correct; of course, that's still a problem.

comment by Pfft · 2011-12-22T10:58:41.783Z · LW(p) · GW(p)

I guess one qualitative difference is the fact that drug companies are now cutting back on research, suggesting that area of science has passed the point where it can pay for itself.

Something similar happened in particle physics: in the early 20th century experiments were cheap (and fit on a tabletop), yet the value of the discoveries was immense (x-rays, nuclear power). Nowadays the experiments needed to make new discoveries are staggeringly expensive (LHC, SSC), and they are not expected to have any technological implications at all (since the new science will only be relevant under extreme conditions). So investing in particle physics research went from being free money to being a net cost.

comment by Daniel_Burfoot · 2011-12-20T03:11:37.055Z · LW(p) · GW(p)

A better subtitle for the article would be "why statistics is failing us".

comment by MixedNuts · 2011-12-19T19:57:19.832Z · LW(p) · GW(p)

Summary: "Coincidences exist."

comment by Kaj_Sotala · 2011-12-20T18:46:40.726Z · LW(p) · GW(p)

Most recently, two leading drug firms, AstraZeneca and GlaxoSmithKline, announced that they were scaling back research into the brain. The organ is simply too complicated, too full of networks we don’t comprehend.

Of course, pharmaceutical research on the brain isn't the same as cognitive science research on the brain, but still, I'm updating to have a somewhat lower estimate of "P(the brain will be reverse engineered during the next 50 years)" as a result of reading this. (Though there are still partial algorithmic replications of the hippocampus and the cerebellum which do make it seem relatively probable that the reverse engineering will succeed nevertheless.)

comment by byrnema · 2011-12-20T13:57:56.102Z · LW(p) · GW(p)

Interesting article, thanks for directing my attention towards it.

Reading through the comments, we all seem to agree: there's nothing wrong with science. (I've grown to expect misleading titles and thesis statements that push too far, it seems part of the blog/internet culture, and the article can still be read for the interesting connecting bits.)

There's nothing wrong with "science" ... I interpret the article as pointing out the problem of induction in the context of a complex system with a limited number of observations. For example, an animation is a very complex system -- much more complex than Newtonian physics, requiring a model of the specific intentions of a human mind.

Given an observation, the scientific method goes, you form a hypothesis. Then you test that hypothesis, especially for the broader context that you would like to apply it to. Michotte's subjects formed a hypothesis about blue and red balls in a set of animations that would not hold up to further observations. Likewise, Pfizer formed hypotheses about cholesterol interactions in human systems that did not hold up. This is the scientific method working, just as well as ever.

An example of the scientific method not working would be experiments that change their behavior depending on what your expectations are and what hypotheses you are forming (exclude anything in psychology for the moment). For example, it would be really weird if objects knew you expected them to fall down due to gravity and were just obliging. (The scientific worldview is rejecting that sort of hypothesis universally in the absence of any evidence for it.)

comment by Dreaded_Anomaly · 2011-12-24T03:59:51.001Z · LW(p) · GW(p)

It's unfortunate that the important points in this article are surrounded by such fallacious statements as "It’s mystery all the way down." I would love to use the good parts as a discussion piece elsewhere, but it's not worth the risk that people will only take away the author's irrational conclusions.