"No evidence" as a Valley of Bad Rationality

post by adamzerner · 2020-03-28T23:45:44.927Z · score: 106 (43 votes) · 14 comments

Quick summary of Doctor, There are Two Kinds of “No Evidence”: a doctor tells a cancer patient's family that there is “no evidence” that additional chemotherapy would help. But “no evidence” can mean two very different things: “studies were run and found no effect,” or “no studies have been run at all.” Here it was the latter.

Let me be clear about the mistake the doctor is making: he's focused on conclusive evidence. To him, if the evidence isn't conclusive, it doesn't count.

I think this doctor is stuck in a Valley of Bad Rationality. Here's what I mean: he started from the common-sense intuition that more chemo kills more cancer cells. Then an intro statistics class taught him to treat anything short of a statistically significant study as no evidence at all. In the intro class, he took one step backwards. At that point he's in the Valley of Bad Rationality: education made him worse than where he started.

I think that a lot of people are stuck in this same valley.

14 comments


comment by Jay Molstad (jay-molstad) · 2020-03-29T02:58:42.068Z · score: 30 (15 votes)

I've definitely seen this in the academic literature. And it's extra annoying if the study used a small sample; the p-values are going to be large simply because the study didn't collect much evidence.
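
A quick illustration of the small-sample point (a minimal sketch, not from the original comment; the counts are hypothetical and it assumes Python with scipy installed): Fisher's exact test on a 13/15-vs-10/15 result, and on the very same proportions at ten times the sample size.

```python
# Minimal sketch: the same observed effect size, tested at two sample sizes.
from scipy.stats import fisher_exact

# Hypothetical counts: [improved, did not improve] per arm.
small = [[13, 2], [10, 5]]        # 15 patients per arm
large = [[130, 20], [100, 50]]    # identical proportions, 150 per arm

for label, table in [("n=15 per arm ", small), ("n=150 per arm", large)]:
    odds_ratio, p_value = fisher_exact(table)
    print(f"{label}: odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")

# n=15 per arm : p ≈ 0.39   -> "no evidence", despite a sizable effect
# n=150 per arm: p < 0.001  -> the same effect is now "significant"
```

Both studies see exactly the same effect size; only the amount of evidence collected differs.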

OTOH, chemotherapy isn't a very good example because there are other factors at work:

  • Chemotherapy has serious side effects. There are good reasons to be cautious about giving more of it than the standard course calls for.
  • There are also not-as-good reasons to avoid using extra chemotherapy. Medical care is highly regulated and liability-prone (to varying extents in various areas). In the US, insurers are notoriously reluctant to pay for any treatment they consider unnecessary. Departing from standard practice is likely to be expensive.

comment by Richard_Ngo (ricraz) · 2020-03-30T13:22:12.438Z · score: 10 (4 votes)

I think the fact that chemotherapy isn't a very good example demonstrates a broader problem with this post: that maybe in general your beliefs will be more accurate if you stick with the null hypothesis until you have significant evidence otherwise. Doing so often protects you from confirmation bias, bias towards doing something, and the more general failure to imagine alternative possibilities. Sure, there are some cases where, on the inside view, you should update before the studies come in, but there are also plenty of cases where your inside view is just wrong.

comment by PatrickDFarley · 2020-03-29T00:49:43.231Z · score: 11 (8 votes)

I like this, thanks for posting. I've noticed there's a contrarian thrill in declaring, "Actually there's no evidence for that" / "Actually that doesn't count as evidence."

Academics love it when some application of math/statistics allows them to say the opposite of what people expect. There's this sense that anything that contradicts "common sense" must be the enlightened way of thinking, rising above the "common," "ignorant" thinking of the masses (aka non-coastal America).

comment by Andrew Jacob Sauer (andrew-jacob-sauer) · 2020-03-29T20:16:51.773Z · score: 3 (2 votes)

It's hard to tell: common sense is sometimes wrong, but it's right more often than not. An idea being common sense shouldn't count against it, even though, as the article says, it isn't conclusive evidence in its favor.

comment by Jacobian · 2020-03-30T14:19:11.706Z · score: 7 (3 votes)

I just thought of this in the context of this study on hydroxychloroquine in which 14/15 patients on the drug improved vs 13/15 patients treated with something else. To the average Joe, HCQ curing 14/15 people is an amazing positive result, and it's heartening to know that other antivirals are almost as good. To the galaxy-brained journalist, there's p>0.05 and so "the new study casts doubt on hydroxychloroquine effectiveness... a prime example of why Trump shouldn't be endorsing... actually isn't any more effective."
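
For concreteness, here is what that study's own counts look like under Fisher's exact test (my choice of test for this 2x2 table, not necessarily the one the study used; assumes scipy):

```python
# The study's counts: 14/15 improved on HCQ vs 13/15 on the comparison arm.
from scipy.stats import fisher_exact

table = [[14, 1],   # HCQ arm: [improved, did not improve]
         [13, 2]]   # comparison arm
odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")  # p = 1.00

# With 15 patients per arm, a one-patient difference is exactly what
# chance would produce; the data can't support "casts doubt on" any
# more than they support "confirms".
```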

comment by Nick_Tarleton · 2020-03-30T17:27:07.079Z · score: 11 (6 votes)

And the correct reaction (and the study's own conclusion) is that the sample is too small to say much of anything.

(Also, the "something else" was "conventional treatment", not another antiviral.)

comment by Jay Molstad (jay-molstad) · 2020-03-30T21:45:04.682Z · score: 1 (1 votes)

Well, we can say that 27/30 (90%) patients improved. With a very high level of confidence, we can say that this disease is less fatal than Ebola (which would have killed 26 or so).
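
As a sanity check on that claim (a minimal sketch, assuming scipy, and taking an Ebola-like fatality rate of 87% so that 0.87 × 30 ≈ 26 deaths, matching the "26 or so" above):

```python
# If the true fatality rate were Ebola-like (~87%), how likely is it that
# at most 3 of 30 patients die? (27/30 improved, so treat 3 deaths as the
# worst case consistent with the data.)
from scipy.stats import binom

prob_at_most_3 = binom.cdf(3, 30, 0.87)  # P(deaths <= 3 | fatality 87%)
print(prob_at_most_3)  # on the order of 1e-21: effectively zero
```

So even 30 patients are plenty to rule out Ebola-level lethality; what they can't do is resolve a difference as small as 14/15 vs 13/15.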

comment by Nick_Tarleton · 2020-03-30T17:33:47.963Z · score: 6 (4 votes)

Upvoted, but weighing in the other direction: Average Joe also updates on things he shouldn't, like marketing. I expect the doctor to have moved forward some in resistance to BS (though in practice, not as much as he would if he were consistently applying his education).

comment by Pattern · 2020-03-29T05:47:26.292Z · score: 5 (3 votes)

Upon seeing the title, I guessed this piece was going to argue that people are often right without evidence. Instead the OP argued against believing something without evidence.

The OP says: "In the intro class, he took one step backwards. At that point he's in the Valley of Bad Rationality: education made him worse than where he started."

But is the doctor worse or better for it (even assuming that this story, which is second-hand at best, is accurate)? And how do we know?

comment by orthonormal · 2020-03-29T06:13:18.778Z · score: 33 (17 votes)

In general, it's good to check your intuitions against evidence where possible (so, seek out experiments and treat experimentally validated hypotheses as much stronger than intuitions).

The valley being described here is the idea that you should just discard your intuitions in favor of the null hypothesis, not just when experiments have failed to reject the null hypothesis (though even here, they could just be underpowered!), but when experiments haven't been done at all!

It's a generalized form of an isolated demand for rigor, where whatever gets defined as a null hypothesis gets a free pass, but anything else has to prove itself to a high standard. And that leads to really poor performance in domains where evidence is hard to come by (quickly enough), relative to trusting intuitive priors and weak evidence when that's all that's available.

comment by Pattern · 2020-03-30T02:27:17.608Z · score: 3 (2 votes)

Having the reverse as the null hypothesis is also bad. Which is worse?

comment by elriggs · 2020-04-01T21:20:41.390Z · score: 3 (2 votes)

Correct: favoring hypothesis H or NOT-H simply because you labeled one of them the "null hypothesis" is bad either way, and equally bad when you don't have evidence in either direction.

In this case, intuition favors "more chemo should kill more cancer cells", and intuition counts as some evidence. The doctor ignores intuition (which is the only evidence we have here) and favors the opposite hypothesis because it's labeled "null hypothesis".

comment by Pattern · 2020-04-01T23:14:58.750Z · score: 2 (1 votes)

I was suggesting that there might be multiple ways of assigning the label of "null hypothesis":

X is good, so more X is good. (Intuition favors "more chemo should kill more cancer cells.")

X has a cost, so we go as far as the standards say and stop there. (Chemo kills cells: your healthy cells as well as cancer cells. Maybe chemo isn't like shooting someone, in that the patient isn't that likely to die as a result, but just as you wouldn't shoot someone to improve their health unless it was absolutely necessary, and then no more than necessary, chemo should be treated the same way.) "Do no harm." (This may implicitly distinguish between action and inaction.)

comment by jmh · 2020-03-30T22:45:17.665Z · score: 2 (2 votes)

Shouldn't the follow-up to "there's no evidence it does any good" be "Is there any evidence it does harm?"

Have you seen this before? Any thoughts on how it might bear on your examples?

I am not defending the arrogance of some doctors, but I do wonder whether you truly gave the doctor in question a full opportunity here, or whether you biased the discussion by stating things in a way that set up an adversarial framework rather than allowing a good discussion to ensue.

I wonder how much a belief in the Hippocratic Oath might be at play here.