Hydroxychloroquine: the day science stopped
post by Filipe Marchesini (filipe-marchesini) · 2020-04-05T18:09:58.290Z · LW · GW · 2 comments
This is a translation of part (32%, what I consider the most important bits) of the original article "Hidroxicloroquina: o dia em que a ciência parou" by Luis Cláudio Correia, posted on the Evidence Based Medicine Blog, 20th March 2020. Please correct misspellings; any English mistakes should be attributed to my translation, not to Luis Claudio Correia. (There are no good translators from Portuguese to English, or vice-versa, to date, except DeepL. Google Translate does a terrible job. Most of the time I use DeepL.com and manually correct mistaken translations.) Let me know if you want me to translate the whole article. I would also like to know if there exists a LessWrong Discord.
[...]
The French Article
This article is a cluster of biases. And not just the classic biases: there are practices that do not even appear on the traditional checklists. Something almost caricatural. Twenty-six patients used hydroxychloroquine versus 16 control patients. A hospital in Marseille recruited the treated patients, and centers in other regions recruited the controls. The outcome was a surrogate: virologic negativity of the nasal swab on the sixth day.
At day 6 post-inclusion, 70% of hydroxychloroquine-treated patients were virologically cured, compared with 12.5% in the control group (p = 0.001).
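The percentages imply counts of 14/20 treated and 2/16 control patients PCR-negative at day 6 (26 enrolled minus the 6 exclusions discussed below). A minimal sketch, assuming those back-calculated counts and Fisher's exact test (the paper's own test may differ), reproduces the order of magnitude of the reported p-value:

```python
# Assumed counts, back-calculated from the reported percentages:
# 70% of 20 analyzed treated patients = 14; 12.5% of 16 controls = 2.
from scipy.stats import fisher_exact

table = [[14, 6],   # treated: PCR-negative, PCR-positive at day 6
         [2, 14]]   # control: PCR-negative, PCR-positive at day 6

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.4f}")  # p on the order of 0.001
```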
First, confounding bias. Randomization would have prevented confounding, but there was no randomization. This study goes further: it actively creates confounding. Failing to prevent it is not the same as causing it.
The unusual part: patients in Marseille who refused treatment were kept in the study as the control group! This creates great baseline heterogeneity between the groups, because patients who refuse are different from patients who accept. A patient who refuses a treatment should not be included in the study. In fact, in a clinical trial, potential volunteers do not refuse the treatment; what they refuse is to enter the trial.
Following the same pattern of irrationality, patients who met exclusion criteria (comorbidities, contraindications to the drug) were included in the study as controls. More severe patients, initially in the drug group, were transferred to the control group.
Second, the study excludes from the treatment group patients who did not complete the treatment, in a gross violation of the intention-to-treat principle. They inadequately call this "loss of follow-up." There was no loss of follow-up: the patients were still available to be evaluated. In fact, this is a per-protocol analysis, in which 6 of the 26 patients left the study: 3 because they went to the ICU, 1 because the patient died (!!), 1 because of nausea, and 1 because the patient was discharged from hospital. Of the 6, 5 stopped treatment because they got worse! And these were excluded from the treatment group.
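To see how much this matters, here is a minimal sketch contrasting the per-protocol figure with a conservative intention-to-treat reading in which the six excluded treated patients are counted as virologic failures (an assumption of mine; the paper reports no ITT analysis):

```python
# Counts implied by the article: 14 of the 20 analyzed treated patients
# PCR-negative at day 6, out of 26 originally enrolled; 2 of 16 controls.
treated_negative = 14
treated_analyzed = 20   # per-protocol denominator
treated_enrolled = 26   # intention-to-treat denominator
control_negative, control_total = 2, 16

per_protocol = treated_negative / treated_analyzed    # 70.0%
itt_worst_case = treated_negative / treated_enrolled  # ~53.8%, exclusions counted as failures
control_rate = control_negative / control_total       # 12.5%

print(f"per-protocol: {per_protocol:.1%}")
print(f"ITT, exclusions as failures: {itt_worst_case:.1%}")
print(f"control: {control_rate:.1%}")
```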
Third, there is the possibility of performance bias, promoted by differences in general care between the groups. It is an open-label study, with the treatment group in a different hospital from the control group. We should look for signs: six patients from the treatment group spontaneously received azithromycin. Not that azithromycin will solve anything, but it is a sign of greater attention, or of adjunct care being prescribed, in that group.
Fourth, risk of random error. This is clearly a small study, which greatly increases the probability of random error. The sample-size calculation gives the illusion of 85% power. However, it does not state the assumed swab-positivity rate in the control group, and it posits an unusually large efficacy of 50%, something too good to be true for most treatments, let alone for one with such a low pre-test probability. So here we have a small study with a high risk of random error. And for a small study to demonstrate "statistical significance," the observed effect needs to be so large that it becomes "too good to be true."
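A minimal sketch of that power argument, assuming a standard two-proportion test (the paper does not spell out its calculation) and purely hypothetical positivity rates of my own choosing, shows that roughly 85% power at this sample size indeed requires a 50-percentage-point effect:

```python
# Hypothetical illustration: the control positivity rate is not reported,
# so the 75%-vs-25% rates below are assumptions of mine, chosen to show
# what a "50% efficacy" premise implies at this sample size.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

n_treated, n_control = 20, 16
effect = proportion_effectsize(0.75, 0.25)  # 50-percentage-point difference (Cohen's h)
power = NormalIndPower().power(effect_size=effect,
                               nobs1=n_treated,
                               ratio=n_control / n_treated,
                               alpha=0.05)
print(f"power to detect 75% vs 25% positivity: {power:.2f}")  # ~0.88, near the claimed 85%
```

A more modest (and more realistic) effect size would leave the study badly underpowered, which is exactly the point about random error.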
It amounts to an 82% relative reduction in positivity. This usually does not happen. We must recognize, however, that for antibiotic treatment this magnitude of effect can occur: if we compared an antibiotic to placebo in a severe bacterial infection, the effect size would be very large. On the other hand, we should avoid the illusion that we are using an "antibiotic" for the coronavirus.
Finally, we are faced with a laboratory result, what we know as a surrogate endpoint. Even if we considered this result reliable, great uncertainty about clinical benefit would remain.
For all of this, this is a study to be discarded, given its high risk of bias and of random error. In Bayesian scientific thinking, this study does not increase the probability that the hypothesis is true.
The Pre-Test Probability
There is evidence that the drug has antiviral action in vitro. This brings plausibility, which should not be confused with probability. Probability is mainly influenced by hypothesis-generating clinical data (specific analysis) and by how the scientific ecosystem has behaved in similar cases (generic analysis). We have no previous empirical data, and it is unusual for a treatment for one type of condition (malaria, a protozoan disease) to have relevant efficacy against another (a viral disease). The probabilistic estimate should be made this way, thinking about the whole, not about the specificity brought by plausibility arguments.
Thus, we start with a treatment of very low pre-test probability, we come across a study with no value for modifying that probability, and we end up with the illusion of a treatment with a high probability of benefit. What happened to our scientific minds?
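A minimal sketch of that Bayesian point, with illustrative numbers of my own choosing (the article gives no quantitative prior): a study whose result is about as likely under "no effect" as under "real effect" has a likelihood ratio near 1 and barely moves the prior.

```python
# Odds-form Bayes update: posterior odds = prior odds * likelihood ratio.
def posterior(prior: float, likelihood_ratio: float) -> float:
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.05  # hypothetical low pre-test probability for an antimalarial vs a virus
for lr in (1, 2, 10):
    print(f"LR = {lr:>2}: posterior = {posterior(prior, lr):.1%}")
# A study with high risk of bias and random error has LR close to 1:
# the posterior stays close to the 5% prior, not "high probability of benefit".
```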
[...]
2 comments
comment by renato · 2020-04-05T19:01:06.856Z · LW(p) · GW(p)
The paragraph that starts with "Fourth, ..." is shown twice.
↑ comment by Filipe Marchesini (filipe-marchesini) · 2020-04-05T19:58:01.022Z · LW(p) · GW(p)
Thanks, corrected it now.