Comment by Norman_Hall on Beautiful Probability · 2008-01-14T18:11:43.000Z

I am by no means an expert in statistics, but I do appreciate Eliezer Yudkowsky's essay, and I think I get his point that, given only experiment A and experiment B as reported, there may be no reason to treat them differently IF WE DON'T KNOW of the difference in protocol (if those thoughts are truly private). But it does seem rather obvious that, if there were a number of independent experiments run under protocols A and B, and we attempted a meta-analysis combining the results of all of them, quite a number of the protocol-B experiments would have n greater than 100. With the protocol as stated, these would all end with a cure rate greater than, but very close to, 60%. If we assume that the "real" cure rate in the population is close to 70%, then, unless some Bayesian term is introduced to account for the bias in methodology, the meta-analysis would seem to be biased toward the incorrect conclusion that the lower 60% figure is closer to reality. Presumably that kind of bias would be noticed in the experiments with n > 100, and could not have been kept as a private thought across a large number of repeat experiments.
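
To make that concrete, here is a rough simulation sketch of my own (not from the essay). I am assuming that protocol B means "treat patients one at a time and stop the first time the observed cure rate exceeds 60%", and that the true cure rate is 70%; the cap on run length is only there so that every run terminates.

```python
import random

# Rough simulation of my reading of protocol B (my own illustration, not from
# the essay): treat patients one at a time and stop the first time the observed
# cure rate exceeds 60%. True cure rate assumed to be 70%; the cap on run
# length is only there so every run terminates.
TRUE_CURE_RATE = 0.70
THRESHOLD = 0.60
MAX_PATIENTS = 10_000
RUNS = 20_000

random.seed(0)
long_runs = []  # (n, final cure rate) for runs that needed more than 100 patients

for _ in range(RUNS):
    cures = 0
    for n in range(1, MAX_PATIENTS + 1):
        cures += random.random() < TRUE_CURE_RATE
        if cures / n > THRESHOLD:
            break
    if n > 100:
        long_runs.append((n, cures / n))

if long_runs:
    average_rate = sum(rate for _, rate in long_runs) / len(long_runs)
    print(f"{len(long_runs)} of {RUNS} runs needed more than 100 patients")
    print(f"their average final cure rate: {average_rate:.3f} (true rate {TRUE_CURE_RATE})")
else:
    print("no runs needed more than 100 patients in this simulation")
```

If that reading of the protocol is right, the runs that last beyond 100 patients all finish just barely above 60%, so pooling only those runs would put the estimate near 60% rather than 70%.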

I am not sure, but I would think that, if Bayesian analysis is (or can be) as rigorous as claimed, then even the analysis of the original pair might be expected to include some term reflecting that potential bias due to the difference in protocol, IF THAT DIFFERENCE IS KNOWN to the Bayesian statistician doing the analysis. I find it disturbing that experiment B could have come out with n = 100 and a cure rate of 60%, or n = 1000 and a cure rate of 60%, but never with a cure rate of 59%, no matter how large n might have become.
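
For concreteness, my understanding of the Bayesian calculation being defended is something like the sketch below. This is my own illustration, not taken from the essay: I assume a uniform prior on the cure rate and take the reported data to be 70 cures out of 100 patients. The claim, as I understand it, is that the stopping rule only contributes a factor that does not depend on the cure rate, so it cancels when the posterior is normalized; that is exactly the step I would want spelled out.

```python
import numpy as np
from math import comb

# My own sketch, not from the essay: r cures in n patients, uniform prior on
# the cure rate theta, posterior computed on a grid of theta values.
n, r = 100, 70                       # the same reported data for both experiments
theta = np.linspace(0.001, 0.999, 999)
prior = np.ones_like(theta)          # uniform prior

# Protocol A (n fixed in advance): P(data | theta) = C(n, r) * theta^r * (1 - theta)^(n - r)
lik_fixed_n = comb(n, r) * theta**r * (1 - theta)**(n - r)

# Protocol B (stop when the running cure rate first exceeds 60%): the probability
# of the observed outcome is theta^r * (1 - theta)^(n - r) times a count of
# admissible patient sequences, a factor that does not involve theta at all.
lik_stopping = theta**r * (1 - theta)**(n - r)   # theta-dependent part only

post_fixed_n = prior * lik_fixed_n / (prior * lik_fixed_n).sum()
post_stopping = prior * lik_stopping / (prior * lik_stopping).sum()
print("largest difference between the two posteriors:",
      np.abs(post_fixed_n - post_stopping).max())
```

Whether that cure-rate-independent factor really is the whole story, given that the stopping rule constrains which results can ever be reported in the first place, is the part I remain unsure about.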