Are calibration and rational decisions mutually exclusive? (Part one)
post by Cyan · 2009-07-23T05:15:45.853Z · LW · GW · Legacy · 19 comments
I'm planning a two-part sequence with the aim of throwing open the question in the title to the LW commentariat. In this part I’ll briefly go over the concept of calibration of probability distributions and point out a discrepancy between calibration and Bayesian updating.
It's a tenet of rationality that we should seek to be well-calibrated. That is, suppose that we are called on to give interval estimates for a large number of quantities; we give each interval an associated epistemic probability. We declare ourselves well-calibrated if the relative frequency with which the quantities fall within our specified intervals matches our claimed probability. (The Technical Explanation of Technical Explanations discusses calibration in more detail, although it mostly discusses discrete estimands, while here I'm thinking about continuous estimands.)
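Here is a minimal simulation sketch of what such a calibration check might look like; the forecaster, the noise scale, and the 90% level are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a forecaster states 90% intervals for 1000 quantities.
# The true quantities are simulated, and the forecaster's intervals are
# centered on noisy estimates with half-width chosen to target 90% coverage.
n = 1000
truth = rng.normal(size=n)
estimate = truth + rng.normal(scale=0.5, size=n)
half_width = 0.5 * 1.645  # 90% half-width when the noise scale (0.5) is known

lo, hi = estimate - half_width, estimate + half_width
hit_rate = np.mean((lo <= truth) & (truth <= hi))
print(f"claimed coverage: 0.90, observed: {hit_rate:.3f}")
# Well-calibrated means hit_rate ~ 0.90, up to sampling error.
```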
Frequentists also produce interval estimates, at least when "random" data is available. A frequentist "confidence interval" is really a function from the data and a user-specified confidence level (a number from 0 to 1) to an interval. The confidence interval procedure is "valid" if in a hypothetical infinite sequence of replications of the experiment, the relative frequency with which the realized intervals contain the estimand is equal to the confidence level. (Less strictly, we may require "greater than or equal" rather than "equal".) The similarity between valid confidence coverage and well-calibrated epistemic probability intervals is evident.
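The same kind of check on the frequentist side: a sketch of valid confidence coverage for the textbook known-variance normal-mean interval, with the true values made up for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

mu, sigma, n = 10.0, 2.0, 25   # hypothetical true mean, noise scale, sample size
z = 1.96                       # ~95% confidence level
reps = 100_000                 # hypothetical replications of the experiment

data = rng.normal(mu, sigma, size=(reps, n))
means = data.mean(axis=1)
half = z * sigma / np.sqrt(n)  # known-sigma interval, for simplicity

covered = np.abs(means - mu) <= half
print(f"nominal: 0.95, realized coverage: {covered.mean():.4f}")
# A "valid" procedure makes realized coverage match the nominal level.
```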
This similarity suggests an approach for specifying non-informative prior distributions: we require that such priors yield posterior intervals that are also valid confidence intervals in the frequentist sense. This "matching prior" program does not succeed in full generality. There are a few special cases of data distributions for which a matching prior exists, but by and large, posterior intervals can at best achieve only asymptotically valid confidence coverage. Furthermore, as I understand the material, if your model of the data-generating process contains more than one scalar parameter, you must pick one "interest parameter" and be satisfied with good confidence coverage for the marginal posterior intervals of that parameter alone. For approximate matching priors with the highest order of accuracy, a different choice of interest parameter usually implies a different prior.
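One special case where an exact matching prior does exist is the normal mean with known variance: under the improper flat prior, the central posterior interval coincides with the classical confidence interval. A sketch (the data are simulated; the flat-prior posterior is the standard conjugate result):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

sigma, n = 3.0, 20                     # hypothetical known noise scale, sample size
data = rng.normal(5.0, sigma, size=n)  # hypothetical measurements
xbar, se = data.mean(), sigma / np.sqrt(n)

# Frequentist 95% CI for the mean (sigma known): xbar +/- z * se.
z = stats.norm.ppf(0.975)
freq_ci = (xbar - z * se, xbar + z * se)

# Bayesian: under the improper flat prior p(mu) proportional to 1, the
# posterior is N(xbar, se^2); its central 95% interval is the same interval.
bayes_ci = stats.norm.interval(0.95, loc=xbar, scale=se)

print(freq_ci, bayes_ci)  # identical up to floating point
```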
The upshot is that we have good reason to think that Bayesian posterior intervals will not be perfectly calibrated in general. I have good justifications, I think, for using the Bayesian updating procedure, even if it means the resulting posterior intervals are not as well-calibrated as frequentist confidence intervals. (And I mean good confidence intervals, not the obviously pathological ones.) But my justifications are grounded in an epistemic view of probability, and no committed frequentist would find them as compelling as I do. However, there is an argument for Bayesian posteriors over confidence intervals than even a frequentist would have to credit. That will be the focus of the second part.
19 comments
comment by Psychohistorian · 2009-07-23T06:00:51.959Z · LW(p) · GW(p)
I don't get it.
I admit my math background is limited to upper-division undergraduate, and I admit I could have tried harder to make sense of the jargon, but after reading this a few times, I really just don't get what your point is, or even what kind of thing your point is supposed to be.
Replies from: Cyan, None
↑ comment by Cyan · 2009-07-23T20:38:51.441Z · LW(p) · GW(p)
The short short version of this part of the argument reads:
What Bayesians call calibration, frequentists call valid confidence coverage. Bayesian posterior probability intervals do not have valid confidence coverage in general; priors that can guarantee it do not exist.
↑ comment by [deleted] · 2009-07-23T06:20:48.443Z · LW(p) · GW(p)
Suppose the actual frequentist probability of an event is 90%. Your prior distribution for the frequentist probability of the event is uniform. Your Bayesian probability of the event will start at 50% and approach 90%; in the long run, the average will be less than 90%.
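A quick sketch of this effect, assuming the event is a Bernoulli trial with rate 0.9 and the uniform prior is Beta(1, 1) (both choices are just for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

p_true = 0.9
n = 1000
flips = rng.random(n) < p_true

# Predictive probability before each observation, under a uniform prior:
# P(next success | k successes in t trials) = (k + 1) / (t + 2).
k = np.concatenate(([0], np.cumsum(flips)[:-1]))
t = np.arange(n)
predictive = (k + 1) / (t + 2)

print(predictive[0])    # 0.5: starts at the prior mean
print(predictive[-1])   # close to 0.9 after many observations
print(predictive.mean())  # averaged over the whole run: below 0.9
```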
If the post is getting at more than this, I understand as little as you do. My answer to the title question was "no, they can't be" going in, and if the post is trying to say something I haven't understood, then I hope to convince the author e's wrong through sheer disagreement.
Replies from: Cyan
↑ comment by Cyan · 2009-07-23T19:27:19.011Z · LW(p) · GW(p)
Try rephrasing your first paragraph when the quantity of interest is not a frequency but, say, Avogadro's number, and you're Jean Perrin trying to determine exactly what that number is.
A frequentist would take a probability model for the data you're generating and give you a confidence interval. A billion scientists repeat your experiments, getting their own data and their own intervals. Among those intervals, the proportion that contain the true value of Avogadro's number is equal to the confidence (up to sampling error).
A Bayesian would take the same probability model, plus a prior, and combine them using Bayes. Each scientist may have her own prior, and posterior calibration is only guaranteed if (i) all the priors taken as a group were calibrated, or, (ii) everyone is using the matching prior if it exists (these are typically improper, so prior calibration cannot be calculated).
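A sketch of case (i), assuming an idealized conjugate setup in which the true value really is drawn from the scientists' shared prior:

```python
import numpy as np

rng = np.random.default_rng(4)

m0, tau = 0.0, 2.0     # shared prior: mu ~ N(m0, tau^2)
sigma, n = 1.0, 5      # each scientist's noise scale and sample size
reps = 100_000         # one simulated "scientist" per replication
z = 1.96

mu = rng.normal(m0, tau, size=reps)        # truths drawn from the prior
xbar = rng.normal(mu, sigma / np.sqrt(n))  # each scientist's sample mean

# Conjugate posterior: precision-weighted combination of prior and data.
post_prec = 1 / tau**2 + n / sigma**2
post_mean = (m0 / tau**2 + n * xbar / sigma**2) / post_prec
post_sd = np.sqrt(1 / post_prec)

covered = np.abs(post_mean - mu) <= z * post_sd
print(f"posterior 95% intervals contain the truth {covered.mean():.4f} of the time")
# Coverage matches 95% because the ensemble of priors is itself calibrated.
```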
comment by cousin_it · 2009-07-23T09:22:32.510Z · LW(p) · GW(p)
Please provide an example where frequentists get exact answers and Bayesians get only approximations, all from the same data. This looks highly improbable to me. Or did you mean something else?
Replies from: Cyan
↑ comment by Cyan · 2009-07-23T19:14:18.062Z · LW(p) · GW(p)
No, this is more-or-less what I meant. I equivocate on "exact," because I regard the Bayesian answer as exactly what one actually wants, and perfect frequentist validity as a secondary consideration. To provide the example you requested, I'll have to go searching for one of the papers that set off this line of thought -- the bloody thing's not online, so it might take a while.
comment by Vladimir_Nesov · 2009-07-23T10:33:37.946Z · LW(p) · GW(p)
Could you state your point with math? I don't understand what you are saying.
Replies from: Cyan
↑ comment by Cyan · 2009-07-23T20:33:51.896Z · LW(p) · GW(p)
You can find some of the math, and pointers into the literature, in this paper.
comment by Venu · 2011-08-02T21:39:06.407Z · LW(p) · GW(p)
I came to this post via a Google search (hence this late comment). The problem that Cyan's pointing out - the lack of calibration of Bayesian posteriors - is a real problem, and in fact something I'm facing in my own research currently. Upvoted for raising an important, and under-discussed, issue.
comment by PhilGoetz · 2009-07-27T16:48:44.441Z · LW(p) · GW(p)
"The upshot is that we have good reason to think that Bayesian posterior intervals will not be perfectly calibrated in general."
This seems to be the main point of your post, and yet nothing in the post seems to be connected to it.
Replies from: Cyan
↑ comment by Cyan · 2009-07-27T19:04:56.986Z · LW(p) · GW(p)
The ideas of the post are: calibration seems to me to be equivalent to confidence coverage (second and third paragraphs); in general, Bayesian posterior intervals do not have valid confidence coverage (fourth paragraph). The sentence you quote above follows from these two ideas.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2009-08-04T21:29:56.266Z · LW(p) · GW(p)
Okay, that helps. My problem is that, on re-reading, I still don't know what the 4th paragraph means.
This similarity suggests an approach for specifying non-informative prior distributions
Why would anybody want non-informative distributions?
by and large, posterior intervals can at best produce only asymptotically valid confidence coverage.
I don't know what it means for a confidence interval to be asymptotically valid, or why posterior intervals behave this way. This seems like an important point that should be justified.
if your model of the data-generating process contains more than one scalar parameter, you have to pick one "interest parameter" and be satisfied with good confidence coverage for the marginal posterior intervals for that parameter alone
You lost me entirely.
Replies from: Cyan
↑ comment by Cyan · 2009-08-04T21:48:05.132Z · LW(p) · GW(p)
Why would anybody want non-informative distributions?
To have a prior distribution to use when very little is known about the estimand. It's meant to somehow capture the notion of minimal prior knowledge contributing to the posterior distribution, so that the data drive the conclusions, not the prior.
I don't know what it means for a confidence interval to be asymptotically valid.
The confidence coverage of a posterior interval is equal to the posterior probability mass of the interval plus a term which goes to zero as the amount of data increases without bound.
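A sketch of that vanishing term, using a binomial model with the Jeffreys prior Beta(1/2, 1/2) as my example; the coverage here is computed exactly by summing the binomial pmf:

```python
import numpy as np
from scipy import stats

def exact_coverage(n, p, level=0.95):
    """Exact coverage of the equal-tailed Jeffreys-prior posterior interval."""
    k = np.arange(n + 1)
    # Posterior after k successes in n trials is Beta(k + 1/2, n - k + 1/2).
    lo = stats.beta.ppf((1 - level) / 2, k + 0.5, n - k + 0.5)
    hi = stats.beta.ppf((1 + level) / 2, k + 0.5, n - k + 0.5)
    contains = (lo <= p) & (p <= hi)
    return stats.binom.pmf(k[contains], n, p).sum()

for n in (10, 100, 1000):
    print(n, exact_coverage(n, p=0.3))
# Coverage is close to 0.95 but not exact; the error generally shrinks with n.
```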
if your model of the data-generating process contains more than one scalar parameter...
E.g., a regression with more than one predictor. Each predictor has its own coefficient, so the model of the data-generating process contains more than one scalar parameter.
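A sketch of that situation, assuming a two-predictor regression with known noise scale and a flat prior, where the first predictor's coefficient is chosen as the interest parameter (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(5)

n = 50
x1, x2 = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])  # intercept plus two predictors
beta_true = np.array([1.0, 2.0, -0.5])
sigma = 1.0
y = X @ beta_true + rng.normal(scale=sigma, size=n)

# With known sigma and a flat prior on beta, the posterior is
# N(beta_hat, sigma^2 (X'X)^{-1}).
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y
post_sd = sigma * np.sqrt(np.diag(XtX_inv))

# Pick x1's coefficient as the "interest parameter" and report its marginal
# 95% posterior interval; the other parameters are nuisance parameters.
j = 1
print(beta_hat[j] - 1.96 * post_sd[j], beta_hat[j] + 1.96 * post_sd[j])
```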
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-23T17:21:15.697Z · LW(p) · GW(p)
Is this a standard frequentist idea? Is there a link to a longer explanation somewhere? Well-calibrated priors and well-calibrated likelihood ratios should result in well-calibrated posteriors.
Replies from: Cyan
↑ comment by Cyan · 2009-07-23T19:05:47.817Z · LW(p) · GW(p)
Valid confidence coverage is a standard frequentist idea. Wikipedia's article on the subject is a good introduction. I've added the link to the post.
The problem is exactly: how do you get a well-calibrated prior when you know very little about the question at hand? If your posterior is well-calibrated, your prior must have been as well. So, seek a prior that guarantees posterior calibration. This is the "matching prior" program I described above.
Replies from: PhilGoetz
↑ comment by PhilGoetz · 2009-07-27T16:50:28.855Z · LW(p) · GW(p)
This sounds like Gibbs sampling or expectation maximization. Are Gibbs and/or EM considered Bayesian or frequentist? (And what's the difference between them?)
Replies from: Cyan
↑ comment by Cyan · 2009-07-27T16:57:51.153Z · LW(p) · GW(p)
Gibbs sampling and EM aren't relevant to the ideas of this post.
Neither Gibbs sampling nor EM is intrinsically Bayesian or frequentist. EM is just a maximization algorithm useful for certain special cases; the maximized function could be a likelihood or a posterior density. Gibbs sampling is just an MCMC algorithm; usually the target distribution is a Bayesian posterior distribution, but it doesn't have to be.
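As a minimal illustration, here is a Gibbs sampler whose target is a plain bivariate normal rather than anyone's posterior (the correlation of 0.8 is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(6)

rho = 0.8          # target: standard bivariate normal with correlation 0.8
x, y = 0.0, 0.0
samples = []

for _ in range(10_000):
    # Alternately draw each coordinate from its conditional distribution:
    # x | y ~ N(rho * y, 1 - rho^2), and symmetrically for y | x.
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    samples.append((x, y))

samples = np.array(samples)
print(np.corrcoef(samples.T))  # off-diagonal entries should be near 0.8
```

Nothing in the algorithm cares where the target distribution came from.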