Summaries of uncertain priors

post by skybrian · 2021-06-03T02:43:15.726Z · LW · GW · 1 comment

This is a question post.


In Bayesian jargon, people talk about having a "flat prior" when they mean being very uncertain about a question. But more often you just give the probability that you think a claim is true. We don't normally draw probability distributions to share with each other.

I'm wondering if there's a natural way to talk not just about the probability you think something is true, but an estimate of your confidence, in some quantitative way? And what would it mean to be "well-calibrated" in your uncertainty?

For example, I might be quite confident that a particular coin is fair and will come up heads 50% of the time (because I've gathered a lot of data), while being much less confident about another 50% bet even though I think it's as likely as not.
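To give a sense of what I have in mind, here is a minimal sketch (my own illustration, with made-up Beta parameters) of reporting a whole distribution over the coin's bias rather than a single number, so that the spread of the distribution carries the confidence:

```python
# A minimal sketch, assuming beliefs about a coin's bias are summarized as
# Beta distributions. The parameters (500/500 vs 1/1) are illustrative only.
from scipy import stats

well_studied_coin = stats.beta(500, 500)  # lots of data: tightly peaked at 0.5
unknown_bet       = stats.beta(1, 1)      # flat prior: maximally uncertain

for name, belief in [("well-studied coin", well_studied_coin),
                     ("unknown 50/50 bet", unknown_bet)]:
    lo, hi = belief.interval(0.9)         # 90% credible interval for the bias
    print(f"{name}: mean = {belief.mean():.2f}, "
          f"90% interval = ({lo:.2f}, {hi:.2f})")
```

Both beliefs have the same mean of 0.5, but the credible interval for the well-studied coin is far narrower.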

Answers

answer by PatrickDFarley · 2021-06-04T15:52:17.355Z · LW(p) · GW(p)

> not just about the probability you think something is true, but an estimate of your confidence, in some quantitative way?

I don't think these are actually different things.

The coin example is misleading. Your confidence that the next toss comes up heads is exactly the same as for any other independent 50% bet. Your confidence that "this is a fair coin", which could be approximated by, say, a bet on getting between 45 and 55 heads in the next 100 tosses, is a different bet and will give an answer different from 50%.
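To make that concrete, here is a minimal sketch (my addition, with illustrative Beta parameters) comparing the two bets: both beliefs give 50% for the next toss, but very different probabilities for getting between 45 and 55 heads in 100 tosses.

```python
# A minimal sketch, assuming the two beliefs are Beta(500, 500) ("pretty sure
# the coin is fair") and Beta(1, 1) ("no idea about the bias").
# The parameters are illustrative only.
from scipy import stats

n_tosses = 100
for name, a, b in [("confident the coin is fair", 500, 500),
                   ("no idea about the bias",      1,   1)]:
    heads = stats.betabinom(n_tosses, a, b)  # heads in 100 tosses, averaging over the unknown bias
    p_next_heads = a / (a + b)               # probability the very next toss is heads
    p_45_to_55 = heads.cdf(55) - heads.cdf(44)
    print(f"{name}: next toss heads = {p_next_heads:.2f}, "
          f"P(45-55 heads in 100) = {p_45_to_55:.2f}")
```

The flat prior spreads the heads count uniformly over 0 through 100, so it assigns only 11/101 (about 11%) to the 45-55 range, while the confident belief concentrates most of its mass there, even though both price the single-toss bet at exactly 50%.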

1 comment

comment by Daniel V · 2021-06-03T03:14:38.349Z · LW(p) · GW(p)

Aleatory and epistemic uncertainty often get wrapped up together, so these estimates are not always proper probabilities or measures of confidence. You're separating them; good for you!