How to quantify uncertainty about a probability estimate?
post by mic (michael-chen) · 2020-05-06T12:06:08.217Z · LW · GW · 2 comments
This is a question post.
How does one express, in numeric terms, how confident or uncertain one is about a probability estimate?
Consider some different situations where you might have 50% confidence in a "yes" answer to a question:
- Will this atom of tritium have decayed after one half-life?
- After this fair coin is flipped, will it land heads?
- After this biased coin is flipped, will it land heads?
- You didn't quite hear a question; all you know is that it's a yes/no question with an objective answer. Is the answer "yes"? (You don't know whether, in general, "yes" answers are more likely to be correct than "no" answers.)
- Will Biden win the 2020 election?
In each of these cases you might assign a 50% probability, but you'd probably be more confident in some of those answers than in others, and that confidence depends on your knowledge of the area. How do we express this difference in confidence?
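One standard Bayesian framing (my addition, not from the original question) is to treat the probability itself as a random variable p with its own distribution. All the cases above share the same point estimate, but the spread of the distribution over p differs enormously:

```latex
% Fair coin: p is known exactly, so the distribution over p is a point mass.
p \sim \delta_{1/2}
  \;\Longrightarrow\; \mathbb{E}[p] = \tfrac{1}{2}, \quad \operatorname{Var}(p) = 0

% Coin of unknown bias: p could be anywhere in [0, 1].
p \sim \mathrm{Uniform}(0, 1)
  \;\Longrightarrow\; \mathbb{E}[p] = \tfrac{1}{2}, \quad \operatorname{Var}(p) = \tfrac{1}{12}
```

On this view, "confidence in a probability estimate" is the spread of your distribution over p: both coins give E[p] = 1/2, but only the fair coin pins p down.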
Here are some possible ways I've thought of:
- Using fuzzy words like “I feel pretty confident in this 50% prediction”.
- Completely avoiding assigning numerical probabilities, in an attempt to prevent people from taking the probabilities more seriously than they should.
- Writing a range of probabilities, such as "5–10%". But it's not clear what such a range actually means: if it's meant as an x% confidence interval, what would a confidence interval over probabilities even mean?
- Drawing a probability density graph, where a wider spread indicates more uncertainty. When there are only two outcomes, like "yes" and "no", we need a workaround, such as asking "What probability would you assign to tribbles [1] being sentient after 1000 more hours of research?" and drawing a density over that estimate (elaborated on in this comment by NunoSempere); see the sketch after this list.
- Using a theory of imprecise probabilities (https://plato.stanford.edu/entries/imprecise-probabilities/), though I'd prefer a theory that boils down to regular probability theory.
- Describing the amount of evidence it would take for you to change your mind. It'd be hard to use this to compare different problems.
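To make the density-graph and evidence-resilience ideas concrete, here is a minimal sketch (my own illustration, assuming SciPy is available; the Beta parameters are invented for the example). Three Beta priors over p all have mean 0.5, but they differ in spread, and a single new observation moves them by very different amounts:

```python
from scipy import stats

# Three states of knowledge that all give a 50% point estimate for p.
# (Illustrative Beta parameters, invented for this sketch.)
priors = {
    "fair coin, well understood": stats.beta(100, 100),  # tightly peaked at 0.5
    "coin of unknown bias": stats.beta(1, 1),             # uniform over [0, 1]
    "a little prior evidence": stats.beta(5, 5),          # somewhere in between
}

for name, prior in priors.items():
    a, b = prior.args
    lo, hi = prior.interval(0.90)             # 90% credible interval for p
    mean_after_heads = (a + 1) / (a + b + 1)  # posterior mean after one observed heads
    print(f"{name}: mean={prior.mean():.2f}, "
          f"90% CI=[{lo:.2f}, {hi:.2f}], "
          f"mean after one heads={mean_after_heads:.2f}")
```

The tight Beta(100, 100) barely budges after one observation, while the uniform Beta(1, 1) jumps from 0.50 to about 0.67: the width of the distribution is exactly what "how much evidence would it take to change my mind" is tracking, and the 90% credible interval gives one principled reading of a range like "5–10%".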
Answers
2 comments
comment by Isaac King (KingSupernova) · 2023-12-10T04:38:47.762Z · LW(p) · GW(p)
I don't have an answer for you, as this is also something I'm confused about. I felt bad seeing 0 answers here, so I just wanted to mention that I asked about this on Manifold and got some interesting discussion; see here:
Reply by mic (michael-chen) · 2023-12-16T03:31:40.934Z · LW(p) · GW(p)
Thanks for setting this up :)