Confidence intervals seem to be rarely useful, in and of themselves
post by anorangicc (anorangic) · 2022-02-05T11:23:16.132Z · LW · GW · 4 comments
Context: I read Eliezer Yudkowsky's post "Frequentist statistics are frequently subjective" [LW · GW] and it inspired me to get a better first-principles understanding of some statistical concepts. These are my personal thoughts on the topic. Don't believe me; if I had attached confidence intervals to the results of my investigation, they would be very broad!
Confidence intervals frequently cause confusion. For example, the U.S. National Institutes of Health (NIH) claim in their course on "Finding and using health statistics":
Confidence intervals are frequently reported in scientific literature and indicate how close research results are to reality, or how reliable they are, based on statistical theory. The confidence interval uses the sample to estimate the interval of probable values of the population; the parameters of the population.
For example, if a study is 95% reliable, with a confidence interval of 47-53, that means if researchers did the same study over and over and over again with samples of the whole population, they would get results between 47 and 53 exactly 95% of the time. The reliability in this example refers to the consistency of the measurement, or the ability to repeat it. Poor reliability can happen with a small population, or if the health event being studied does not happen often or at regular times.
(emphasis added)
This paragraph left me confused, but I believe the statement is wrong. To see why, let's first give a definition of a confidence interval.
Let's assume a situation where we want to determine some value q telling us something important about the world (global mean temperature, ratio of defective items in a sample of products, ...). So we design some experiment resulting in a vector X of measurable results of random variables. The distribution of X depends in an assumed-to-be-known way on q.
Does the outcome of the experiment, i.e. the realization of X instead of its distribution, tell us something about q? Not with certainty. However, we can use strategies that will often succeed in giving us true information about q.
When I say "confidence inverval" for inferred from , this isn't a canonically well-defined concept in and of itself. A method to assign confidence intervals is a function that maps each realization of X to an interval of real numbers (e.g., ) that satisfies the following property: For every theoretically possible value of and the corresponding assumed distribution of , we get , where is some probability called the confidence level of . (Note that is a fixed number and not a random variable!)
There can be many different methods to assign confidence intervals, and given a certain realization x of X and an interval I, there is almost always a way to construct a method C with C(x) = I (Eliezer Yudkowsky gives a humorous illustration of this fact, pointing out that there are even methods that sometimes return ["Cheesecake", "Cheddar"] and can still be counted as methods to assign confidence intervals, if I understand him correctly).
So, assume I want to create a start-up that sells food, but just one kind of food to achieve lower costs. To decide which food to ramp up production of, I run a study assessing what ratio q of U.S. citizens believe that sandwiches are tastier than tomato soup. I ask 100 people, selected uniformly at random, for their opinion. Depending on q, the number of sandwich fans among those 100 respondents is approximately Bin(100, q)-distributed. I use some method C to assign confidence intervals with confidence level 95%, and this method returns C(X).
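As a concrete illustration (the numbers are made up, and this is just one of many possible methods, not necessarily the one meant above), here is a sketch of a standard choice of C for this binomial setup, the Clopper-Pearson interval, together with a simulation of the defining coverage property:

```python
import numpy as np
from scipy.stats import beta

def clopper_pearson(count, n=100, level=0.95):
    """One concrete method C: the Clopper-Pearson ("exact") binomial interval."""
    alpha = 1 - level
    lo = 0.0 if count == 0 else beta.ppf(alpha / 2, count, n - count + 1)
    hi = 1.0 if count == n else beta.ppf(1 - alpha / 2, count + 1, n - count)
    return lo, hi

rng = np.random.default_rng(0)

# One hypothetical run of the survey with a made-up true q.
count = rng.binomial(100, 0.55)
print(count, clopper_pearson(count))

# The defining property: for EVERY possible q, P(q in C(X)) >= 95%.
for q in [0.05, 0.3, 0.5, 0.7, 0.95]:
    counts = rng.binomial(100, q, size=20_000)
    covered = np.mean([lo <= q <= hi for lo, hi in map(clopper_pearson, counts)])
    print(q, covered)   # each estimate should be about 0.95 or above
```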
What does this tell me?
Consider the case C(X) = [0.9, 1]. This seems to be strong (95%) confidence that people virtually always prefer sandwiches over tomato soup. I decide to invest all my money into sandwich production pipelines.
Now consider the case of a middling interval, say C(X) = [0.4, 0.6]. Unfortunately, this confidence interval doesn't seem to tell me enough to justify any investment, so I discard my start-up idea.
If there's high confidence that people prefer either sandwiches or tomato soup, I go ahead with the start-up; otherwise I do nothing. This algorithm seems fairly reasonable, doesn't it?
Now, it can happen that C has a strange property: high effect sizes, as expressed by C(X) = [0.9, 1] and so on, are only ever attained when q lies in [0.4, 0.6] (as long as this doesn't happen too often, it's totally possible). In this case, whenever I decide to invest into the start-up, the American population is in fact roughly evenly divided between sandwiches and tomato soup and my start-up will fail. So the expected profit is negative!
What happened? Well, by definition, the method of attaching confidence intervals only guarantees that it fails to assign a correct interval at most one out of twenty times. These 5% of cases cause serious costs. The remaining 95% of cases where the confidence interval is correct don't really help, because in these cases the interval isn't clearly in favor of either a low or a high value of q, causing me to refrain from investing and neither lose nor win anything. (This shouldn't happen if the NIH's claim were true!)
Stated differently, the amount of evidence conveyed by a certain confidence interval can heavily depend on the choice of the method of attaching confidence intervals. In other words, a plain confidence interval is no evidence for any value of q, unless we know the (if you're a Bayesian: conditional) distribution of X dependent on q, because this would allow us to compute odds ratios. But then we wouldn't be using the definition of a confidence interval anymore.
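To illustrate the difference, here is a rough sketch of what computing such an odds ratio would look like under the binomial model, with a made-up poll outcome:

```python
from scipy.stats import binom

count, n = 93, 100   # hypothetical poll outcome

# Likelihood ratio: how much better does q = 0.95 explain this data than q = 0.5?
# This uses the distribution of X given q, which is exactly the information
# a bare confidence interval does not carry.
lr = binom.pmf(count, n, 0.95) / binom.pmf(count, n, 0.5)
print(lr)   # enormous: the data itself strongly favors q = 0.95 over q = 0.5
```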
What I conclude from this anomaly is (1) that confidence intervals don't err most of the time (e.g., only in 5% of experiments), and (2) that one has to be horribly cautious when inferring anything practical from them, since that means making one's own behavior dependent on the confidence intervals, which aren't evidence per se. However, confidence intervals are quite handy (and better than a point estimate!) from a practical perspective, and I haven't settled on a final conclusion about when they are appropriate.
I'd love to hear your opinions on this anomaly! I'd also appreciate your criticism if I overlooked something or made a mistake.
4 comments
comment by JBlack · 2022-02-10T01:53:56.088Z · LW(p) · GW(p)
The NIH explanation is terrible, which is not surprising. The concept of a confidence interval is one of the hardest topics to explain and to learn.
Firstly and most importantly, any confidence interval depends upon a model. If the model itself is incorrect, the interval means nothing. The parameter itself may also mean nothing, depending upon how badly the model fails.
You address much of the rest in your post, in particular their butchering of what a confidence interval means.
In practice, methods for assigning confidence intervals to data are usually monotonic, so that an increase in the true value for q monotonically shifts the distribution for the interval endpoints. This isn't a necessary property for confidence intervals, but it's a useful one and by design holds very often. This behaviour rules out the pathological behaviour in the post: if you get a lower endpoint of at least 0.9 for some q in [0.4, 0.6] with some probability p, then for all q > 0.6 the probability is not less than p. That is, with any such function when the population really does have a strong preference toward sandwiches, then your probability of detecting it will always be greater than for weaker or opposite preferences.
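Here is a rough sketch of that behaviour, using the standard Clopper-Pearson interval as an example of a monotone method (the specific numbers are illustrative):

```python
import numpy as np
from scipy.stats import beta, binom

n, alpha = 100, 0.05

def cp_lower(count):
    # Lower endpoint of the 95% Clopper-Pearson interval; it increases with the count.
    return 0.0 if count == 0 else beta.ppf(alpha / 2, count, n - count + 1)

# Smallest count whose interval already lies entirely above 0.9 ...
k_star = next(k for k in range(n + 1) if cp_lower(k) >= 0.9)

# ... so P(lower endpoint >= 0.9) = P(count >= k_star), which is monotone in q:
for q in [0.5, 0.6, 0.8, 0.9, 0.95]:
    print(q, 1 - binom.cdf(k_star - 1, n, q))
```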
So yes, if you choose a bad confidence interval function then you can get horrible results.
comment by Maxwell Peterson (maxwell-peterson) · 2022-02-06T22:15:11.179Z · LW(p) · GW(p)
It’s not clear to me whether you’re challenging the validity of uncertainty intervals in general, or just the specific definition given by the NIH. If we’re only talking about the NIH quoted definition, I agree that they make it sound like, in the 5% of times q falls outside the interval, q could be anything and all bets are off, which, if true, would make it dangerous to depend on the intervals.
But for uncertainty intervals derived from the basic Bayesian algorithms, e.g. finding the highest-density-interval of a sandwich-tomato Beta distribution, I don’t think that q is uniformly likely to be anything from 0 to 1 in the 5% of cases it falls outside the interval. If I have a 95% Beta interval from 0.9 - 1, in the cases where q falls outside the interval, it’s still more probable that q is 0.8 than that q is 0.2. So if this is what you mean, I don’t agree that basing practical inferences on them requires horrible caution.
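For a quick illustration with a hypothetical posterior concentrated near 1 (the counts are made up):

```python
from scipy.stats import beta

# Hypothetical posterior for the sandwich proportion: a uniform prior updated with
# 60 "sandwich" and 1 "tomato" answers gives Beta(61, 2).
post = beta(61, 2)
print(post.interval(0.95))            # roughly 0.91 to 1.0, i.e. an interval like 0.9 - 1
print(post.pdf(0.8), post.pdf(0.2))   # density at 0.8 vastly exceeds density at 0.2
```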
It’s also unclear to me when C has the property that intervals near 0 and 1 only occur when q is between 0.4 and 0.6. If an algorithm to produce intervals has this property, it sounds like a bad algorithm with one or more mistakes in it that no one should use. Using the word “confidence interval” to include both this bad algorithm, and better algorithms that avoid the bad property, then concluding that intervals in general are dangerous, is technically a true conclusion, but… not interesting, if that makes sense? Sort of like if someone said “a mathematician can make mistakes in calculations, thereby causing their results to have counterintuitive properties; therefore, be cautious when relying on math.” Totally true! But hearing this, I wouldn’t feel I had learned anything about math.
Do you have an example of the kind of interval calculation that would be more likely to produce intervals near 0 or 1 when q is between 0.4 and 0.6?
↑ comment by anorangicc (anorangic) · 2022-02-07T23:55:47.581Z · LW(p) · GW(p)
Thanks for your scrutiny :) (and sorry for the long-winded response...)
Let me try to clarify the bottom line of the post:
This post clarifies some subtle points about the ways in which confidence intervals are useful. As far as I understand the way a confidence interval is defined mathematically, the definition alone, without any further assumptions, does not give many guarantees. As a side note, the NIH claim seems to be just wrong (and is not what I suppose to be the standard definition the rest of the article is about), and there isn't any method of attaching confidence intervals that can live up to their claim.
It's not that we shouldn't use confidence intervals in any form. But when practical consequences are drawn conditional on a confidence interval, one has to be wary that there will be some error. In many situations, confidence intervals might be sufficiently "nice" that these errors are negligible and the conclusions still point in the right direction, but there will be some error, at least in how strong the evidence is regarded to be (unless you go beyond the bare definition and use the narrowness of the interval as an intuitive indicator of the strength of evidence, if that's possible with your given method of attaching confidence intervals; but then you aren't really using the fact that it's a confidence interval).
Here's an example of a maliciously constructed confidence interval for the scenario in the post. If more than, say, 90 or fewer than 10 people from the sample prefer sandwiches, output [0, 1] as the confidence interval. If exactly 50 people prefer sandwiches, output [0.9, 1]. Otherwise, output the interval centered at the mean of the sample with its width adjusted to account for the standard deviation. Note that it's rare for exactly 50 people to prefer sandwiches (a bound independent of q is 8%), so this trick doesn't worsen the confidence level of the interval too much. If one plans to only act upon clear-cut intervals such as [0.9, 1], one will almost always lose when these intervals occur (an exact 50:50 split is mostly obtained when q is near 0.5).
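Here is a rough sketch of that construction in code (the "otherwise" branch is filled in with an ordinary normal-approximation interval, so treat the details as illustrative):

```python
import numpy as np

def malicious_interval(count, n=100):
    """Sketch of the maliciously constructed method described above."""
    if count > 90 or count < 10:
        return (0.0, 1.0)                 # always correct, deliberately uninformative
    if count == 50:
        return (0.9, 1.0)                 # the "clear-cut" interval, emitted at the worst moment
    p_hat = count / n                     # otherwise: ordinary normal-approximation interval
    half = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)
    return (max(0.0, p_hat - half), min(1.0, p_hat + half))

rng = np.random.default_rng(1)
for q in [0.5, 0.55, 0.7, 0.95]:
    intervals = [malicious_interval(c) for c in rng.binomial(100, q, size=50_000)]
    coverage = np.mean([lo <= q <= hi for lo, hi in intervals])
    clear_cut = np.mean([lo >= 0.9 for lo, _ in intervals])
    print(q, round(coverage, 3), round(clear_cut, 3))
# Coverage stays roughly at the nominal level except near q = 0.5, where the exact-50
# trick costs at most ~8 points; yet the clear-cut interval [0.9, 1] essentially only
# ever appears when q is near 0.5.
```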
Will something similarly bad but less drastic happen in reality when the confidence interval method is not constructed in a malicious way? When it's only about rough estimates, probably not, but I don't know yet.
I should probably give the article a question as its title. The current title seems a bit too harsh and overshadows my conclusion that confidence intervals seem to be handy even though I don't yet understand when they are safe to use in practice. In view of the frequent use of confidence intervals in science (and their relevance for calibrated predictions), I'd like to understand how much I can infer from them in which situations. Do you know any good heuristics for this?
↑ comment by Maxwell Peterson (maxwell-peterson) · 2022-02-08T02:13:13.988Z · LW(p) · GW(p)
Gotcha - thanks for clarifying and providing the example - it helps!
Everything I know is from the Bayesian way of doing things, so I'm going to talk about uncertainty intervals, which I think are mostly the same as confidence intervals; the main difference, as far as I can tell, is philosophy. (People also call uncertainty intervals "credible intervals" or "credibility intervals".)
With regard to evaluating the dependability of a given interval, I think it's important to think about the underlying distribution the interval is being drawn over. I've drawn 3 examples in this image (labeled #A, #B, and #C). I think you're worried about situations like the third case, #C. In #C, when q doesn't fall in the interval, it probably is far from the interval, because the rest of the probability is concentrated in the left & right bounds of the range.
I'm gonna come out strong and say that this can never happen in the tomato-sandwich case, when you use the correct calculations to build the interval. The correct calculations are:
- Specify a Beta distribution, B(1, 1) as your prior. (The 1's can be other numbers; doesn't change my broader argument).
- Because the tomato-sandwich question is isomorphic to a coin flip, the data distribution is most naturally modeled as a Bernoulli. So treat your data as being drawn from a Bernoulli distribution.
- Then the posterior distribution is Beta(1 + # tomato, 1 + # sandwich). [Since the Beta and Bernoulli are conjugate, this is always the form of the posterior].
- Use either the equal-tails or highest-probability-density method to construct the interval.
Since the posterior distribution is a Beta, and a Beta with a few data points always has exactly one hump, #C won't happen.[1] So if you know a calculation was done correctly, and that it is modeling a Bernoulli[2] situation, you're safe - the risks of #C won't be there; a sketch of the recipe follows below. (You can play with different Beta distributions easily here to see that nothing like #C ever happens).
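A minimal sketch of that recipe, with made-up counts and the equal-tails interval:

```python
from scipy.stats import beta

prior_a, prior_b = 1, 1                 # Beta(1, 1) prior, i.e. uniform on [0, 1]
n_tomato, n_sandwich = 42, 58           # hypothetical survey counts

# Conjugate update: the posterior is Beta(1 + # tomato, 1 + # sandwich),
# here modeling q as the proportion of tomato-soup fans.
posterior = beta(prior_a + n_tomato, prior_b + n_sandwich)

# Equal-tails 95% interval; the posterior has a single hump, so nothing like #C occurs.
print(posterior.interval(0.95))
print(posterior.mean())
```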
Things are very often modeled as Gaussian (even things that are technically better-modeled as Beta), and for the Gaussian, it's the same: one hump, never looks like #C. The intervals here are also well-behaved.
If you're constructing intervals over the data distribution, then things get weird, yeah. But I don't think it makes sense to construct intervals over the data distribution; or at the very least, if you do, you are leaving behind some of the safety guarantees of Bayesian calculations like the above. It is hard to imagine what doing so would mean in the tomato-sandwich case: the data is a bunch of "Tomato" and a bunch of "Sandwich". There are four possible 'intervals' here (really they are sets): the one that contains only Tomato, and one that contains only Sandwich, the one that has both, and the one with neither. Other data distributions look more like probability distributions, but even there, going strictly off the data distribution, with no prior or posterior distributions anywhere... yeah, things could definitely get weird.
So maybe one heuristic is: beware of intervals constructed directly on the data distribution. I've done this sometimes (actually, often) when I'm lazy and things seem like they'll be fine, so this is definitely a thing people do! If someone says "we modeled this as a [Gaussian/Beta/Gamma/etc.]", then they probably have well-behaved calculations going on.
If the data distribution is bimodal, making a two-peaked distribution the appropriate posterior, and you use a Gaussian to model it, your conclusions will be way wrong, and your interval will have the kind of problems you're worried about. But there's no way to modify the interval-creation algorithm to identify the two modes from a Gaussian posterior; the problem was in choosing to model with a Gaussian in the first place. So I wouldn't blame the interval algorithm here.
On the other hand, if you do know your posterior is bimodal, model it appropriately, and obtain a two-peaked posterior... hm. I think both the equal-tailed and highest-probability-density intervals would be super-wide, since they would have to stretch over both peaks to get all the density. So this is OK too - your interval isn't useful, but it would be super-wide, so you'd notice. The real problem is #C, and for posteriors that look like #C, I think you're totally right - the interval can mislead someone badly, if all they know is the interval and assume it came from something that looks like #A or #B.
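For example, with a hypothetical two-humped posterior written as a Beta mixture, the equal-tailed interval ends up conspicuously wide:

```python
from scipy.stats import beta
from scipy.optimize import brentq

# Hypothetical #C-like posterior: an equal mixture of Beta(2, 30) and Beta(30, 2),
# i.e. two humps, one near 0 and one near 1.
mixture_cdf = lambda x: 0.5 * beta.cdf(x, 2, 30) + 0.5 * beta.cdf(x, 30, 2)
quantile = lambda p: brentq(lambda x: mixture_cdf(x) - p, 1e-9, 1 - 1e-9)

# The equal-tailed 95% interval has to straddle both humps, so it is very wide:
print(quantile(0.025), quantile(0.975))   # roughly (0.01, 0.99)
```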
Also, AFAIK, the Bayesian calculations for.. anything..? always result in a full posterior probability distribution. So you can always look at the distribution and check if it has some bad #C-like property! Once satisfied it doesn't, bang, make the interval. But like you say, this doesn't really help when reading intervals published by other people...