If there was one element of statistical literacy that you could magically implant in every head, what would it be?
post by enfascination · 2016-02-22T19:53:47.523Z · LW · GW · Legacy · 67 comments
Alternatively, what single concept from statistics would most improve people's interpretations of popular news and daily life events?
Comments sorted by top scores.
comment by Raiden · 2016-02-22T21:22:33.129Z · LW(p) · GW(p)
The idea of ALL beliefs being probabilities on a continuum, not just belief vs disbelief.
Replies from: James_Miller, MrMind
↑ comment by James_Miller · 2016-02-23T22:58:04.806Z · LW(p) · GW(p)
Doesn't the word "ALL" make your statement self-contradictory?
Replies from: RowanE, Raiden, fubarobfusco
↑ comment by Raiden · 2016-02-24T21:16:40.809Z · LW(p) · GW(p)
My statement itself isn't something I believe with certainty, but adding that qualifier to everything I say would be a pointless hassle, especially for things I believe with near enough certainty that my mind treats them as certain. The "ALL" is itself part of a statement I hold with near certainty, not a claim that the statement is certain. Sorry I didn't make that clearer.
Replies from: James_Miller
↑ comment by James_Miller · 2016-02-24T22:11:38.226Z · LW(p) · GW(p)
OK, and that's appropriate when writing on LW. But I wonder if part of the reason most people don't think of beliefs as probabilities on a continuum is that even statistically literate people don't usually bother qualifying statements that, taken literally, would mean they held some belief with probability 1.
↑ comment by fubarobfusco · 2016-02-23T23:49:58.211Z · LW(p) · GW(p)
No, it just makes it something other than a belief: an axiom, a game-rule, a definition, a tautology, etc.
Replies from: James_Miller
↑ comment by James_Miller · 2016-02-24T02:56:48.712Z · LW(p) · GW(p)
It's a belief about beliefs.
Replies from: entirelyuseless
↑ comment by entirelyuseless · 2016-02-24T05:49:48.730Z · LW(p) · GW(p)
That's true, but it's hard to see why that makes it a contradiction. There is a contradiction if you say that all beliefs have a specific mathematical probability of less than one (including, e.g., that 1+1=2), since probability theory assigns probability 1 to mathematical claims. But probability theory isn't supposed to be an exact representation of human beliefs in the first place; it is a formalized and idealized representation. In reality we are not always completely certain even of mathematical truths, and this does not create a contradiction, because this uncertainty, considered in itself, is not something mathematical.
You could say in the same way that all beliefs are uncertain, including this one, without any contradiction, just as it is not a contradiction to say that all sentences are made of words, including this one.
Replies from: James_Miller
↑ comment by James_Miller · 2016-02-24T17:22:32.409Z · LW(p) · GW(p)
I interpreted the statement as basically "I am CERTAIN that you can never be certain of anything." I almost didn't post a response because I thought the author might have been being deliberately sarcastic.
comment by enfascination · 2016-02-23T05:52:30.229Z · LW(p) · GW(p)
I think a lot about signal detection theory, and I think that's still the best I can come up with for this question. There are false positives, there are false negatives, they are both important to keep in mind, the cost of reducing one is an increase in the other, humans and human systems will always have both.
So, for example, even the most over-generous public welfare system will turn away some deserving people, and even the stingiest system will have undeserving recipients (by whatever definition). The question for a welfare system, then, isn't "how do we prevent abuse?" but "how many abusers are we willing to tolerate for every 100 deserving recipients we reject?" The same framing is useful in lots of medical discussions, legal discussions, pop science discussions, etc.
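Here is a minimal sketch of that tradeoff, with made-up score distributions standing in for whatever signal a real screening system would use:

```python
import random

random.seed(0)
# Hypothetical "evidence scores": deserving applicants score higher on
# average, but the populations overlap, so no threshold separates them cleanly.
deserving = [random.gauss(1.0, 1.0) for _ in range(100_000)]
undeserving = [random.gauss(0.0, 1.0) for _ in range(100_000)]

for threshold in (-1.0, 0.0, 0.5, 1.0, 2.0):
    false_neg = sum(s < threshold for s in deserving) / len(deserving)
    false_pos = sum(s >= threshold for s in undeserving) / len(undeserving)
    print(f"threshold {threshold:+.1f}: deserving rejected {false_neg:.1%}, "
          f"undeserving accepted {false_pos:.1%}")
```

Moving the threshold in either direction only trades one error for the other; the choice of mix is a policy question, not a statistical one.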
comment by HungryHobo · 2016-02-23T11:08:35.931Z · LW(p) · GW(p)
The very basics of probability. I'm talking to the level of "there is about a 1 in 6 chance of a reasonably fair die coming up 3 on a single roll".
I remember a friend telling me about a game some of his classmates played which was basically about calling high/low on the next card dealt.
He'd made a modest, steady income simply by calling based on whether the current card was above or below 7, at least for the first few cards, and he was known as "lucky". They honestly couldn't comprehend something as simple as that.
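A quick sketch of why that rule works (my assumptions: ranks 2 through ace drawn uniformly, guess whichever direction covers more ranks, a tie in rank counts as a loss):

```python
from fractions import Fraction

ranks = range(2, 15)  # 2..10, J=11, Q=12, K=13, A=14
win = Fraction(0)
for current in ranks:
    higher = sum(1 for r in ranks if r > current)
    lower = sum(1 for r in ranks if r < current)
    # call "high" below the midpoint and "low" above it
    win += Fraction(max(higher, lower), len(ranks))

print(float(win / len(ranks)))  # ~0.71, far better than a coin flip
```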
Replies from: Simon79, Brillyant
↑ comment by Simon79 · 2016-02-25T15:54:13.125Z · LW(p) · GW(p)
Absolutely. Not to mention all the "after a string of reds, black is more likely" people... and there are a lot of them out there.
Replies from: Lumifer
↑ comment by Brillyant · 2016-02-23T16:25:33.465Z · LW(p) · GW(p)
This reminds me of something I've heard in regard to fixed games in sports.
People have this idea that fixed games are unlikely because it's too big a conspiracy to not be found out. It would be obvious that one team was throwing the game, or that a referee was being unfair.
However, corruption in sports can be pretty simple and hard to notice. For instance, in a basketball game, an official could make the over-under more likely to pay out the over bet just by calling ~10% more fouls in any given game. That could mean blowing the whistle just 5-7 extra times in a 48-minute game, giving the teams extra free throws, which are high-probability opportunities for extra points. Since foul calls in basketball are very subjective, this method of corruption would be very difficult to detect.
More importantly to this discussion, the type of game fixing described above need not be guaranteed to cause the desired outcome in any given game. In fact, it's better for the scheme to be very subtle over the course of many games so as to avoid detection.
If you wager enough money, pushing the probability in your favor by just a few percentage points is statistically quite lucrative: $1M per game x 82 games in a season x 30 teams, at a 52% or 53% probability of winning.
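Plugging in those numbers, a back-of-the-envelope sketch (the even-money payout is my simplifying assumption):

```python
stake = 1_000_000   # wagered per game
games = 82          # one team's season

for p_win in (0.52, 0.53):
    # expected profit per even-money bet: win with probability p, lose otherwise
    edge_per_game = stake * (2 * p_win - 1)
    print(f"p(win) = {p_win:.0%}: expected season profit = "
          f"${edge_per_game * games:,.0f}")
```

A two- or three-point edge, invisible in any single game, compounds into millions over a season.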
Replies from: CronoDAS
comment by HungryHobo · 2016-02-23T11:00:06.769Z · LW(p) · GW(p)
That the statement "X causes Y" is almost meaningless without knowing how much.
Bread gives you cancer? Really? OH MY GOD! (small print, only 1 in a [huge number] chance)
But most people seem to have only three levels of belief about things: "X does", "X does not", and "X maybe does", which they round to 1, 0, and 0.5 respectively.
You'll find yourself having conversations with these people along the lines of:
Nutter:"You shouldn't let children do that! it causes cancer!"
You:"there's less than one in a million chance assuming they do this every day of their lives"
Nutter:"SEE! IT CAUSES CANCER! I KNEW IT! YOU MONSTER "
Most people are depressingly thick.
comment by Brillyant · 2016-02-22T20:22:20.461Z · LW(p) · GW(p)
The Law of Truly Large Numbers, and that "one in a million" experiences are actually super common in a world with 7B+ people. I have a background in the sort of Christianity that emphasizes the reality of miracles and apparently unexplained phenomena, so this would likely help soothe that annoyance.
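A back-of-the-envelope sketch of that point (one independent "trial" per person per day is my assumed rate):

```python
population = 7_000_000_000
p_event = 1e-6  # a literal one-in-a-million daily experience

per_day = population * p_event
print(f"expected occurrences per day:  {per_day:,.0f}")        # ~7,000
print(f"expected occurrences per year: {per_day * 365:,.0f}")  # ~2,555,000
```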
comment by fubarobfusco · 2016-02-24T00:00:32.105Z · LW(p) · GW(p)
Noise happens. Even if X is predictive of Y, it's rarely perfectly predictive.
For instance, suppose that 1000 students take a math test, then take a different math test that covers the same material with different problems. It is highly likely that their rankings on the two tests will be strongly correlated. It is highly unlikely that their rankings on the two tests will be exactly the same.
And it is quite possible that a few students will do vastly better on one test than the other, due to things that have nothing particularly to do with their mathematical ability. If you give a math test to a sufficiently large student population, then some student's boyfriend will have gotten hit by a car on the morning of the math test. That will probably mess with their scores.
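A small simulation of that point (the ability and noise distributions are assumed purely for illustration):

```python
import random

random.seed(0)
n = 1000
ability = [random.gauss(0, 1) for _ in range(n)]
test1 = [a + random.gauss(0, 0.5) for a in ability]  # score = ability + noise
test2 = [a + random.gauss(0, 0.5) for a in ability]

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = rank
    return r

r1, r2 = ranks(test1), ranks(test2)
mean = (n - 1) / 2
cov = sum((a - mean) * (b - mean) for a, b in zip(r1, r2)) / n
var = sum((a - mean) ** 2 for a in r1) / n
print(f"rank correlation: {cov / var:.3f}")  # strong, but below 1
print(f"students with identical ranks: {sum(a == b for a, b in zip(r1, r2))}")
```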
comment by HungryHobo · 2016-02-25T18:09:14.506Z · LW(p) · GW(p)
That when you've already picked someone or something out of the general population based on a particular property, you cannot then use that same property as evidence in your conclusions about them.
If you use a DNA matching technique with a one in a million chance of a false positive to pick your suspect out of a large database of people, you cannot then use that "one in a million chance" as part of the evidence against them. Yet courts absolutely would. (Doing it the other way round, selecting one person first and then running the test, is perfectly reasonable.)
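A sketch of the arithmetic, with a hypothetical database size:

```python
p_false_match = 1e-6        # the test's false-positive rate
database_size = 5_000_000   # hypothetical DNA database

expected_innocent_hits = p_false_match * database_size
print(f"expected false matches from trawling: {expected_innocent_hits:.0f}")
# With ~5 innocent matches expected, "the test only errs one time in a
# million" wildly overstates the case against whoever the trawl turned up.
```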
comment by DanArmak · 2016-02-23T15:06:04.222Z · LW(p) · GW(p)
95% of all statistics are made up. It's very easy to make up data or to confuse people with bad statistical treatment, but most science reporters and news media don't bother; they just honestly misunderstand the source data instead. If you can't check the statistical technique in detail yourself, and you don't very highly trust the source to have done so (hint: news media are almost never trustworthy), you should treat any statistical claim as basically uncorrelated with reality.
Replies from: cousin_it
comment by Galap · 2016-02-23T09:52:47.964Z · LW(p) · GW(p)
This may not be strictly statistical, but I would choose the idea that in order to make any meaningful statement with data, you always have to have something to compare it to.
Someone will always come into some political thread and say, "X will increase/decrease Y by Z%." And my first thought in response is always, "Is that a lot?"
For a recent example I saw, someone showed a graph of Japanese student suicides as a function of day of the year. There were pretty high spikes (about double the baseline value) on the days corresponding to the first day of each school semester. The poster was attributing this to Japanese school bullying and other problems with Japan's school system.
My first thought was, "Wait. Show me that graph for other countries. For the world, if such data has been reliably gathered." If it looks the same, it's not a uniquely Japanese problem. What if it's even worse in other countries?
Yeah, I'd really like to see people stop using information where it doesn't mean anything in isolation. A lot of people think that controls in science exist to make sure that the effects you see aren't spurious or adventitious. It's not like that's wrong, but it's deeper and even more fundamental than that.
I'm a scientist, so let me give you an example from my research (grossly simplified and generalized for brevity).
Substance A was designed to manifest an as-yet-unexplored type of structural situation. We then carried out a reaction on substance A to see what some of the effects of this situation are. Something happened.
So, if we were to leave it at that, what would we have learned? Nothing. We need substance B, which does not have that situation going on but is otherwise as similar to A as we can make it, to see what IT does, and whether it does anything different from A. See, we need to do the experiments on both B and A not just to see whether the results for A are "real". We need to do it to see what the results even ARE in the first place.
comment by PhilGoetz · 2016-02-26T06:58:48.387Z · LW(p) · GW(p)
I'd give people the ability to do multiple regressions in their head. Because I want to be able to do multiple regression in my head.
Replies from: Anders_H↑ comment by Anders_H · 2016-02-26T07:40:28.484Z · LW(p) · GW(p)
Why do you want to be able to do that? Do you mean that you want to be able to look at a spreadsheet and move the numbers around in your head until you know what the parameter estimates are? If you have access to a statistical software package, this would not give you the ability to do anything you couldn't have done otherwise. However, that is obvious, so I am going to assume you are more interested in grokking some part of the underlying epistemic process. But if that is indeed your goal, the ability to do the parameter estimation in your head seems like a very low priority, almost more of a party trick than actually useful.
Replies from: PhilGoetz↑ comment by PhilGoetz · 2016-02-26T08:16:52.217Z · LW(p) · GW(p)
I think it would be very useful. I have access to software packages, but it takes effort to gather data, type it in, etc. If I could do it in my head -- my mind keeping track of observations and updating the parameters as I go through life, for all sorts of questions (does it look like rain today? how energetic do I feel today?) -- I'd be building accurate models of everything important in my life. It would be a different level of rationality.
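For what it's worth, the "update as you go" part is mechanical. Here is a minimal sketch of an online simple (one-variable) regression that keeps running sums, so each observation updates the fit without storing the data; the multiple-regression version replaces the sums with a small matrix update, but the idea is the same. The example inputs are hypothetical.

```python
class OnlineRegression:
    """Least-squares fit of y = slope * x + intercept, updated incrementally."""

    def __init__(self):
        self.n = self.sx = self.sy = self.sxx = self.sxy = 0.0

    def update(self, x, y):
        self.n += 1
        self.sx += x
        self.sy += y
        self.sxx += x * x
        self.sxy += x * y

    def params(self):
        denom = self.n * self.sxx - self.sx ** 2
        slope = (self.n * self.sxy - self.sx * self.sy) / denom
        intercept = (self.sy - slope * self.sx) / self.n
        return slope, intercept

# e.g. morning humidity (x) vs. how much it rained (y) -- made-up numbers
model = OnlineRegression()
for x, y in [(0.3, 0.0), (0.8, 1.0), (0.6, 0.4), (0.9, 1.0)]:
    model.update(x, y)
print(model.params())
```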
comment by buybuydandavis · 2016-02-23T09:02:06.190Z · LW(p) · GW(p)
All probabilities are Bayesian, i.e., conditioned on some information I.
comment by Strangeattractor · 2016-02-25T21:55:39.431Z · LW(p) · GW(p)
I would explain about blocking, how people can be matched up by profession, socio-economic status, smoker or non-smoker, and various other traits, to make comparisons where those factors are assumed to be equal.
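A toy sketch of that matching, with rows made up purely for illustration:

```python
from collections import defaultdict

# (profession, smoker?, group, outcome) -- fictional subjects
subjects = [
    ("teacher", True,  "treated", 4.1), ("teacher", True,  "control", 3.6),
    ("teacher", False, "treated", 5.0), ("teacher", False, "control", 4.8),
    ("welder",  True,  "treated", 3.2), ("welder",  True,  "control", 2.9),
]

blocks = defaultdict(lambda: {"treated": [], "control": []})
for profession, smoker, group, outcome in subjects:
    blocks[(profession, smoker)][group].append(outcome)

# Compare treated vs. control only within each block, so profession and
# smoking status are held equal by construction.
for block, groups in blocks.items():
    diff = (sum(groups["treated"]) / len(groups["treated"])
            - sum(groups["control"]) / len(groups["control"]))
    print(block, f"treated - control = {diff:+.2f}")
```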
comment by James_Miller · 2016-02-22T22:47:14.797Z · LW(p) · GW(p)
Generalized method of moments because "implanting" it in every head would require greatly increasing the intelligence of most of mankind.
comment by kithpendragon · 2016-02-22T20:54:04.070Z · LW(p) · GW(p)
I'd like to see people have a clue what a probability actually is. I'm tired of hearing how the weather forecast was "wrong".
Replies from: Lumifer, ChristianKl↑ comment by Lumifer · 2016-02-22T22:16:01.151Z · LW(p) · GW(p)
I'd like to see people have a clue what a probability actually is.
Heh. It isn't that simple.
What precisely does "There is a 70% chance of rain tomorrow" mean?
Replies from: CronoDAS, philh, mkf, DanArmak, kithpendragon
↑ comment by CronoDAS · 2016-02-24T22:25:03.846Z · LW(p) · GW(p)
What precisely does "There is a 70% chance of rain tomorrow" mean?
If you offered me the choice of two "lottery tickets", one of which paid $30 if it rained tomorrow and one of which paid $70 if it didn't, I wouldn't care which one I took.
Replies from: gjm, Lumifer
↑ comment by gjm · 2016-02-25T00:14:43.166Z · LW(p) · GW(p)
That surely can't be the right general answer, because the relationship between your attitude to getting $30 and to getting $70 will depend on your wealth now. (And also on your tolerance for risk, but you might be willing to argue that risk aversion is always irrational except in so far as it derives from diminishing marginal utility.)
You could switch from dollars to utilons, but then I think you have a different problem: we don't have direct access to our utility functions, and I think the best techniques for figuring them out depend on probability, which is going to lead to trouble if probabilities are defined in terms of utilities.
↑ comment by Lumifer · 2016-02-25T02:44:41.371Z · LW(p) · GW(p)
My question was about the probability of rain, not about what you would be willing to bet on. Besides, who's that "me", a perfect rational Homo Economicus or a real person? Offering bets to an idealized concept seems like an iffy idea :-)
↑ comment by philh · 2016-02-23T11:12:29.954Z · LW(p) · GW(p)
"Probability of appreciable rainfall" * "fraction of specified area which will receive it" is 0.7.
Or, I guess more properly it should be an integral over possible rainfall patterns. But "70% of London will definitely see lots of rain, and 30% will see none" and "we have 70% credence that all of London will see lots of rain, and 30% credence that no rain will fall in London" would both be reported as 70% chance of rain in London.
https://en.wikipedia.org/wiki/Probability_of_precipitation
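Per that article, the reported figure is PoP = C × A: the forecaster's confidence that precipitation occurs somewhere in the area, times the expected fraction of the area affected. That is why both readings above collapse to the same number:

```python
def probability_of_precipitation(confidence, area_fraction):
    return confidence * area_fraction

print(probability_of_precipitation(1.0, 0.7))  # certain rain over 70% of the area
print(probability_of_precipitation(0.7, 1.0))  # 70% credence of area-wide rain
```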
Replies from: Lumifer, ChristianKl
↑ comment by ChristianKl · 2016-02-23T11:30:49.907Z · LW(p) · GW(p)
You just replaced the word "probability" with "credence"/"chance" without explaining what's meant by it on a more basic level. The people you complain don't know what probability means also won't know what credence means.
Replies from: philh
↑ comment by philh · 2016-02-23T12:56:50.342Z · LW(p) · GW(p)
I was talking about weather forecasts, not trying to explain probability.
Replies from: ChristianKl
↑ comment by ChristianKl · 2016-02-23T16:16:14.881Z · LW(p) · GW(p)
I think you then evaded the question Lumifer asked. The original post stated: "I'd like to see people have a clue what a probability actually is."
Then Lumifer asked what it actually is. Explaining weather forecasts is beside the main point.
↑ comment by philh · 2016-02-23T16:58:48.191Z · LW(p) · GW(p)
Yes, I wasn't answering the question as intended. But both kithpendragon and Lumifer were talking about the weather forecast, and it does seem at least vaguely relevant that even if you know exactly what probability is, that's not sufficient to understand "70% chance of rain".
Replies from: ChristianKl
↑ comment by ChristianKl · 2016-02-23T17:08:00.217Z · LW(p) · GW(p)
Okay, I might have been too harsh.
↑ comment by DanArmak · 2016-02-23T15:02:45.296Z · LW(p) · GW(p)
Naively, I would expect it to mean that if you take sufficiently many predictions (i.e. there's one made every day), and you group them by predicted chance (70%, 80%, etc. at e.g. 10% granularity), then in each bin, the proportion of correct predictions should match the bin's assigned chance (e.g. between 75% and 85% for the 80% bin). And so given enough predictions, your expected probability for a single prediction coming true should approach the predicted chance. With more predictions, you can make smaller bins (to within 1%, etc).
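A sketch of that binning check, with a made-up forecast history:

```python
from collections import defaultdict

# (stated chance of rain, did it actually rain?) -- hypothetical records
history = [(0.7, True), (0.7, True), (0.7, False), (0.8, True),
           (0.8, True), (0.8, True), (0.8, False), (0.3, False)]

bins = defaultdict(list)
for stated, rained in history:
    bins[round(stated, 1)].append(rained)  # 10% granularity

for stated in sorted(bins):
    outcomes = bins[stated]
    observed = sum(outcomes) / len(outcomes)
    print(f"stated {stated:.0%}: rained {observed:.0%} of {len(outcomes)} days")
```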
Replies from: Lumifer
↑ comment by Lumifer · 2016-02-23T15:58:06.311Z · LW(p) · GW(p)
So you're taking the frequentist approach: the probability is the fraction of times the event happened as n goes to infinity? But tomorrow is unique. It will never repeat -- n is always equal to 1.
And, as mentioned in another reply, calibration and probability are different things.
Replies from: DanArmak
↑ comment by DanArmak · 2016-02-24T14:40:23.717Z · LW(p) · GW(p)
But tomorrow is unique. It will never repeat again -- n is always equal to 1.
The prediction is not unique. I group predictions (with some binning of similar-enough predictions), not days. Then if I've seen enough past predictions to be justified that they're well calibrated, I can use the predicted probability as my subjective probability (or a factor of it).
Replies from: Vaniver
↑ comment by Vaniver · 2016-02-24T14:58:29.701Z · LW(p) · GW(p)
The prediction is not unique.
The trouble with this approach is that it breaks down when we want to describe uncertain events that are unique. The question of who will win the 2016 presidential election is one that we still want to be able to describe with probabilities, even though it doesn't make great sense to aggregate probabilities across different presidential elections.
In order to explain what a single probability means, instead of what calibration means, you need to describe it as a measure of uncertainty. The three main 'correctness' questions then are 1) how well it corresponds to the actual future, 2) how well it corresponds to known clues at the time, and 3) how precisely I'm reporting it.
Replies from: DanArmak
↑ comment by DanArmak · 2016-02-24T15:40:53.333Z · LW(p) · GW(p)
That's correct: my approach doesn't generalize to unique/rare events. The 'naive' or frequentist approach seems to work for weather predictions, and creates a simple intuition that's easier IMO to explain to laymen than more general approaches.
Replies from: Lumifer
↑ comment by kithpendragon · 2016-02-23T01:13:16.506Z · LW(p) · GW(p)
What precisely does "There is a 70% chance of rain tomorrow" mean?
It means that the proportion of meteorological models that predict rain to those that don't is 7:3. Take an umbrella. ;)
Replies from: Lumifer
↑ comment by Lumifer · 2016-02-23T02:41:23.600Z · LW(p) · GW(p)
It means that the proportion of meteorological models that predict rain to those that don't is 7:3
Yeah, that's an old joke, except it's told about meteorologists and not models.
But the question of "what a probability actually is" stands. You are not going to argue that it's a ratio of model outcomes, are you?
Replies from: kithpendragon
↑ comment by kithpendragon · 2016-02-23T15:05:32.581Z · LW(p) · GW(p)
Perhaps I could have phrased the complaint better; I wasn't attempting to dive into the philosophical. The point was that the meteorologist is not "wrong" if it rains on a 30% chance, or if the high temperature is off by a couple of degrees. Meteorologists deal with a lot of uncertainty (which they don't always communicate to us effectively). People need to understand that a 30% chance of rain only means it likely won't rain (roughly 2:1 against). Still wouldn't hurt to take an umbrella.
As for the philosophical, I'd claim that a probability is a quantitative expression of predictive uncertainty within an informational system such as the human brain or, yes, a weather prediction model. Come to think of it, that might actually help people understand the weather report. I just don't trust my coworkers to be able to parse most of those words.
Replies from: Lumifer
↑ comment by Lumifer · 2016-02-23T15:59:47.822Z · LW(p) · GW(p)
The point was that the meteorologist is not "wrong" if it rains on a 30% chance
Well, is the forecast falsifiable, then? Can it be wrong? How would you know?
Probability is a quantitative expression of predictive uncertainty that exists within an informational system such as the human brain or, yes, weather prediction models.
So the probability exists purely in the map, but not in the territory? I am not sure quantum mechanics would agree.
Replies from: kithpendragon
↑ comment by kithpendragon · 2016-02-23T16:49:12.837Z · LW(p) · GW(p)
Is the forecast falsifiable, then? Can it be wrong? How would you know?
Same way you know whether other probabilistic prediction systems are "wrong": keep track of accurate and inaccurate predictions, weighted by confidence levels, and develop a model of the system's reliability. Unreliable systems are probably "wrong" in some way. Individual predictions that express extreme confidence in an outcome that is not observed are "wrong". But I cannot recall having reason to accuse any meteorologist of either error. (Full disclosure: I don't care enough to keep detailed records.)
I would also point out that the audience adds another level down the predictive rabbit hole. Weather forecasts usually predict for a large area. I've observed that weather can be significantly different between Hershey and Harrisburg in Pennsylvania. The two are less than a half-hour apart, and usually have identical forecast conditions. This further confounds the issue by adding the question of who is included in that 30% chance of rain. You could interpret it to mean a high degree of confidence that 30% of the forecast area will see rain. I have not seen an interview with a meteorologist that addressed that particular wrinkle.
So the probability exists purely in the map, but not in the territory? I am not sure quantum mechanics would agree.
Can't speak on quantum mechanics with much authority, but my suspicion is that there's something going on that we haven't yet learned to predict (or maybe don't have direct access to) on a quantum level. I seem to remember that quantum physics predicts more than [3 space + 1 time] dimensions. Since I don't appear to have access to these "extra" dimensions, it seems intuitive that I would be as ineffective at predicting events within them as Flatlanders would be at predicting a game of pool as seen from a single slice perpendicular to the table. They might be able to state a likelihood that (for example) the red circle would appear between times T1 and T2 and between points P1 and P2, but without a view of the plane parallel to the table and intersecting with the balls they would really only be making an educated guess. The uncertainty exists in my mind (as limited by my view), not in the game. I suspect something similar is likely true of Physics, though I'm aware that there are plenty of other theories competing with that one. The fact of multiple competing theories is, in itself, evidence that we are missing some important piece of information.
I expect time will tell.
Replies from: Lumifer
↑ comment by Lumifer · 2016-02-23T17:55:48.145Z · LW(p) · GW(p)
Same way you know if other probabilistic prediction systems are "wrong"
I asked about a single forecast, not about a prediction system (for which, of course, it's possible to come up with various metrics of accuracy, etc.). Can the forecast of 70% chance of rain tomorrow be wrong, without the quotes? How could you tell without access to the underlying forecasting system?
but my suspicion is that there's something going on that we haven't yet learned to predict
So your position is that reality is entirely deterministic, there is no "probability" at all in the territory?
Replies from: kithpendragon
↑ comment by kithpendragon · 2016-02-23T21:14:04.047Z · LW(p) · GW(p)
So your position is that reality is entirely deterministic, there is no "probability" at all in the territory?
I feel that is most likely, yes.
↑ comment by ChristianKl · 2016-02-22T22:48:21.518Z · LW(p) · GW(p)
Unfortunately my weather forecast doesn't tell me it will be between 10 and 15 degrees with 80% probability; it tells me the forecast for tomorrow is 12 degrees. As such it makes more sense to say it was wrong.
Replies from: kithpendragon
↑ comment by kithpendragon · 2016-02-23T01:22:10.016Z · LW(p) · GW(p)
Certainly it is easier to say it was wrong. Meteorologists actually do see the error bars &c., then they dumb it down so most people can grasp what they're saying. I understand there is ongoing discussion as to what kind of balance is appropriate between being precise and being understandable. Unfortunately, status quo bias seems to be dictating the outcome of that discussion, and much of the information in meteorological models is never provided to the general public as a result.
Replies from: ChristianKl
↑ comment by ChristianKl · 2016-02-23T11:20:39.961Z · LW(p) · GW(p)
I think most people would be perfectly able to understand "the temperature is going to be between 10 and 15 degrees" instead of "the temperature is going to be 12 degrees".
Then the meteorologist can use whatever probability he considers appropriate.
Unfortunately, status quo bias seems to be dictating the outcome of that discussion
Yes, and the status quo is wrong. It makes sense to say it's wrong. People in charge really do screw up by staying with the status quo. Making excuses for it doesn't help.
That's especially true today, when I get my weather information from Google or from Windows. In both cases it would be easy to provide an interface that lets me see proper statistics about the weather.
Google knows a lot about me. It could even guess that I want proper statistics.
Replies from: kithpendragon
↑ comment by kithpendragon · 2016-02-23T15:19:05.984Z · LW(p) · GW(p)
The status quo is certainly wrong when it comes to the presentation of weather-related data. The report is badly oversimplified, due to several effects including the (over)estimated gap in statistical understanding between meteorologists and the general public.
A 30% chance of precipitation is not, however, "wrong" if it does in fact rain. It merely expresses a fairly high degree of uncertainty in the claim "it will/won't rain today". The claim that such a report means the meteorologist was wrong (or somehow lying) is the subject of my complaint, not the format of the report itself (which I agree is abysmally deficient).
Replies from: ChristianKl
↑ comment by ChristianKl · 2016-02-23T16:18:47.746Z · LW(p) · GW(p)
Do you think "I was just dumbing things down" is generally a valid excuse when people state that you are making wrong statements?
I think lying includes an attempt at deception, which I agree isn't there on the part of meteorologists.
comment by Gunslinger (LessWrong1) · 2016-02-23T06:04:17.450Z · LW(p) · GW(p)
Wait a sec... didn't we have a thread like this some time ago?