You are way more fallible than you think
post by Shmi (shminux) · 2021-11-25T05:52:50.036Z · LW · GW · 14 comments
Yes, you. (And me, of course, including everything I write here.)
Epistemic status: annoyed, a quick writeup.
TL;DR: And therefore you should be very suspicious of any personal probability estimates that are near or below your fallibility level: they are complete junk.
Most of us are very much uncalibrated. We give 80%+ chance of success/completion to the next project even after failing a bunch of similar ones before. Even those of us who are well calibrated are still bad at the margins, where the probability is low (or, equivalently, high). Events we give the odds of 1% to can happen with the frequency of 20%. Or 0.01%. Remember, you are an embedded agent, and you only have limited access to your own source code. Some examples:
- Pascal's mugging [? · GW]:
"Give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3^^^^3 people."
The usual resolutions are discussions of bounded utility and such, while the fallible-self approach advocated here implies that you, as an agent, are too noisy to make reliable calculations and shouldn't even try.
- Arguments of the sort "what if God parted the sky and appeared to you in person, would you believe then?"
Well, it helps to take a third-person perspective there. If someone told you it happened to them, what would be your reaction? Probably that they are either lying or delusional, not that they literally witnessed this momentous event. This fallible-other logic should apply equally to the self. So if you witness an event like that, the first thing to do is doubt your observations, and hopefully check yourself into the nearest psych ward to get evaluated for a psychotic episode.
- For a more controversial and less central example, consider strong and very strong longtermism [EA · GW]. Despite the arguments to the contrary [EA · GW], it is very hard to avoid privileging a few pet scenarios because of the availability heuristic and assigning non-negligible probabilities to them. In this neck of the woods it is the AGI takeover; in the prog community it might be climate change wiping out almost all of humanity; in some evangelical circles it can be hastening the rapture.
Now, to address the but-whatab-autists like myself, there are definitely cases where estimated tiny probabilities can be pretty accurate, like, say, playing the lottery, and it is worth calculating actions based on them. Even there you ought to be very skeptical that you have found a way to circumvent your own fallibility. There might also be hail-mary situations where the alternatives to relying on a low-probability high-payoff event are even worse, which is a premise of many a fiction story. But in general, it is always worth remembering that you are an unreliable sack of meat optimized for propagating your genes, not for logical thinking, and your odds of success in estimating small probabilities are generally lower than the estimates themselves.
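To make the "fallibility level" from the TL;DR slightly more concrete, here is a minimal sketch (the eps and p_if_wrong numbers are purely illustrative assumptions, not estimates of anything real): if there is some probability eps that your whole analysis of a situation is wrong, any stated probability much smaller than eps gets swamped by that error term.

```python
# Minimal sketch of the "fallibility level" idea; eps and p_if_wrong are
# illustrative assumptions, not measurements of anything.

def effective_probability(stated_p: float, eps: float, p_if_wrong: float = 0.5) -> float:
    """Mix the stated estimate with the chance (eps) that the whole analysis
    is wrong; p_if_wrong is a stand-in for 'who knows' in that case."""
    return (1 - eps) * stated_p + eps * p_if_wrong

for stated in (1e-2, 1e-4, 1e-6):
    print(f"stated {stated:g} -> effective {effective_probability(stated, eps=0.01):.4f}")

# The output converges to ~0.005 once stated_p drops below eps = 1%:
# at that point the number you produced reflects your own error rate,
# not the event you were trying to estimate.
```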
14 comments
comment by philh · 2021-12-01T07:17:38.156Z · LW(p) · GW(p)
It seems like you're framing this in terms of "extreme probabilities are unlikely to be accurate", but...
- You give an example of 80% probabilities being inaccurate.
- You use AGI risk as an example, for which I often see estimates around here like "50% by this date, 75% by that date", and I get the impression you meant it to apply to that sort of thing too.
- You can always make an extreme probability less extreme. Silly example: "99% chance of AGI tomorrow" becomes "49.5% chance of AGI tomorrow and I get heads on this coin toss".
I feel like this kind of thing needs to be about inputs, not outputs. "If you find yourself calculating a probability under these circumstances, be suspicious", not "if you find you calculated a probability of this level, be suspicious".
↑ comment by philh · 2021-12-01T12:55:12.251Z · LW(p) · GW(p)
Also... it seems like you're assuming this as background:
Most of us are very much uncalibrated. We give 80%+ chance of success/completion to the next project even after failing a bunch of similar ones before. Even those of us who are well calibrated are still bad at the margins, where the probability is low (or, equivalently, high). Events we give the odds of 1% to can happen with the frequency of 20%.
And the rest of the post riffs off that. (Like, your examples seem like not "here are examples to convince you you're uncalibrated" but "here are examples of how to deal with the fact that you're uncalibrated" or something.)
But, citation needed.
I'll grant the "most of us". I recall the studies mentioned in HPMOR, along the lines of "you ask people when they're 95% likely to finish, and only like a quarter finish by then. And you ask when they're 50% likely to finish, and it's statistically indistinguishable". I think to (reliably, reproducibly, robustly) get results like that, most of the people in those studies need to be poorly calibrated on the questions they're being asked.
But the "even those of us"? Given that the first two words of the post are "yes, you" - that is, you project extreme confidence that this applies to the reader... how do you know that the reader is bad at the margins even if they're well calibrated elsewhere?
Is this also true of superforecasters? Is it true of the sorts of people who say "man, I really don't know. I think I'd be comfortable buying an implied probability of 0.01% and selling an implied probability of 1%, I know that's a crazy big range but that's where I am"?
(This seems like the sort of extreme confidence that you warn about in this very post. I know you admit to being more fallible than you think, but...)
↑ comment by Shmi (shminux) · 2021-12-04T01:56:56.710Z · LW(p) · GW(p)
I agree that there are people who don't need this warning most of the time. Because they already double and triple check their estimates and are the first ones to admit to their fallibility. "Most of us" are habitually overconfident though. I also agree that the circumstances matter a lot, and some people in some circumstances can be accurate at 1% level, but most people in most circumstances aren't. I'm guessing that superforecasters would not even try to estimate anything at 1% level, realizing they cannot do it well enough. We are most fallible when we don't even realize we are calculating odds (there is a suitable HPMOR quote about that, too). Your example of giving a confidence interval or a range of probabilities is definitely an improvement over the usual Bayesian point estimates, but I don't see any easily accessible version of the Bayes formula for ranges, though admittedly I'm not looking hard enough. In general, thinking in terms of distributions, not point estimates, seems like it would be progress. Mathematicians and physicists do that already in a professional setting.
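As a sketch of the "distributions, not point estimates" direction (my own toy example, using a standard conjugate Beta-Binomial update rather than any special "Bayes formula for ranges"): keep the whole posterior distribution and report a credible interval instead of a single number.

```python
import random

# Toy example of reporting a posterior range instead of a point estimate,
# via a conjugate Beta-Binomial update (uniform Beta(1, 1) prior assumed).

def beta_credible_interval(successes, failures, prior_a=1.0, prior_b=1.0,
                           lo=0.025, hi=0.975, n_samples=100_000):
    """Posterior is Beta(prior_a + successes, prior_b + failures); approximate
    the (lo, hi) credible interval by sampling from it."""
    a, b = prior_a + successes, prior_b + failures
    samples = sorted(random.betavariate(a, b) for _ in range(n_samples))
    return samples[int(lo * n_samples)], samples[int(hi * n_samples)]

# Say you observed 0 "rare events" in 200 trials. Instead of declaring
# "the probability is 0" (or some suspiciously precise tiny number),
# the posterior honestly spans a range:
print(beta_credible_interval(successes=0, failures=200))
# roughly (0.0001, 0.018) -- a distribution summarized as a range, not a point.
```

Nothing here fixes miscalibration by itself, but it at least makes the uncertainty about the estimate explicit.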
comment by FeepingCreature · 2021-11-25T06:29:03.259Z · LW(p) · GW(p)
Common question: "Well, but what if God was real and actually appeared to you in flame and glory, wouldn't it be silly to not be convinced in that case?"
My answer: "I don't know, do you think my thought patterns are likely to be deployed in such an environment?"
↑ comment by Shmi (shminux) · 2021-11-25T09:19:38.525Z · LW(p) · GW(p)
That's another way to look at it. The usual implicit assumptions break down on the margins. Though, given the odds of this happening (once in a bush, at best, and the flame was not all that glorious), I would bet on hallucinations as a much likelier explanation. Happens to people quite often.
↑ comment by Boris Kashirin (boris-kashirin) · 2021-11-25T14:22:47.553Z · LW(p) · GW(p)
How about historical precedent? Gods did arrive on Spanish ships to South America.
comment by dkirmani · 2021-11-25T14:36:00.505Z · LW(p) · GW(p)
This is one of the main themes of Nassim Taleb's books. You can't really predict the future and you especially can't predict improbable things, so minimize left-tail risk, maximize your exposure to right-tail events, and hope for the best.
comment by Slider · 2021-11-26T01:58:18.615Z · LW(p) · GW(p)
Statements like "John thinks it's raining and it is not raining" and "I think it is raining and it is not raining" are not always exchangeable. Would you literally not believe your eyes if you saw something you could not explain? I also think it is easy to believe that even a third-person subject had the experience, but whether it maps onto anything or is just a disconnected doodle is up for grabs.
↑ comment by Ericf · 2021-11-26T02:26:52.157Z · LW(p) · GW(p)
Regarding disbelief of your own senses: As I've commented before, winners of mega lotteries routinely triple-check their tickets, get third-party verification, and even while sitting in their new mansion will express their disbelief at winning.
comment by Gunnar_Zarncke · 2021-11-25T08:48:22.146Z · LW(p) · GW(p)
Can you add some more explanation of what one's "fallibility level" might be?
↑ comment by Shmi (shminux) · 2021-11-25T09:22:23.710Z · LW(p) · GW(p)
It's a good question. If you ever do, say, project estimates at work and look back at your track record: most of us would give 99% odds of completing some project within a given time (well padded to make it that high), and still notice that we go over time and/or over budget way more often than that. There are exceptions, but in general we suck at taking long tails into account.
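For what it's worth, here is a minimal sketch of that track-record check, with completely made-up numbers (the 99%-stated vs. 70%-realized gap below is hypothetical): bucket past predictions by the confidence you stated at the time and compare it to the realized frequency.

```python
from collections import defaultdict

# Sketch of a track-record calibration check; the track_record data below is
# made up purely for illustration.

def calibration_table(predictions):
    """predictions: iterable of (stated_probability, actually_happened) pairs."""
    buckets = defaultdict(list)
    for stated_p, outcome in predictions:
        buckets[round(stated_p, 1)].append(outcome)   # crude 10%-wide buckets
    for stated in sorted(buckets):
        outcomes = buckets[stated]
        realized = sum(outcomes) / len(outcomes)
        print(f"stated ~{stated:.0%}: realized {realized:.0%} over {len(outcomes)} predictions")

# Hypothetical history: "99% sure this ships on time" said 20 times,
# but only 14 of those actually shipped on time.
track_record = [(0.99, True)] * 14 + [(0.99, False)] * 6
calibration_table(track_record)   # prints: stated ~100%: realized 70% over 20 predictions
```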
comment by TurnTrout · 2021-11-25T06:47:54.334Z · LW(p) · GW(p)
This fallible-other logic should apply equally to the self.
Why?
↑ comment by Shmi (shminux) · 2021-11-25T09:16:59.540Z · LW(p) · GW(p)
Uh... because belief feels like truth from the inside, and so you cannot trust the inside view unless you are extremely well calibrated on tiny probabilities? So all you are left with is the outside view. If that is what you are asking.
↑ comment by TurnTrout · 2021-11-25T18:26:13.092Z · LW(p) · GW(p)
I think you should distrust the other person for other reasons. They may have ulterior motives, for example—as you wrote, they may be lying. There are many social reasons for them to make claims like that. Whatever lies we tell ourselves, they are usually not conscious and explicit.
Which is why I found it strange that you stated this claim without further explanation.