What epsilon do you subtract from "certainty" in your own probability estimates?
post by Dagon · 2024-11-26T19:13:46.795Z · LW · GW · 3 comments
This is a question post.
Contents
  Answers
    14 Gordon Seidoh Worley
    5 AnthonyC
    1 Richard_Kennaway
  3 comments
Ok, nobody is actually a strict, or even particularly careful, Bayesian reasoner. Still, what probability do you reserve for "my model doesn't apply, everything I know is wrong"? If you SEE a coin flip come up heads (and examine the coin and perform whatever tests you like), what's your posterior probability that the coin actually exists and it wasn't a false memory or trick in some way?
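To make the question concrete, a quick illustration: whatever ε you reserve for "my model doesn't apply" acts as a ceiling on how certain within-model evidence can ever make you, at least when the failure mode (trick, false memory) would reproduce the same observations:

$$P(\text{genuine} \mid \text{obs}) = \frac{P(\text{obs} \mid \text{genuine})\,(1-\varepsilon)}{P(\text{obs} \mid \text{genuine})\,(1-\varepsilon) + P(\text{obs} \mid \text{trick})\,\varepsilon}$$

If the trick would produce exactly the observations you saw, the likelihoods cancel and the posterior is just 1 − ε: the tests don't move you, and everything rests on the ε you picked in advance.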
Answers
answer by Gordon Seidoh Worley

Back when I tried playing some calibration games, I found I was not able to get successfully calibrated above 95%. Past that point I start making errors from things like misinterpreting the question or randomly hitting the wrong button.
The math is not quite right on this, but from it I've adopted a personal 5% error margin policy: that seems, in practice, to be about the limit of my ability to make accurate predictions, and it's served me well.
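A minimal sketch of what that policy could look like if applied mechanically (the function and the clamping rule are just an illustration, using the 5% figure above):

```python
def apply_error_margin(p, margin=0.05):
    """Clamp a raw probability estimate into [margin, 1 - margin].

    The 5% margin reflects the observation above: past roughly 95%,
    errors like misreading the question or hitting the wrong button
    dominate, so reporting higher confidence isn't meaningful.
    """
    return min(max(p, margin), 1.0 - margin)

print(apply_error_margin(0.99))   # 0.95
print(apply_error_margin(0.002))  # 0.05
print(apply_error_margin(0.60))   # 0.6
```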
answer by AnthonyC

I don't.
Every probability estimate I make is implicitly contingent on a whole host of things I can't fully list or don't bother to specify because it's not worth the overhead. This is one of them. Somewhere in my head, implicitly or explicitly, is a world model that includes the rest of those assumptions, including ones I'm not aware of. I do not know whether the set of assumptions this implies is finite. I know false memories and hallucinations and tricks and so on exist, but unless I already have specific reason to expect such, I reason without keeping track. When I say P(Heads), I actually mean it as shorthand for P(Heads | all the model assumptions needed for "Heads" to make sense as a concept or event). When I find a reason to think one of my unstated model assumptions is wrong, I'm changing which set of conditional probabilities I'm even thinking about.
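In symbols, the point is roughly this (with M standing for the whole bundle of unstated background assumptions):

$$P(\text{Heads}) = P(\text{Heads} \mid M)\,P(M) + P(\text{Heads} \mid \lnot M)\,P(\lnot M)$$

What I report as "P(Heads) = 0.5" is really P(Heads | M), with P(M) treated as close enough to 1 to ignore; noticing that some assumption in M is wrong doesn't so much update that number as swap out which conditional distribution I'm using.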
Over time, as I improve my world model and my understanding of it, I become better able to explicitly identify my underlying assumptions and, when I deem it worth the effort, to question them or question other things in light of them, getting a little closer to a more general underlying probability distribution.
answer by Richard_Kennaway

If you SEE a coin flip come up heads (and examine the coin and perform whatever tests you like), what's your posterior probability that the coin actually exists and it wasn't a false memory or trick in some way?
It differs from certainty by too little to make any practical difference to any decision I am going to make. Only when I see the extraordinary evidence required to support an extraordinary hypothesis will the possibility be raised to my attention.
3 comments
Comments sorted by top scores.
comment by JBlack · 2024-11-27T05:56:36.619Z · LW(p) · GW(p)
For all practical purposes, such credences don't matter. Such scenarios certainly can and do happen, but in almost all cases there's nothing you can do about them without exceeding your own bounded rationality and agency.
If the stakes are very high then it may make sense to consider the probability of some sort of trick, and attempt to get further evidence of the physical existence of the coin and that its current state matches what you are seeing.
There is essentially no point in assigning probabilities to hypotheses about failures of your mind itself. You can't reason your way out of a serious mind malfunction using arithmetic. At best you could hope to recognize that it is malfunctioning, and try not to do anything that makes things worse. With mental impairment severe enough to produce false memories or sensations this blatant, a rational person should expect that someone so affected wouldn't be capable of correctly carrying out quantified Bayesian reasoning.
My own background credences are generally not insignificant for something like this or even stranger, but they play essentially zero role in my life and definitely not in any probability calculations. Such hypotheses are essentially untestable and unactionable.
comment by Celarix · 2024-11-28T12:59:54.091Z · LW(p) · GW(p)
My opinion is that whatever value of epsilon you pick should be low enough that the corresponding event never happens even once in your life. "I flipped a coin but it doesn't actually exist" should never happen. Maybe it would happen if you lived for millions of years, but in a normal human lifespan, never once.
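As a rough back-of-the-envelope (the observation counts are assumed purely for illustration), "never once in a lifetime" puts a concrete ceiling on epsilon:

```python
# Back-of-the-envelope: how small must epsilon be so that an event with
# probability epsilon per observation is expected never to occur in a
# lifetime? (Counts below are assumed, purely for illustration.)

observations_per_day = 1_000  # assumed number of everyday perceptual judgments
years = 80
lifetime_observations = observations_per_day * 365 * years  # ~2.9e7

# Expected occurrences over a lifetime are epsilon * N; keeping that well
# below 1 requires epsilon well below 1 / N.
ceiling = 1 / lifetime_observations
print(f"N = {lifetime_observations:.2e} -> epsilon well below {ceiling:.1e}")
```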
comment by tailcalled · 2024-11-26T21:24:08.177Z · LW(p) · GW(p)
Bayesianism was a mistake.