What should a Bayesian do given probability of proving X vs. of disproving X?
post by PhilGoetz · 2014-06-07T18:40:38.419Z · LW · GW · Legacy · 19 comments
Consider some disputed proposition X. Suppose there appeared to be a limited number of ways of proving and of disproving X. No one has yet constructed a proof or disproof, but you have a feeling for how likely it is that someone will.
For instance, take Fermat's Last Theorem or the 4-color problem. For each of them, at one point in time, there was no proof, but people had some sense of the prior probability of observing the lack of a counterexample given the space searched so far. They could use that to assign a probability of there being a counterexample (and hence a disproof) [1]. Later, there was an alleged proof, and people could estimate the probability that the proof was correct based on the reputation of the prover and the approach used. At that point, people could assign values to both P(will_be_proven(X)) and P(will_be_disproven(X)).
Is it reasonable to assign P(X) = P(will_be_proven(X)) / (P(will_be_proven(X)) + P(will_be_disproven(X))) ?
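In code, the proposal looks like this (a toy sketch; the numbers are made up for illustration, not estimates for any real conjecture):

```python
# Hypothetical estimates, for illustration only:
p_proven = 0.60     # P(will_be_proven(X)): someone eventually proves X
p_disproven = 0.15  # P(will_be_disproven(X)): someone eventually disproves X

# The proposed assignment, which ignores the case where X is never resolved:
p_x = p_proven / (p_proven + p_disproven)
print(f"P(X) = {p_x:.2f}")  # 0.80
```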
If so, consider X = "free will exists". One could argue that the term "free will" is defined such that it is impossible to detect it, or to prove that it exists. But if one could prove that the many worlds interpretation of quantum mechanics is correct, that would constitute a disproof of X. Then P(will_be_proven(X)) / (P(will_be_proven(X)) + P(will_be_disproven(X))) = 0.
Is it possible for this to happen when you know that X is not undecidable? If so, what do you do then?
1. The computation is not as simple as it might appear, because you need to adjust for the selection effect of mathematicians being interested only in conjectures with no counterexamples.
19 comments
Comments sorted by top scores.
comment by NoSuchPlace · 2014-06-07T22:18:03.984Z · LW(p) · GW(p)
Is it reasonable to assign P(X) = P(will_be_proven(X)) / (P(will_be_proven(X)) + P(will_be_disproven(X))) ?
No, I don't think so. Consider the following example:
I flip a coin. If it comes up heads I take two green marbles, else I take one red and one green marble. Then I offer to let you see a random marble and I destroy the other one without showing you.
Then, suppose you wish to test whether my coin came up tails. If the marble is red, you have proven that the coin came up tails, and the chance of tails being disproven is zero, so your expression is 1, but it should be 0.5.
↑ comment by PhilGoetz · 2014-06-11T01:19:46.409Z · LW(p) · GW(p)
Yes, good answer, with a minor correction: in this case P(coin came up tails) is in fact 1, not 0.5. The problem is that before I look at a marble, looking could prove that the coin came up tails, but could never prove that it came up heads. And yet, the probability I should assign to the proposition that it came up heads is 0.5.
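A quick simulation of the game (a sketch written for this thread; names are mine) makes those numbers concrete:

```python
import random

def trial():
    """One round: flip a coin, fill the bag, reveal one marble at random."""
    tails = random.random() < 0.5
    marbles = ["red", "green"] if tails else ["green", "green"]
    return tails, random.choice(marbles)

N = 100_000
runs = [trial() for _ in range(N)]

# Seeing the red marble is the only possible proof of tails; heads can
# never be proven, since a green marble is consistent with both outcomes.
p_proven = sum(shown == "red" for _, shown in runs) / N  # ~0.25
p_disproven = 0.0

print(p_proven / (p_proven + p_disproven))  # ratio formula: 1.0
print(sum(tails for tails, _ in runs) / N)  # actual P(tails): ~0.5
```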
comment by Luke_A_Somers · 2014-06-10T15:09:39.847Z · LW(p) · GW(p)
If so, consider X = "free will exists". One could argue that the term "free will" is defined such that it is impossible to detect it, or to prove that it exists. But if one could prove that the many worlds interpretation of quantum mechanics is correct, that would constitute a disproof of X. Then P(will_be_proven(X)) / (P(will_be_proven(X)) + P(will_be_disproven(X))) = 0.
This has to be one of the worst examples I've ever seen that wasn't actively trying to push an agenda.
comment by Shmi (shminux) · 2014-06-07T19:31:57.256Z · LW(p) · GW(p)
Not a great example. "But if one could prove that the many worlds interpretation of quantum mechanics is correct, that would constitute a disproof of X." -- where X is free will? Probably not by many definitions of it.
In general, I don't see how this question is different from "what a Bayesian should do" with any other probabilistic statement.
↑ comment by pragmatist · 2014-06-08T05:20:58.577Z · LW(p) · GW(p)
One of the ways in which it is different is that it is dealing with logical uncertainty, which has not yet received a conclusive formulation, as far as I'm aware.
ETA: Actually, I misread the OP. It talks about whether X will be proven, not whether it can be proven. So it's not logical uncertainty. Retracting.
comment by DanArmak · 2014-06-07T20:25:44.790Z · LW(p) · GW(p)
Is it reasonable to assign P(X) = P(will_be_proven(X)) / (P(will_be_proven(X)) + P(will_be_disproven(X))) ?
It's also possible that X will never be either proven or disproven.
For a proposed proof, now that proofs can be checked by machine, it does make sense to expect it to eventually be validated or invalidated.
But if one could prove that the many worlds interpretation of quantum mechanics is correct, that would constitute a disproof of X.
I'm not sure I follow. But then, I don't know of any definition of free will that isn't self-contradictory, so it's not surprising that I don't understand what would convince those who seriously consider free will.
comment by Manfred · 2014-06-08T20:20:58.856Z · LW(p) · GW(p)
In general, the assumption you're asking about is whether will_be_proven(X) and will_be_disproven(X) are mutually exclusive and exhaustive. If the objects you're calling "proof" and "disproof" can ever coexist, the mutual exclusivity part breaks. If there's any option other than a proof existing or a disproof existing, then the exhaustiveness part breaks down.
These assumptions are omnipresent because they're incredibly useful. But they're really just quantitative common sense. If using them feels like a "trick," that's evidence there's something fishy.
↑ comment by PhilGoetz · 2014-06-11T01:12:54.495Z · LW(p) · GW(p)
They're not necessarily exhaustive. Part of my question was how your answer changes in the cases where you know those possibilities are exhaustive (because the question is decidable).
↑ comment by Manfred · 2014-06-11T02:27:01.997Z · LW(p) · GW(p)
If they are mutually exclusive but not exhaustive, and you have no other information besides P(X|proven) = 1 and P(X|disproven) = 0 (usually not true, but useful for toy cases), then we have P(X) = P(proven) + 1/2 * (1 - P(proven) - P(disproven)).
This is only equivalent to P(X) = P(proven) / (P(proven) + P(disproven)) if P(proven)+P(disproven)=1.
Basically what's going on is that if you can neither prove nor disprove X, and you don't have any information about what to think when there's neither proof nor disproof, your estimate just goes to 1/2.
Appendix: The equations we want in this case come from P(X)= P(X|A)*P(A)+P(X|B)*P(B)+P(X|C)*P(C), for A, B, C mutually exclusive and exhaustive. So for example if "proven" and "disproven" are mutually exclusive but not exhaustive, then we can add some extra category "neither." We take as starting information that P(X|proven)=1 and P(X|disproven)=0, and in the lack of other information P(X|neither)=1/2. So P(X)=P(X|proven)*P(proven)+P(X|disproven)*P(disproven)+P(X|neither)*P(neither).
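Here is that appendix as a small sketch (Python; function names are mine, and the marble game from the top comment serves as a check):

```python
def p_x_total(p_proven, p_disproven, p_x_given_neither=0.5):
    """P(X) by total probability over {proven, disproven, neither},
    assuming P(X|proven) = 1 and P(X|disproven) = 0."""
    p_neither = 1.0 - p_proven - p_disproven
    return p_proven + p_x_given_neither * p_neither

def p_x_ratio(p_proven, p_disproven):
    """The ratio formula from the post."""
    return p_proven / (p_proven + p_disproven)

# When proof and disproof are exhaustive, the two agree:
print(p_x_total(0.7, 0.3), p_x_ratio(0.7, 0.3))    # 0.7 0.7
# In the marble game (P(proven)=0.25, P(disproven)=0) they diverge:
print(p_x_total(0.25, 0.0), p_x_ratio(0.25, 0.0))  # 0.625 1.0
# And the 1/2 default is itself only a guess: the game's actual
# P(tails | green marble shown) is 1/3, which recovers the right answer:
print(p_x_total(0.25, 0.0, p_x_given_neither=1/3)) # 0.5
```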
comment by Decius · 2014-06-09T04:17:41.270Z · LW(p) · GW(p)
Is it reasonable to assign P(X) = P(will_be_proven(X)) / (P(will_be_proven(X)) + P(will_be_disproven(X)))
No. For example, there exist statements that are unprovable but true. Assigning them a numerator of 0 (or epsilon), and hence a probability near 0, is wrong.
comment by geniuslevel20 · 2014-06-08T20:34:50.788Z · LW(p) · GW(p)
Commenters misunderstand your problem and your argument for its solution. I take your problem to be "What could the probability of a mathematical proposition be besides its comparative likelihood of proof or disproof?"
Perhaps the answer is that there are reasons besides proof to believe even a mathematical proposition. Empirical reasons, that is.
↑ comment by lmm · 2014-06-09T18:23:22.412Z · LW(p) · GW(p)
If we know that for some particular class of propositions, some number were proven false quite easily, and some number were proven true but only after hundreds of years and great struggle, and the rest have remained unproven, it seems intuitively reasonable to suspect that more of the unproven ones are true than a naive ratio would suggest. I don't know how to make this rigorous though.
↑ comment by Richard_Kennaway · 2014-06-10T10:22:24.854Z · LW(p) · GW(p)
I take your problem to be "What could the probability of a mathematical proposition be besides its comparative likelihood of proof or disproof?"
Except that the example he gives is not a mathematical one, but the question of free will. I have to wonder if the question of free will is not the real subject of the post.
↑ comment by PhilGoetz · 2014-06-11T01:11:06.277Z · LW(p) · GW(p)
No; free will was the question that got me thinking about it. It seems to me at first glance that the existence of free will could be disproven, but couldn't be proven. Should that impact my belief in it?
(Short answer: No, because the payoff matrix for believing and not believing in free will is peculiar. But that peculiar case doesn't affect the question I'm asking here. The free will question itself is not interesting to me.)
comment by ChristianKl · 2014-06-08T07:42:17.599Z · LW(p) · GW(p)
Doing means acting. There's nothing to be done about the 4-color problem, as it's mostly about information. To do anything, you have to use a utility function that weighs possible outcomes; then you can decide to do A or do B.
comment by DanielLC · 2014-06-07T19:25:33.285Z · LW(p) · GW(p)
Is it reasonable to assign P(X) = P(will be proven(X)) / (P(will be proven(X)) + P(will be disproven(X))) ?
No. Consider Goldbach's conjecture: any even number above four is the sum of two odd primes. If it's false, it can be disproved via counterexample. If it's true, the proof must be much more sophisticated. At this point we might be able to say that if it were feasible to disprove it via counterexample, that would already have happened; but that certainly wasn't always the case.
Edit: Fixed Goldbach's conjecture.
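For concreteness, here is the kind of brute-force counterexample search this refers to (a sketch; function names are mine, and real verification efforts use far cleverer methods):

```python
def is_prime(k):
    return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))

def goldbach_counterexample(limit):
    """Return the first even number in (4, limit) that is not a sum of
    two odd primes, or None if no such number exists below limit."""
    for n in range(6, limit, 2):
        if not any(is_prime(p) and is_prime(n - p)
                   for p in range(3, n // 2 + 1, 2)):
            return n
    return None

print(goldbach_counterexample(10_000))  # None: no counterexample this small
```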
↑ comment by Stuart_Armstrong · 2014-06-10T15:42:16.411Z · LW(p) · GW(p)
Fixed Goldbach's conjecture.
Now that would be an impressive feat...
↑ comment by Falacer · 2014-06-07T23:00:22.873Z · LW(p) · GW(p)
Goldbach's conjecture is "Every even integer above four is the sum of two primes," surely?
Also, Gödel's first incompleteness theorem implies that there are statements which are true but unprovable, so you get something like:
P(X) = (P(will be proven(X)) + P(is true but unprovable(X))) / (P(will be proven(X)) + P(will be disproven(X)) + P(is true but unprovable(X)))
Is there a reason to suspect that a counterexample wouldn't be a very large number that hasn't been considered yet? Consider sublime numbers: the first (12) is a number which will be checked by any search process, but there is another which has 76 digits and, I would suspect, could be missed by some searches.
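The sublime-number claim is easy to verify by brute force for the first case (a quick sketch; a sublime number is one whose divisor count and divisor sum are both perfect):

```python
def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def is_perfect(n):
    # A perfect number equals the sum of its proper divisors.
    return sum(divisors(n)) == 2 * n

def is_sublime(n):
    # Sublime: both the number of divisors and their sum are perfect.
    ds = divisors(n)
    return is_perfect(len(ds)) and is_perfect(sum(ds))

print([n for n in range(1, 100) if is_sublime(n)])  # [12]
# The second sublime number has 76 digits; no search like this will reach it.
```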
↑ comment by DanielLC · 2014-06-08T00:04:14.726Z · LW(p) · GW(p)
Goldbach's conjecture is ...
Fixed. Thanks.
P(X) = (P(will be proven(X)) + P(is true but unprovable(X))) / (P(will be proven(X)) + P(will be disproven(X)) + P(is true but unprovable(X)))
It still only works on assumptions that might generally make good approximations, but aren't necessarily true.
More to the point, Phil was suggesting that something is false on the basis that it would be difficult to prove if true, but easy to prove if false. In other words, he picked that example precisely because it is one where the implicit assumption in his approximation fails badly: the assumption that the ratio of P(will be proven(X)) to P(will be disproven(X)) is about the same as the ratio of P(is true but will not be proven(X)) to P(is false but will not be disproven(X)).
Is there a reason to suspect that a counterexample wouldn't be a very large number that hasn't been considered yet?
Suppose they'll check every number up to n, that we're using a logarithmic prior as to where the first counterexample is, and that we'll eventually find it. Under that prior, the counterexample is as likely to lie between 1 and sqrt(n) as between sqrt(n) and n. This means that if m, the count of numbers we've already checked, is more than sqrt(n), we've probably already found it. Since m is pretty big, we're probably not going to manage to check as many numbers as we've already checked, m times over.
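The log-prior arithmetic, as a quick sketch (using the roughly 4*10^18 verified bound mentioned below; function names are mine):

```python
import math

def p_in_range(lo, hi, n):
    """Log-uniform prior on [1, n] for the first counterexample's location:
    P(it lies in [lo, hi]) = (ln hi - ln lo) / ln n."""
    return (math.log(hi) - math.log(lo)) / math.log(n)

m = 4e18    # roughly how far Goldbach has been verified
n = m ** 2  # horizon at which the two halves are equally likely
print(p_in_range(1, m, n))  # 0.5: mass in the range already searched
print(p_in_range(m, n, n))  # 0.5: mass in everything from m up to m squared
```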
Looking into it more, people have been using cleverer methods to verify the conjecture up to certain bounds, and we know it's true up to about 4*10^18. It would be impossible even to check every number up to that size without some kind of clever method, let alone to check enough beyond it to find the counterexample.