Does Probability Theory Require Deductive or Merely Boolean Omniscience?

post by Ronny Fernandez (ronny-fernandez) · 2015-08-03T06:54:31.012Z · LW · GW · Legacy · 10 comments


It is often said that a Bayesian agent has to assign probability 1 to all tautologies, and probability 0 to all contradictions. My question is: exactly what sort of tautologies are we talking about here? Does that include all mathematical theorems? Does that include assigning 1 to "Every bachelor is an unmarried male"?1 Perhaps the only tautologies that need to be assigned probability 1 are the Boolean tautologies built from the atomic sentences that appear in the prior distribution, such as "S or ~S".

It seems that I do not need to assign probability 1 to Fermat's Last Theorem in order to use probability theory when I play poker, or try to predict the color of the next ball to come from an urn. I must assign a probability of 1 to "The next ball will be white or it will not be white", but Fermat's Last Theorem seems quite irrelevant. Perhaps that's because these specialized puzzles do not require sufficiently general probability distributions; perhaps, when I try to build a general Bayesian reasoner, it will turn out that it must assign 1 to Fermat's Last Theorem.

Imagine a (completely impractical, ideal, and esoteric) first-order language whose particular subjects are discrete point-like regions of space-time. There can be an arbitrarily large number of points, but it must be a finite number. This language also contains a long list of predicates like: is blue, is within the volume of a carbon atom, is within the volume of an elephant, etc., and generally any predicate type you'd like (including n-place predicates).2 The atomic propositions in this language might look something like: "5, 0.487, -7098.6, 6000s is Blue" or "(1, 1, 1, 1s), (-1, -1, -1, 1s) contains an elephant." The first of these propositions says that a certain point in space-time is blue; the second says that there is an elephant between two points at one second after the universe starts. Presumably, at least the denotational content of most English propositions could be expressed in such a language (mathematical claims aside, I think).

Now imagine that we collect all of the atomic propositions in this language and assign a joint distribution over them. Maybe we choose the maximum-entropy distribution; it doesn't matter. Would doing so really require us to assign 1 to every mathematical theorem? I can see why it would require us to assign 1 to every tautological Boolean combination of atomic propositions [for instance: "(1, 1, 1, 1s), (-1, -1, -1, 1s) contains an elephant OR ~((1, 1, 1, 1s), (-1, -1, -1, 1s) contains an elephant)"], but that would follow naturally as a consequence of filling out the joint distribution. Similarly, all the Boolean contradictions would be assigned zero, just as a consequence of filling out the joint distribution table with a set of reals that sum to 1.
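As a toy illustration of that last point (my own sketch, not part of the original argument; the two atomic propositions and the weights are arbitrary), a Boolean tautology over the atoms gets probability 1, and a contradiction gets 0, purely because the joint weights sum to 1:

```python
import itertools

# Two hypothetical atomic propositions, A and B, with an arbitrary joint
# distribution over the four truth assignments (weights sum to 1).
atoms = ["A", "B"]
worlds = list(itertools.product([True, False], repeat=len(atoms)))
weights = [0.1, 0.2, 0.3, 0.4]  # any non-negative reals summing to 1

def prob(event):
    """Probability of an event, given as a predicate on truth assignments."""
    return sum(w for world, w in zip(worlds, weights)
               if event(dict(zip(atoms, world))))

print(prob(lambda v: v["A"] or not v["A"]))   # tautology -> 1.0
print(prob(lambda v: v["A"] and not v["A"]))  # contradiction -> 0.0
```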

A similar argument could be made using intuitions from algorithmic probability theory. Imagine that we know that some data was produced by a distribution which is output by a program of length n in a binary programming language. We want to figure out which distribution it is. So, we assign each binary string of length n a prior probability of 2^-n. If the language allows for comments, then simpler distributions will be output by more programs, and we will add up the probabilities of all the programs that print a given distribution.3 Sure, we might need an oracle to figure out if a given program outputs anything at all, but we would not need to assign a probability of 1 to Fermat's Last Theorem (or at least I can't figure out why we would). The data might be all of your sensory inputs, and n might be Graham's number; still, there's no reason such a distribution would need to assign 1 to every mathematical theorem.
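Here is a rough sketch of that kind of prior (my own toy construction, not Solomonoff induction proper: the fixed program length, the `run` stand-in for an interpreter, and the "11" comment marker are all invented for illustration):

```python
from collections import defaultdict

n = 10  # every "program" is a bit string of length n, with prior 2**-n

def run(program: str) -> str:
    # Stand-in interpreter: everything after the first "11" is treated as an
    # ignored comment, so simpler outputs are printed by more programs.
    head, _sep, _comment = program.partition("11")
    return head

prior = defaultdict(float)
for i in range(2 ** n):
    program = format(i, f"0{n}b")
    prior[run(program)] += 2.0 ** (-n)  # add this program's mass to its output

assert abs(sum(prior.values()) - 1.0) < 1e-9  # still a probability distribution
for output, mass in sorted(prior.items(), key=lambda kv: -kv[1])[:5]:
    print(output or "<empty>", round(mass, 4))
```

Nothing in this bookkeeping asks for a probability on any arithmetical statement; it only redistributes mass over programs and their outputs.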

Conclusion

A Bayesian agent does not require mathematical omniscience, or logical omniscience (if that means anything more than Boolean omniscience), but merely Boolean omniscience. All that Boolean omniscience means is that for whatever atomic propositions appear in the language of the agent (i.e., the language that forms the set of propositions constituting the domain of the probability function), any tautological Boolean combination of those propositions must be assigned a probability of 1, and any contradictory Boolean combination of those propositions must be assigned 0. As far as I can tell, the whole notion that Bayesian agents must assign 1 to tautologies and 0 to contradictions comes from the fact that when you fill out a table of joint distributions (or follow the Kolmogorov axioms in some other way), all of the Boolean theorems get a probability of 1. This does not imply that you need to assign 1 to Fermat's Last Theorem, even if you are reasoning probabilistically in a language that is very expressive.4

Some Ways To Prove This Wrong:

Show that a really expressive semantic language, like the one I gave above, implies PA if you allow Boolean operations on its atomic propositions. Alternatively, show that Solomonoff induction can express PA theorems as propositions with probabilities, and that it assigns them 1. This is what I tried to do, but I failed both times, which is why I wrote this.


[1] There are also interesting questions about the role of tautologies that rely on synonymy in probability theory, and whether they must be assigned a probability of 1, but I decided to keep it to mathematics for the sake of this post. 

[2] I think this language is ridiculous, and openly admit it has next to no real world application. I stole the idea for the language from Carnap.

[3] This is a sloppily presented approximation to Solomonoff induction as n goes to infinity. 

[4] The argument above is not a mathematical proof, and I am not sure that it is airtight. I am posting this to the discussion board instead of a full-blown post because I want feedback and criticism. !!!HOWEVER!!! if I am right, it does seem that folks on here, at MIRI, and in the Bayesian world at large, should start being more careful when they think or write about logical omniscience. 

 

 

10 comments


comment by Wei Dai (Wei_Dai) · 2015-08-03T21:36:11.930Z · LW(p) · GW(p)

Sure, we might need an oracle to figure out if a given program outputs anything at all, but we would not need to assign a probability of 1 to Fermat's Last Theorem (or at least I can't figure out why we would).

Fermat's Last Theorem states that no three positive integers a, b, and c can satisfy the equation a^n + b^n = c^n for any integer value of n greater than two. Consider a program that iterates over all possible values of a, b, c, n looking for counterexamples for FLT, then if it finds one, calls a subroutine that eventually prints out X (where X is your current observation). In order to do Solomonoff induction, you need to query a halting oracle on this program. But knowing whether this program halts or not is equivalent to knowing whether FLT is true or false.
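A sketch of the kind of program being described (the function name and return convention are illustrative; in the comment's setup, finding a counterexample would trigger a subroutine that prints the observation X):

```python
from itertools import count

def flt_counterexample_search():
    """Search for a counterexample to Fermat's Last Theorem.

    Whether this program halts is exactly the question of whether FLT is
    false, so querying a halting oracle on it settles FLT.
    """
    for bound in count(3):
        for n in range(3, bound + 1):
            for a in range(1, bound + 1):
                for b in range(1, bound + 1):
                    for c in range(1, bound + 1):
                        if a ** n + b ** n == c ** n:
                            return (a, b, c, n)  # found: halt (then print X)
    # never reached: the search runs forever if FLT is true
```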

comment by Ronny Fernandez (ronny-fernandez) · 2015-08-03T23:50:44.466Z · LW(p) · GW(p)

Let's forget about the oracle. What about the program that outputs X if 1 + 1 = 2, and prints 0 otherwise? Let's call it A(1,1). The formalism requires that P(X|A(1,1)) = 1, and it requires that P(A(1,1)) = 2^-K(A(1,1)), but does it need to know that "1 + 1 = 2" is somehow proven by A(1,1) printing X?
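In Python-flavored pseudocode, that program might look like this (a trivial sketch; the name and the literal "0" are placeholders):

```python
def A_1_1(X: str) -> str:
    # Outputs the observation X if 1 + 1 = 2 holds, and "0" otherwise.
    return X if 1 + 1 == 2 else "0"
```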

In either case, you've shown me something that I explicitly doubted before: one can prove any provable theorem if one has access to a Solomonoff agent's distribution and knows how to make a program that prints X iff theorem S is provable. All one has to do is check the probability the agent assigns to X conditional on that program.

comment by Ronny Fernandez (ronny-fernandez) · 2015-08-03T22:53:04.679Z · LW(p) · GW(p)

Awesome. I'm pretty sure you're right; that's the most convincing counterexample I've come across.

I have a weak doubt, but I think you can get rid of it:

Let's name the program FLT().

I'm just not sure this means that the theorem itself is assigned a probability. Yes, I have an oracle, but it doesn't assign a probability to a program halting; it tells me whether it halts or not. What the Solomonoff formalism requires is that "if (halts(FLT()) == true) then P(X|FLT()) = 1" and "if (halts(FLT()) == false) then P(X|FLT()) = 0" and "P(FLT()) = 2^-K(FLT())". Where in all this is the probability of Fermat's Last Theorem? Having an oracle may imply knowing whether or not FLT is a theorem, but it does not imply that we must assign that theorem a probability of 1. (Or maybe it does and I'm not seeing it.)

Edit: Come to think of it... I'm not sure there's a relevant difference between knowing whether a program that outputs True iff theorem S is provable will end up halting, and assigning probability 1 to theorem S. It does seem that I must assign 1 to statements of the form "A or ~A" or else it won't work; whereas if the theorem S is not in the domain of our probability function, nothing seems to go wrong.

In either case, this probably isn't the standard reason for believing in, or thinking about, logical omniscience, because the concept of logical omniscience is probably older than Solomonoff induction. (I am of course only realizing that in hindsight, now that I've seen a powerful counterexample to my argument.)

comment by David_Bolin · 2015-08-03T08:48:15.369Z · LW(p) · GW(p)

Basically the problem is that a Bayesian should not be able to change its probabilities without new evidence, and if you assign a probability other than 1 to a mathematical truth, you will run into problems when you deduce that it follows of necessity from other things that have a probability of 1.

comment by KnaveOfAllTrades · 2015-08-08T22:23:29.364Z · LW(p) · GW(p)

Why can't the deduction be the evidence? If I start with a 50-50 prior that 4 is prime, I can then use the subsequent observation that I've found a factor to update downwards. This feels like it relies on the reasoner's embedding, though, so maybe it's cheating, but it's not clear to me why it shouldn't count.
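As a minimal numerical sketch of that update (my own toy numbers, not from the comment), the deduction "I found a nontrivial factor of 4" acts like ordinary evidence:

```python
prior_prime = 0.5
# Likelihood of observing "a nontrivial factor of 4 was found":
p_obs_given_prime = 0.0      # a prime has no nontrivial factor
p_obs_given_composite = 0.9  # assumed chance the search turns one up

evidence = (prior_prime * p_obs_given_prime
            + (1 - prior_prime) * p_obs_given_composite)
posterior_prime = prior_prime * p_obs_given_prime / evidence
print(posterior_prime)  # 0.0 -- updated all the way down
```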

comment by Ronny Fernandez (ronny-fernandez) · 2015-08-03T08:51:11.074Z · LW(p) · GW(p)

How do you express Fermat's Last Theorem, for instance, as a Boolean combination of atomic propositions in the language I gave, or as a Boolean combination of programs? Boolean algebra is not strong enough to derive, or even express, all of math.

Edit: Let's start simple. How do you express 1 + 1 = 2 in the language I gave, or as a Boolean combination of programs?

comment by David_Bolin · 2015-08-03T09:04:19.931Z · LW(p) · GW(p)

Probability that there are two elephants given one on the left and one on the right.

In any case, if your language can't express Fermat's last theorem then of course you don't assign a probability of 1 to it, not because you assign it a different probability, but because you don't assign it a probability at all.

comment by Ronny Fernandez (ronny-fernandez) · 2015-08-03T09:16:05.996Z · LW(p) · GW(p)

I agree. I am saying that we need not assign it a probability at all. Your solution assumes that there is a way to express "two" in the language. Also, the proposition you made is more like "one elephant and another elephant make two elephants", not "1 + 1 = 2".

I think we'd be better off trying to find a way to express 1 + 1 = 2 as a boolean function on programs.

comment by Lumifer · 2015-08-03T14:38:27.895Z · LW(p) · GW(p)

I think we'd be better off trying to find a way to express 1 + 1 = 2 as a boolean function on programs.

This goes into the "shit LW people say" collection :-)

comment by Ronny Fernandez (ronny-fernandez) · 2015-08-03T14:51:14.962Z · LW(p) · GW(p)

Upvoted for cracking me up.