Top-Down and Bottom-Up Logical Probabilities

post by Manfred · 2014-07-22T08:53:15.473Z


I.

I don't know very much model theory, and thus I don't fully understand Hutter et al.'s logical prior, detailed here, but nonetheless I can tell you that it uses a very top-down approach. About 60% of what I mean is that the prior is presented as a completed object with few moving parts, which fits the authors' mathematical tastes and the abstract properties they propose the function should have. And for another thing, it uses model theory - a dead giveaway.

There are plenty of reasons to take a top-down approach. Yes, Hutter et al.'s function isn't computable, but sometimes the properties you want require uncomputability. And it's easier to come up with something vaguely satisfactory if you don't have to have many moving parts. This can range from "the prior is defined as a thing that fulfills the properties I want" on the lawful good side of the spectrum, to "clearly the right answer is just the exponential of the negative complexity of the statement, duh".

Probably the best reason to use a top-down approach to logical uncertainty is so you can do math to it. When you have some elegant description of global properties, it's a lot easier to prove that your logical probability function has nice properties, or to use it in abstract proofs. Hence why model theory is a dead giveaway.

There's one other advantage to designing a logical prior from the top down, which is that you can insert useful stuff like a complexity penalty without worrying too much. After all, you're basically making it up as you go anyhow, so you don't have to worry about where it comes from like you would if you were going from the bottom up.

A bottom-up approach, by contrast, starts with an imagined agent with some state of information and asks what the right probabilities to assign are. Rather than pursuing mathematical elegance, you'll see a lot of comparisons to what humans do when reasoning through similar problems, and demands for computability from the outset.

For me, a big opportunity of the bottom-up approach is to use desiderata that look like principles of reasoning. This leads to more moving parts, but also outlaws some global properties that don't have very compelling reasons behind them.

 

II.

Before we get to the similarities, rather than the differences, we'll have to impose the condition of limited computational resources. A common playing field, as it were. It would probably serve just as well to extend bottom-up approaches to uncomputable heights, but I am the author here, and I happen to be biased towards the limited-resources case.

The part of top-down assignment using limited resources will be played by a skeletonized pastiche of Paul Christiano's recent report:

i. No matter what, with limited resources we can only assign probabilities to a limited pool of statements. Accordingly, step one is to use some process to choose the set S0 of statements (and their negations) to assign probabilities to.

ii. Then we use something like a weakened consistency condition (one that can be decided between pairs of sentences in polynomial time) to set constraints on the probability function over S0. For example, sentences that are identical except for a double-negation have to be given the same probability.

iii. Christiano constructs a description-length-based "pre-prior" function that is bigger for shorter sentences. There are lots of options for different pre-priors, and I think this is a pretty good one.

iv. Finally, assign a logical probability function over S0 that is as similar as possible to the pre-prior while fulfilling the consistency condition. Christiano measures similarity using cross-entropy between the two functions, so that the problem is one of minimizing cross-entropy subject to a finite list of constraints. (Even if the pre-prior decreases exponentially, this doesn't mean that complicated statements will have exponentially low logical probability, because of the condition from step two that P(a statement) + P(its negation) = 1 - in a state of ignorance, everything still gets probability 1/2. The pre-prior only kicks in when there are more options with different description lengths.)
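
To make the shape of steps ii-iv concrete, here's a minimal Python sketch. The tiny sentence set, the 2^-length pre-prior, and the use of a generic constrained optimizer are illustrative stand-ins of mine, not Christiano's actual construction, which is much more careful about what the constraints and the objective are.

```python
# Toy sketch only: a handful of sentences, a 2^-length pre-prior, and a generic
# constrained optimizer standing in for the report's actual machinery.
import numpy as np
from scipy.optimize import minimize

# S0: sentences paired with their negations; string length stands in for description length.
sentences = ["A", "~A", "B&C", "~(B&C)", "~~A"]
lengths = np.array([len(s) for s in sentences], dtype=float)

# Step iii: a pre-prior that is bigger for shorter sentences.
pre_prior = 2.0 ** (-lengths)
pre_prior /= pre_prior.sum()

def cross_entropy(p):
    # Cross-entropy of the candidate probability assignment relative to the pre-prior.
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(pre_prior * np.log(p))

# Step ii: weakened consistency constraints, each decidable between a pair of sentences:
# P(s) + P(~s) = 1 for negation pairs, and P(~~A) = P(A).
constraints = [
    {"type": "eq", "fun": lambda p: p[0] + p[1] - 1.0},  # A, ~A
    {"type": "eq", "fun": lambda p: p[2] + p[3] - 1.0},  # B&C, ~(B&C)
    {"type": "eq", "fun": lambda p: p[4] - p[0]},        # ~~A gets the same probability as A
]
bounds = [(0.0, 1.0)] * len(sentences)

# Step iv: stay as close to the pre-prior as possible while satisfying the constraints.
result = minimize(cross_entropy, x0=np.full(len(sentences), 0.5),
                  bounds=bounds, constraints=constraints)

for s, prob in zip(sentences, result.x):
    print(f"P({s}) = {prob:.3f}")
```

The point is just the division of labor: a pre-prior that likes short sentences, plus hard constraints coming from weak consistency checks.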

Next, let's look at the totally different world of a bottom-up assignment of logical probabilities, played here by a mildly rephrased version of my past proposal.

i. Pick a set of sentences S1 to try and figure out the logical probabilities of.

ii. Prove the truth or falsity of a bunch of statements in the closure of S1 under conjunction and negation (i.e. if sentences a and b are in S1, a&b is in the closure of S1).

iii. Assign a logical probability function over the closure of S1 under conjunction with maximum entropy, subject to the constraints proved in step two, plus the constraints that each conjunction of a sentence and its negation (a&~a) has probability 0.
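
Here's a toy sketch of what step iii could look like, under the simplifying assumption that we do the bookkeeping over truth assignments ("worlds") to two sentences a and b, with one fact proved in step ii. The world-based representation and the optimizer are my choices for illustration, not part of the proposal itself.

```python
# Toy sketch only: two sentences a, b from S1, tracked via the four truth assignments
# ("worlds"); suppose step ii proved ~(a & b), i.e. a and b can't both hold.
import numpy as np
from scipy.optimize import minimize

worlds = [(True, True), (True, False), (False, True), (False, False)]  # (a, b) values

def neg_entropy(p):
    # Minimizing negative entropy = maximizing entropy.
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))

constraints = [
    {"type": "eq", "fun": lambda p: p.sum() - 1.0},  # normalization
    {"type": "eq", "fun": lambda p: p[0]},           # proved: ~(a & b), so the (True, True) world gets 0
]
bounds = [(0.0, 1.0)] * len(worlds)

result = minimize(neg_entropy, x0=np.full(len(worlds), 0.25),
                  bounds=bounds, constraints=constraints)
p = result.x

# Read off logical probabilities of sentences in the closure under conjunction/negation.
# (Using worlds makes "each sentence & its negation has probability 0" automatic.)
print(f"P(a)     = {p[0] + p[1]:.3f}")  # roughly 1/3
print(f"P(b)     = {p[0] + p[2]:.3f}")  # roughly 1/3
print(f"P(a & b) = {p[0]:.3f}")         # roughly 0
```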

These turn out to be really similar! Look in step three of my bottom-up example - there's even a sneakily-inserted top-down condition about going through every single statement and checking an aspect of consistency. In the top-down approach, every theorem of a certain sort is proved, while in the bottom-up approach there are allowed to be lots of gaps - but the same sorts of theorems are proved. I've portrayed one as using proofs only about sentences in S0, and the other as using proofs in the entire closure of S1 under conjunction, but those are just points on an available continuum (for more discussion, see Christiano's section on positive semidefinite methods).

The biggest difference is this "pre-prior" thing. On the one hand, it's essential for giving us guarantees about inductive learning. On the other hand, what piece of information do we have that tells us that longer sentences really are less likely? I have unresolved reservations, despite the practical advantages.

 

III.

A minor confession - my choice of Christiano's report was not coincidental at all. The causal structure went like this:

Last week - Notice dramatic similarities in what gets proved and how it gets used between my bottom-up proposal and Christiano's top-down proposal.

Now - Write post talking about generalities of top-down and bottom-up approaches to logical probability, and then find as a startling conclusion the thing that motivated me to write the post in the first place.

The teeensy bit of selection bias here means that though these similarities are cool, it's hard to draw general conclusions.

So let's look at one more proposal, this one due to Abram Demski, modified by me to use limited resources.

i. Pick a set of sentences S2 to care about.

ii. Construct a function on sentences in S2 that is big for short sentences and small for long sentences.

iii. Start with the set of sentences that are axioms - we'll shortly add new sentences to the set.

iv. Draw a sentence from S2 with probability proportional to the function from step two.

v. Do a short consistency check (this can use a weakened consistency condition, or just limited time) between this sentence and the sentences already in the set. If the check passes, add the sentence to the set.

vi. Keep doing steps four and five until you've either added or ruled out all the sentences in S2.

vii. The logical probability of a sentence is defined as the probability that it ends up in our set after going through this process. We can find this probability using Monte Carlo by just running the process a bunch of times and counting up what portion of the time each sentence is in the set by the end.
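
Here's a rough Monte Carlo sketch of that loop on a toy S2 of three mutually exclusive, exhaustive sentences. The weight function, the hand-coded consistency check, and the skipped axiom-seeding of step three are all simplifications of mine.

```python
# Toy sketch only: S2 is three mutually exclusive, exhaustive sentences, the step-ii
# function is 2^-length, and the consistency check is hand-coded for this example.
import random
from collections import Counter

sentences = ["x=1", "x=22", "x=333"]             # mutually exclusive and exhaustive
weights = [2.0 ** (-len(s)) for s in sentences]  # step ii: bigger for shorter sentences

def consistent(candidate, accepted):
    # Short consistency check: in this toy, any two distinct sentences contradict each other.
    return all(other == candidate for other in accepted)

def one_run():
    accepted, ruled_out = set(), set()
    # (Step iii's axiom-seeding is skipped in this toy.)
    while len(accepted) + len(ruled_out) < len(sentences):
        # Step iv: draw a sentence with probability proportional to the weight function.
        s = random.choices(sentences, weights=weights)[0]
        if s in accepted or s in ruled_out:
            continue
        # Step v: check it against what we've already accepted.
        if consistent(s, accepted):
            accepted.add(s)
        else:
            ruled_out.add(s)
    return accepted

# Step vii: logical probability = fraction of runs in which the sentence ends up in the set.
runs = 10_000
counts = Counter(s for _ in range(runs) for s in one_run())
for s in sentences:
    print(f"P({s}) is about {counts[s] / runs:.3f}")
# Whichever sentence is drawn first wins, so the estimates track the normalized weights -
# the weight function is behaving like a pre-prior.
```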

Okay, so this one looks pretty different. But let's look for the similarities. The exact same kinds of things get proved again - weakened or scattershot consistency checks between different sentences. If all you have in S2 are three mutually exclusive and exhaustive sentences, the one that's picked first wins - meaning that the probability function over what sentence gets picked first is acting like our pre-prior.

So even though the method is completely different, what's really going on is that sentences are being given measure that looks like the pre-prior, subject to the constraints of weakened consistency (via rejection sampling) and normalization (keep repeating until all statements are checked).

In conclusion: not everything is like everything else, but some things are like some other things.

4 comments


comment by Lumifer · 2014-07-22T15:57:24.539Z

Badly in need of tl;dr

comment by Manfred · 2014-07-22T18:45:31.647Z

tl;dr Some methods of assigning logical probability seem to be convergent. That's neat.

comment by Skeptityke · 2014-07-22T18:29:55.927Z

I disagree strongly, but here is a prototype of one anyways.

There are top-down and bottom-up approaches to logical probabilities. Top-down approaches typically involve distributions selected to fit certain properties, and, while being elegant and easy to apply math to, are often quite uncomputable. Bottom-up approaches take an agent with some information, and ask what they should do to assign probabilities/find out more, leading to a more "hacky" probability distribution, but they also tend to be easier to compute. Interestingly enough, given limited computing resources, these two sorts of distributions have distinct similarities. They both involve a starting probability distribution modified by iterated consistency checks.

Did I get it mostly right?

comment by Manfred · 2014-07-22T23:24:32.107Z

They both involve a starting probability distribution modified by iterated consistency checks.

This part is a bit misleading because there's nothing special about having a starting distribution and updating it (though that's definitely a bottom-up trait). It's also okay to create the logical probability function all at once, or through Monte Carlo sampling, or other weird stuff we haven't thought up yet.