The Principle of Predicted Improvement

post by Ronny Fernandez (ronny-fernandez) · 2019-04-23T21:21:41.018Z · LW · GW · 17 comments

Contents

  Proof:
    Equality:
17 comments

I made a conjecture I think is cool. Mark Sellke proved it. I don't know what else to do with it, so I will explain why I think it's cool and give the proof here. Hopefully, you will think it's cool, too.

______________________________________________________________________

Suppose we are trying to assign as much probability as possible to whichever of several hypotheses is true. The law of conservation of expected evidence [LW · GW] tells us that for any hypothesis, we should expect to assign the same probability to that hypothesis after observing a test result that we assign to it now. Suppose that H takes values h_1, …, h_n, and that D is the outcome of some test. We can express the law of conservation of expected evidence as, for any fixed h_i:

E[P(H=h_i|D)] = P(H=h_i)

In English this says that the probability we should expect to assign to h_i after observing the value of D equals the probability we assign to h_i before we observe the value of D.
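Here is a minimal Python sketch (the joint distribution over H and D is a made-up toy example, not from the post) that checks this numerically: for each fixed h_i, averaging the posterior over D recovers the prior.

```python
import numpy as np

# Toy joint distribution P(H=h_i, D=d_j); rows are hypotheses, columns are data values.
joint = np.array([[0.20, 0.10],
                  [0.05, 0.25],
                  [0.15, 0.25]])
p_h = joint.sum(axis=1)          # prior P(H=h_i)
p_d = joint.sum(axis=0)          # P(D=d_j)
posterior = joint / p_d          # posterior[i, j] = P(H=h_i | D=d_j)

# Conservation of expected evidence: E_D[P(H=h_i | D)] = P(H=h_i) for every i.
expected_posterior = posterior @ p_d
print(expected_posterior)        # equals p_h
print(p_h)
assert np.allclose(expected_posterior, p_h)
```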

This law raises a question. If all I want is to assign as much probability to the true hypothesis as possible, and I should expect to assign the same probability I currently assign to each hypothesis after getting a new piece of data, why would I ever collect more data? A. J. Ayer pointed out this puzzle in The Conception of Probability as a Logical Relation (I unfortunately cannot find a link). I. J. Good solved Ayer's puzzle in On the Principle of Total Evidence. Good shows that if I need to act on a hypothesis, the expected value of gaining an extra piece of data is always greater than or equal to the expected value of not gaining that new piece of data. Although there is nothing wrong with Good's solution, I found it somewhat unsatisfying. Ayer's puzzle is purely epistemic, and while there is nothing wrong with a pragmatic solution to an epistemic puzzle, I still felt that there should be a solution that makes no reference to acts or utility at all.

Herein I present a theorem that I think constitutes such a solution. I have decided to call it the principle of predicted improvement (PPI):

E[P(H|D)] ≥ E[P(H)]

In English the theorem says that the probability we should expect to assign to the true value of H after observing the true value of D is greater than or equal to the expected probability we assign to the true value of H before observing the value of D. This inequality is strict when H and D are not independent. In other words, you should predict that your epistemic state will improve (e.g. you will assign more probability to the truth) after making any relevant observation.
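Here is a minimal Python sketch (same kind of made-up toy joint distribution as above) comparing the two sides; the inequality is strict here because H and D are dependent.

```python
import numpy as np

joint = np.array([[0.20, 0.10],   # toy P(H=h_i, D=d_j), not from the post
                  [0.05, 0.25],
                  [0.15, 0.25]])
p_h = joint.sum(axis=1)
p_d = joint.sum(axis=0)
posterior = joint / p_d                     # P(H=h_i | D=d_j)

lhs = (joint * posterior).sum()             # E[P(H|D)]: expectation over the joint
rhs = (p_h ** 2).sum()                      # E[P(H)] = sum_i P(h_i)^2
print(lhs, rhs)                             # 0.3875 >= 0.34, strictly here
assert lhs >= rhs
```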

This is a solution to Ayer's puzzle because it says that I should always expect to assign more probability to the true hypothesis after making a relevant observation. It is a purely epistemic solution because it makes no reference to acts or utility. So long as I want to assign more probability to the true hypothesis than I currently do, I should want to make relevant observations.

Importantly, this is completely consistent with the law of conservation of expected evidence. Although for any particular hypothesis I should expect to assign it the same probability after performing a test that I do now, I should also expect to assign more probability to whichever hypothesis is actually true.

Aside from being a solution to Ayer's puzzle, the PPI is cool just because it tells you that you should expect to assign more probability to the truth as you observe stuff.

______________________________________________________________________

There is a similar, better-known theorem from information theory that my friend Alex Davis showed me:

E_D[H(P(H|D))] ≤ H(P(H))

where H(p) here denotes the Shannon entropy of a distribution p.

In English this says that you should expect the entropy of your distribution to go down after you make an observation. If we use the law of iterated expectation, multiply both sides by minus one, and reverse the inequality, we get something that looks a lot like the PPI:

E[log P(H|D)] ≥ E[log P(H)]

It does not imply PPI in any obvious way because we can have two distributions such that one is higher in expected negentropy but lower in expected probability assigned to the truth, and vice versa. They are similar theorems in that one says you should predict that the probability assigned to the true outcome will be higher after an observation, while the other says you should predict that the log probability will be higher. They are different in that they use different measures of confidence.
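Here is a minimal Python sketch with two made-up distributions showing that the two orderings can disagree: p assigns more expected probability to the truth (larger Σ p_i²) than q, yet q has the higher negentropy (lower Shannon entropy).

```python
import numpy as np

def expected_prob_of_truth(p):
    # E[P(H)] when your distribution over hypotheses is p: sum_i p_i^2
    return float(np.sum(p ** 2))

def entropy_bits(p):
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

p = np.array([0.75] + [0.01] * 25)   # big head plus a long thin tail (made-up)
q = np.array([0.5, 0.5])

print(expected_prob_of_truth(p), expected_prob_of_truth(q))  # 0.565 > 0.5
print(entropy_bits(p), entropy_bits(q))                      # ~1.97 > 1.0
# p wins on expected probability assigned to the truth,
# q wins on negentropy -- so neither criterion implies the other.
```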

The advantage of the PPI is that it is phrased in the same terms as Ayer's puzzle: probabilities rather than log probabilities. I also claim that the PPI is easier to read and interpret, so it might be pedagogically useful to teach it before teaching that expected entropy after an observation is less than or equal to current entropy.

Anyway, here's Sellke's proof.

______________________________________________________________________

Proof:

We want to show

E[P(H|D)] ≥ E[P(H)].

Let's say that H takes values h_1, …, h_n and D takes values d_1, …, d_m. The left hand side is

E[P(H|D)] = Σ_{i,j} P(H=h_i, D=d_j)·P(H=h_i|D=d_j).

The right hand side is

E[P(H)] = Σ_i P(H=h_i)·P(H=h_i) = Σ_i P(H=h_i)².

Since P(H=h_i|D=d_j) = P(H=h_i, D=d_j)/P(D=d_j), the left hand side is equivalent to

Σ_{i,j} P(H=h_i, D=d_j)²/P(D=d_j).

Titu's lemma: for any sequences of a's and (positive) b's,

Σ_j a_j²/b_j ≥ (Σ_j a_j)²/(Σ_j b_j).

For each fixed i, set

a_j = P(H=h_i, D=d_j)

and

b_j = P(D=d_j).

If we apply the lemma to each i then we just get what we want. This is because

Σ_j a_j = Σ_j P(H=h_i, D=d_j) = P(H=h_i)

and

Σ_j b_j = Σ_j P(D=d_j) = 1.

To conclude, for each fixed i we have

Σ_j P(H=h_i, D=d_j)²/P(D=d_j) ≥ P(H=h_i)²,

and hence, summing over i,

E[P(H|D)] = Σ_i Σ_j P(H=h_i, D=d_j)²/P(D=d_j) ≥ Σ_i P(H=h_i)² = E[P(H)].
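Here is a minimal Python sketch of the key step (using a randomly generated toy joint distribution): the per-i Titu bound holds, and summing it over i recovers the PPI.

```python
import numpy as np

rng = np.random.default_rng(0)
joint = rng.random((4, 3))
joint /= joint.sum()                     # random toy joint P(H=h_i, D=d_j)
p_h = joint.sum(axis=1)
p_d = joint.sum(axis=0)

# Per-i Titu step: sum_j P(h_i,d_j)^2 / P(d_j) >= (sum_j P(h_i,d_j))^2 / sum_j P(d_j) = P(h_i)^2
per_i_lhs = (joint ** 2 / p_d).sum(axis=1)
assert np.all(per_i_lhs >= p_h ** 2 - 1e-12)

# Summing over i recovers E[P(H|D)] >= E[P(H)].
assert per_i_lhs.sum() >= (p_h ** 2).sum()
print(per_i_lhs.sum(), (p_h ** 2).sum())
```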

Equality:

Here we explain why the only equality case is when H and D are independent.

Titu's Lemma is an equality iff the two vectors (a_1, …, a_m) and (b_1, …, b_m) are parallel, that is, iff there exists a constant c such that a_j = c·b_j for all j. If we translate this equality condition over to our application of Titu's Lemma above, we see that our proof preserves equality if and only if there exist constants c_i such that P(H=h_i, D=d_j) = c_i·P(D=d_j) for all i and j. (We applied Titu once for each value of i, so we need a value c_i for each inequality to be an equality. But these c_i's can be different.)

Now if we sum over j we get

P(H=h_i) = Σ_j P(H=h_i, D=d_j) = c_i·Σ_j P(D=d_j) = c_i,

and so

c_i = P(H=h_i).

Plugging this back in, we see that equality holds iff

P(H=h_i, D=d_j) = P(H=h_i)·P(D=d_j) for all i and j,

which is equivalent to independence of H and D, i.e. zero mutual information.
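Here is a minimal Python sketch of the equality case (with a made-up prior and marginal): when the joint factorizes as P(H=h_i)·P(D=d_j), the two sides of the PPI coincide.

```python
import numpy as np

p_h = np.array([0.2, 0.3, 0.5])
p_d = np.array([0.4, 0.6])
joint = np.outer(p_h, p_d)          # independent H and D by construction

posterior = joint / p_d             # P(H=h_i | D=d_j), equal to p_h in every column
lhs = (joint * posterior).sum()     # E[P(H|D)]
rhs = (p_h ** 2).sum()              # E[P(H)]
print(lhs, rhs)                     # equal (up to floating point)
assert np.isclose(lhs, rhs)
```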

17 comments

Comments sorted by top scores.

comment by ESRogs · 2019-04-25T02:35:19.607Z · LW(p) · GW(p)
E[P(H|D)]≥E[P(H)]
In English the theorem says that the probability we should expect to assign to the true value of H after observing the true value of D is greater than or equal to the expected probability we assign to the true value of H before observing the value of D.

I have a very basic question about notation -- what tells me that H in the equation refers to the true hypothesis?

Put another way, I don't really understand why that equation has a different interpretation than the conservation-of-expected-evidence equation: E[P(H=h_i|D)] = P(H=h_i).

In both cases I would interpret it as talking about the expected probability of some hypothesis, given some evidence, compared to the prior probability of that hypothesis.

Replies from: adrusi, DanielFilan, habryka4, ronny-fernandez
comment by adrusi · 2019-04-25T06:40:11.444Z · LW(p) · GW(p)

I also had trouble with the notation. Here's how I've come to understand it:

Suppose I want to know whether the first person to drive a car was wearing shoes, just socks, or no footwear at all when they did so. I don't know what the truth is, so I represent it with a random variable H, which could be any of "the driver wore shoes," "the driver wore socks," or "the driver was barefoot."

This means that P(H) is a random variable equal to the probability I assign to the true hypothesis (it's random because I don't know which hypothesis is true). It's distinct from P(H = "the driver wore shoes") and P(the driver wore shoes), which are both the same constant, non-random value, namely the credence I have in the specific hypothesis (i.e. "the driver wore shoes").

(P(H = "the driver wore shoes") is roughly "the credence I have that 'the driver wore shoes' is true," while P(the driver wore shoes) is "the credence I have that the driver wore shoes," so they're equal, and semantically equivalent if you're a deflationist about truth)

Now suppose I find the driver's great-great-granddaughter on Discord, and I ask her what she thinks her great-great-grandfather wore on his feet when he drove the car for the first time. I don't know what her response will be, so I denote it with the random variable D. Then P(H|D) is the credence I assign to the correct hypothesis after I hear whatever she has to say.

So E[P(H = "the driver wore shoes" | D)] = P(H = "the driver wore shoes") means "I shouldn't expect my credence in 'the driver wore shoes' to change after I hear the great-great-granddaughter's response," while E[P(H|D)] ≥ E[P(H)] means "I should expect my credence in whatever is the correct hypothesis about the driver's footwear to increase when I get the great-great-granddaughter's response."

I think there are two sources of confusion here. First, H was not explicitly defined as "the true hypothesis" in the article. I had to infer that from the English translation of the inequality,

In English the theorem says that the probability we should expect to assign to the true value of H after observing the true value of D is greater than or equal to the expected probability we assign to the true value of H before observing the value of D,

and confirm with the author in private. Second, I remember seeing my probability theory professor use sloppy shorthand, and I initially interpreted E[P(H|D)] as a sloppy shorthand for E[P(H=h_i|D)]. Neither of these would have been a problem if I were more familiar with this area of study, but many people are less familiar than I am.

comment by DanielFilan · 2019-04-25T05:21:05.995Z · LW(p) · GW(p)

I have a very basic question about notation -- what tells me that H in the equation refers to the true hypothesis?

H stands for hypothesis. We're taking expectations over our distribution over hypotheses: that is, expectations over which hypothesis is true.

Put another way, I don't really understand why that equation has a different interpretation than the conservation-of-expected-evidence equation: E[P(H=h_i|D)] = P(H=h_i).

In the PPI inequality, the expectations are being taken over H and D jointly; in the CEE equation, the expectation is just being taken over D.

Replies from: DanielFilan
comment by DanielFilan · 2019-04-25T05:28:36.913Z · LW(p) · GW(p)

I should note that when I first saw the PPI inequality, I also didn't get what it was saying, just because I had very low prior probability mass on it saying the thing it actually says. (I can't quite pin down what generalisation or principle led to this situation, but there you go.)

comment by habryka (habryka4) · 2019-04-25T02:41:10.303Z · LW(p) · GW(p)

Yeah, I have intuitively the same interpretation.

My model is also that there is indeed lots of competing notational syntax in probability theory, and that some people would tell you that the current notation being used is invalid, or stands for something weird and meaningless. So I do think explaining the notation and the choice of notation in detail here is a good idea.

comment by Ronny Fernandez (ronny-fernandez) · 2019-04-25T06:57:21.388Z · LW(p) · GW(p)

I honestly could not think of a better way to write it. I had the same problem when my friend first showed me this notation. I thought about using but that seemed more confusing and less standard? I believe this is how they write things in information theory, but those equations usually have logs in them.

Replies from: DanielFilan, habryka4
comment by DanielFilan · 2019-04-25T18:36:01.140Z · LW(p) · GW(p)

Just to add an additional voice here, I would view that as incorrect in this context, instead referring to the thing that the CEE is saying. The way I'd try to clarify this would be to put the variables varying in the expectation in subscripts after the E, so the CEE equation would look like E_D[P(H=h_i|D)] = P(H=h_i), and the PPI inequality would be E_{H,D}[P(H|D)] ≥ E_H[P(H)].

comment by habryka (habryka4) · 2019-04-25T17:26:57.778Z · LW(p) · GW(p)

Yeah, this is the one that I would have used.

comment by Mark Sellke (mark-sellke) · 2019-04-23T22:19:17.733Z · LW(p) · GW(p)

This was fun!

A related fact: suppose you have a simple random walk (let's say integer-valued for simplicity; this all works with Brownian motion too) conditioned to reach (say) 100 before reaching 0. Then (at least before it has reached 100), from state n it has an (n+1)/(2n) chance to move up to n+1, instead of a 1/2 chance for the unconditioned walk. The proof is another helping of Bayes' Rule.

This model applies pretty directly if you think of a probability as a martingale in [0,1], and the conditioning as being secretly told the truth. So in this example you can explicitly quantify the drift toward the truth.
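Here is a minimal sketch of that Bayes computation in Python, using the standard gambler's-ruin fact that a symmetric walk started at n hits 100 before 0 with probability n/100.

```python
# Exact Bayes check of the conditioned step probability for a simple random walk
# that must hit 100 before 0 (gambler's ruin: P(hit 100 first | at n) = n / 100).
N = 100

def p_reach(n, top=N):
    return n / top

for n in [1, 10, 50, 99]:
    # P(up | reach top) = P(up) * P(reach | at n+1) / P(reach | at n)
    p_up_given_reach = 0.5 * p_reach(n + 1) / p_reach(n)
    formula = (n + 1) / (2 * n)
    print(n, p_up_given_reach, formula)
    assert abs(p_up_given_reach - formula) < 1e-12
```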

comment by Bunthut · 2019-04-24T14:32:46.572Z · LW(p) · GW(p)

Does your principle follow from Good's? It would seem that it does. Perhaps a good way to generalise the idea would be that the EV linearly aggregates the distribution and isn't expected to change, but other aggregations like log get on average closer to their value at hypothetical certainty. For example, the variance of a real parameter is expected to go down.

Replies from: ronny-fernandez
comment by Ronny Fernandez (ronny-fernandez) · 2019-04-24T15:42:09.056Z · LW(p) · GW(p)

I didn't take the time to check whether it did or didn't. If you would walk me through how it does, I would appreciate it.

Replies from: Bunthut
comment by Bunthut · 2019-04-25T11:16:46.496Z · LW(p) · GW(p)

Good shows that for every utility function and every situation, the EV of utility increases or stays the same when you gain information.

If we can construct a utility function whose utility EV always equals the EV of probability assigned to the correct hypothesis, we could transfer the conclusion. That was my idea when I made the comment.

Here is that utility function: first, the agent mentally assigns a positive real number x_i to every hypothesis h_i, such that Σ_i x_i = 1. It prefers any world where it does this to any where it doesn't. Its utility function is:

U = 2·x_t − Σ_i x_i², where h_t is the true hypothesis.

This is the quadratic scoring rule, so expected utility is maximized by setting x_i = P(h_i). Then its expected utility is:

E[U] = Σ_t P(h_t)·(2·P(h_t) − Σ_i P(h_i)²)

Simplifying:

E[U] = 2·Σ_i P(h_i)² − (Σ_t P(h_t))·Σ_i P(h_i)²

And since Σ_i P(h_i) = 1, this is:

E[U] = Σ_i P(h_i)²

Which is just E[P(H)].
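Here is a minimal Python sketch of this calculation with a made-up prior: reporting x_i = P(h_i), the expected quadratic score comes out to Σ_i P(h_i)², i.e. E[P(H)].

```python
import numpy as np

p = np.array([0.2, 0.3, 0.5])            # toy prior over hypotheses

def quadratic_score(x, true_idx):
    # U = 2 * x_t - sum_i x_i^2, where h_t is the true hypothesis
    return 2 * x[true_idx] - np.sum(x ** 2)

# Report x = p (optimal for a proper scoring rule) and average over which
# hypothesis turns out to be true.
expected_utility = sum(p[t] * quadratic_score(p, t) for t in range(len(p)))
print(expected_utility, np.sum(p ** 2))  # both 0.38 = E[P(H)]
assert np.isclose(expected_utility, np.sum(p ** 2))
```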

Replies from: ronny-fernandez
comment by Ronny Fernandez (ronny-fernandez) · 2019-04-25T18:37:22.683Z · LW(p) · GW(p)

I see. I think you could also use PPI to prove Good's theorem though. Presumably the reason it pays to get new evidence is that you should expect to assign more probability to the truth after observing new evidence?

comment by ryan_b · 2019-04-24T21:00:16.804Z · LW(p) · GW(p)

I think this is very well done. The explanation is sufficiently clear that even I, the non-formal-math person, can follow the logic.

comment by Samuel Hapák (hleumas) · 2019-04-29T22:39:21.783Z · LW(p) · GW(p)

There is actually a much easier and more intuitive proof.

For simplicity, let's assume H takes only two values, T (true) and F (false).

Now, let's assume that God knows that H = T, but the observer (me) doesn't know it. If I now make a measurement of some dependent variable D with value d_i, I'll either:

1. Update my probability of T upwards if d_i is more probable under T than in general.

2. Update my probability of T downwards if d_i is less probable under T than in general.

3. Not change my probability of T at all if d_i is as probable under T as in general.

("In general" here means without knowledge of whether T or F happened, i.e. assuming the prior probabilities of the observer.)

The law of conservation of expected evidence [LW · GW] tells us that in general (assuming prior probabilities), the expected change in assigned probability for T is 0. However, if H = T, then those events that update the probability of T upwards are more likely under T than in general, and those which update the probability of T downwards are less likely. Thus the expected change in assigned probability for T is > 0 if T is true.

QED
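Here is a minimal Python sketch of this argument with made-up likelihoods: the expected change in P(T) is zero under the prior, but strictly positive once we condition on T being true.

```python
import numpy as np

prior_T = 0.3
# Made-up likelihoods P(d_i | T) and P(d_i | F) over three possible observations.
p_d_given_T = np.array([0.7, 0.2, 0.1])
p_d_given_F = np.array([0.2, 0.3, 0.5])

p_d = prior_T * p_d_given_T + (1 - prior_T) * p_d_given_F   # prior predictive
posterior_T = prior_T * p_d_given_T / p_d                   # P(T | d_i)

change_given_T = np.sum(p_d_given_T * (posterior_T - prior_T))  # E[change | T true]
change_overall = np.sum(p_d * (posterior_T - prior_T))          # E[change] under the prior
print(change_given_T, change_overall)   # positive, ~0
assert change_given_T > 0
assert abs(change_overall) < 1e-12
```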

Replies from: ronny-fernandez
comment by Ronny Fernandez (ronny-fernandez) · 2019-05-02T19:59:18.717Z · LW(p) · GW(p)

I had already proved it for two values of H before I contacted Sellke. How easily does this proof generalize to multiple values of H?

Replies from: hleumas
comment by Samuel Hapák (hleumas) · 2019-05-03T20:01:12.784Z · LW(p) · GW(p)

Very simple. To prove it for an arbitrary number of values, you just need to prove that h_i being true increases its expected "probability to be assigned" after measurement, for each i.

If you define T as h_i and F as NOT h_i, you have reduced the problem to the two-value version.