Unknown Probabilities

post by transhumanist_atom_understander · 2023-11-27T02:30:07.335Z · LW · GW · 1 comments

Contents

  Flipping a Possibly Trick Coin
  What You Would Believe
1 comment

An unknown probability sounds like a type error. There are unknowns, such as the result of a coin flip. And there are probabilities that these unknowns take certain values, such as the probability that the flip comes up heads.

As a formula: $P(\mathrm{Result} = \text{Heads}) = 1/2$. The unknown, the result of the flip, is inside the probability operator. The probability, 1/2, is outside. They're not the same kind of thing.

But suppose we have, for every possible value $q$ of the unknown $Q$,

$$P(H \mid (Q = q) \wedge X) = q,$$

where $H$ and $X$ are propositions. Then it makes sense to say that $Q$ is an unknown probability of a hypothesis $H$, against background knowledge $X$.

Some natural examples:

- The unknown probability $Q$ that a flip of a possibly-trick coin comes up heads: $Q$ is $1/2$ if the coin is fair, and $1$ if it is two-headed.
- The frequency $Q$ of sixes you would see in a great many rolls of a die whose shape you don't know.

I could have simplified that last one considerably by just saying that $Q$ is the probability you would assign to 6, if you knew the shape of the die. In fact, all of these unknown probabilities are really probabilities you would assign if you had some additional piece of information. But before we get there, I'll go through the first example, the coin example, in more formalism than is really necessary. I'd rather be tedious than mysterious.

(By the way, this will already be familiar if you've taken a probability class based on measure theory, where conditional expectations are defined as a certain random variable. Also, the unknown probability is the "$A_p$" from Jaynes's "$A_p$ distribution", and the formula above is really the same as equation 18.1 in his "Probability Theory: The Logic of Science".)

Flipping a Possibly Trick Coin

The formalism is as follows. Each unknown is a function. It maps a possible world to the value of the unknown in that world.

(If you're familiar with the more usual terminology: what I'm calling an "unknown" is a random variable, and what I'm calling a "possible world" is an outcome.)

For the coin example we'll want four possible worlds, which I'll call $\omega_1$ through $\omega_4$. One example of an unknown that I've already mentioned is "Result":

$$\mathrm{Result}(\omega_1) = \text{Heads}, \quad \mathrm{Result}(\omega_2) = \text{Tails}, \quad \mathrm{Result}(\omega_3) = \text{Heads}, \quad \mathrm{Result}(\omega_4) = \text{Heads}$$

It will be more clear what these possible worlds mean when I list all of the unknowns. I'll put this in a tabular form, so that "Result" will be one column of the table.

| | Coin | Side | Result |
|---|---|---|---|
| $\omega_1$ | Fair | 1 | Heads |
| $\omega_2$ | Fair | 2 | Tails |
| $\omega_3$ | Trick | 1 | Heads |
| $\omega_4$ | Trick | 2 | Heads |

Four possible worlds: two coins, times two sides. I added this "Side" unknown to distinguish between the two sides of the trick coin, but I'm not actually going to use it. For the trick coin, either "Side" has the "Result" of Heads.

$Q$ simply depends on Coin. They're both functions of the possible world, though, so we can define it as

$$Q(\omega) = \begin{cases} 1/2 & \text{if } \mathrm{Coin}(\omega) = \text{Fair} \\ 1 & \text{if } \mathrm{Coin}(\omega) = \text{Trick} \end{cases}$$
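Here's a minimal Python sketch of this formalism (the representation of worlds as records is an arbitrary choice for illustration): possible worlds are plain data, and unknowns, including $Q$, are ordinary functions of the world.

```python
from fractions import Fraction

# The four possible worlds from the table: two coins, times two sides.
WORLDS = [
    {"coin": "Fair",  "side": 1, "result": "Heads"},
    {"coin": "Fair",  "side": 2, "result": "Tails"},
    {"coin": "Trick", "side": 1, "result": "Heads"},
    {"coin": "Trick", "side": 2, "result": "Heads"},
]

# An unknown maps a possible world to its value in that world.
def coin(w):
    return w["coin"]

def result(w):
    return w["result"]

# Q, the unknown probability of heads, depends only on Coin.
def Q(w):
    return Fraction(1, 2) if coin(w) == "Fair" else Fraction(1)

print([str(Q(w)) for w in WORLDS])  # ['1/2', '1/2', '1', '1']
```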

You can see from the table that Coin and $Q$ are redundant, intersubstitutable. Their values pick out the same sets of possible worlds. We can use that substitution as follows:

$$P(\mathrm{Result} = \text{Heads} \mid Q = 1/2) = P(\mathrm{Result} = \text{Heads} \mid \mathrm{Coin} = \text{Fair}) = 1/2$$

And likewise,

$$P(\mathrm{Result} = \text{Heads} \mid Q = 1) = P(\mathrm{Result} = \text{Heads} \mid \mathrm{Coin} = \text{Trick}) = 1$$

Taking those two together, we have, for every possible value $q$ of $Q$,

$$P(\mathrm{Result} = \text{Heads} \mid Q = q) = q.$$

So it makes sense to call $Q$ an unknown probability of $\mathrm{Result} = \text{Heads}$, as long as we have no other relevant background knowledge.
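As a sanity check, a short self-contained enumeration (assuming, purely for illustration, a uniform prior over the four worlds, which the post doesn't fix) confirms that conditioning on each possible value of $Q$ gives back that value as the probability of heads:

```python
from fractions import Fraction

# (coin, result, Q) in each of the four equally-weighted possible worlds.
WORLDS = [
    ("Fair",  "Heads", Fraction(1, 2)),
    ("Fair",  "Tails", Fraction(1, 2)),
    ("Trick", "Heads", Fraction(1)),
    ("Trick", "Heads", Fraction(1)),
]

def prob_heads_given_q(q):
    # Condition on Q = q by restricting attention to worlds where Q has that value.
    matching = [w for w in WORLDS if w[2] == q]
    heads = [w for w in matching if w[1] == "Heads"]
    return Fraction(len(heads), len(matching))

for q in (Fraction(1, 2), Fraction(1)):
    assert prob_heads_given_q(q) == q  # P(Result = Heads | Q = q) = q
print("P(Heads | Q = q) = q for every possible value q")
```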

Note that it shouldn't matter if we "expand" the space of possible worlds, for example by having each possible world represent a trajectory of the flip through phase space. We can consider each of these four possible worlds as the output of some lossy function from a richer space of possible worlds. The lost information doesn't affect our analysis because the unknowns of interest can be defined in terms of the more coarse-grained description.

What You Would Believe

Now we can return to the idea that an "unknown probability" is really a probability we would assign, with more information. In the coin flip case, it was the probability we would assign to heads if we knew whether it was the trick or the fair coin being flipped, and if we didn't know anything else relevant, such as the initial conditions of the flip.

Though this notation makes it a little annoying to write this formally, we can do it as follows:

$$Q(\omega) = P(\mathrm{Result} = \text{Heads} \mid Q = Q(\omega))$$

$Q = Q(\omega)$ is a valid proposition because $Q(\omega)$ is simply some particular value that $Q$ can take. This formula says that the "unknown probability" at a world is a conditional probability, with that world's value of the unknown behind the conditional bar. The unknown probability is defined in terms of the probability operator, but it's an unknown like any other: a function of a possible world.

Interestingly, these unknown probabilities are defined in terms of posterior probabilities. That is, you can think of $Q$ above as the posterior probability that you will update to, after learning the value of $Q$. This posterior probability is unknown because $Q$ is unknown. This leads to a statement of conservation of expected evidence [? · GW]:

$$E[Q \mid X] = P(H \mid X)$$

You may have heard it said that the expected posterior is the prior. Naively, that seems like a type error: probabilities are not unknowns, so we can't have expectations of them. But with the concept of an unknown probability, we can take it literally.
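To see the formula work in the coin example, here's a self-contained check, again assuming for illustration a uniform prior over the four worlds (each coin, and each side, equally likely): the expected value of the unknown posterior $Q$ comes out equal to the prior probability of heads.

```python
from fractions import Fraction

# (result, Q) in each of the four equally-weighted possible worlds.
WORLDS = [
    ("Heads", Fraction(1, 2)),  # fair coin, side 1
    ("Tails", Fraction(1, 2)),  # fair coin, side 2
    ("Heads", Fraction(1)),     # trick coin, side 1
    ("Heads", Fraction(1)),     # trick coin, side 2
]
n = len(WORLDS)

# E[Q]: the expected posterior, averaged over possible worlds.
expected_posterior = sum(q for _, q in WORLDS) / n

# P(Heads): the prior probability of heads.
prior = Fraction(sum(1 for r, _ in WORLDS if r == "Heads"), n)

assert expected_posterior == prior == Fraction(3, 4)
print("E[Q] =", expected_posterior, "= P(Heads)")
```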

Exercise for the reader: does this contradict The Principle of Predicted Improvement [LW · GW]? How should the unknown posterior probability in that post be defined?

1 comment


comment by transhumanist_atom_understander · 2025-04-25T13:18:11.513Z · LW(p) · GW(p)

The last formula in this post, the conservation of expected evidence, had a mistake which I've only just now fixed. Since I guess it's not obvious even to me, I'll put a reminder for myself here, which may not be useful to others. Really I'm just "translating" from the "law of iterated expectations" I learned in my stats theory class, which was:

$$E[E[\mathbf{X} \mid \mathbf{Y}]] = E[\mathbf{X}]$$

This is using a notation which is pretty standard for defining conditional expectations. To define it you can first consider the expected value given a particular value $y$ of the random variable $\mathbf{Y}$. Think of that as a function of that particular value:

$$f(y) = E[\mathbf{X} \mid \mathbf{Y} = y]$$

Then we define conditional expectation as a random variable, obtained from plugging in the random value of $\mathbf{Y}$:

$$E[\mathbf{X} \mid \mathbf{Y}] = f(\mathbf{Y})$$

The problem with this notation is it gets confusing which capital letters are random variables and which are propositions, so I've bolded random variables. But it makes it very easy to state the law of iterated expectations.

The law of iterated expectations also holds when "relativized". That is,

$$E[E[\mathbf{X} \mid \mathbf{Y}, X] \mid X] = E[\mathbf{X} \mid X],$$

where $X$ is an event. If we wanted to stick to just putting random variables behind the conditional bar we could have used the indicator function of that event.

And this translates to the statement in my post. $\mathbf{X}$ is an indicator for the event $H$, which makes a conditional expectation of it a conditional probability of $H$. So $E[\mathbf{X} \mid \mathbf{Y}, X]$ is $Q$. Our proposition is the background information $X$; I used the same symbol there. And the right hand side, $E[\mathbf{X} \mid X]$, is another expectation of an indicator and therefore also a probability: $P(H \mid X)$.
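As a toy check of the relativized law, here's a sketch over an arbitrary four-outcome space (the values are made up just to exercise the identity): $\mathbf{X}$ takes 0/1 values, $\mathbf{Y}$ is a label, and a flag marks membership in the conditioning event.

```python
from fractions import Fraction

# Equally likely outcomes: (value of X, value of Y, membership in the event).
OUTCOMES = [
    (1, "a", True),
    (0, "a", True),
    (1, "b", True),
    (0, "b", False),
]

def mean(vals):
    vals = list(vals)
    return sum(vals, Fraction(0)) / len(vals)

def cond_exp(y):
    # E[X | Y = y, event]: average X over outcomes in the event with that Y.
    return mean(x for x, yy, e in OUTCOMES if e and yy == y)

in_event = [(x, y) for x, y, e in OUTCOMES if e]

lhs = mean(cond_exp(y) for _, y in in_event)  # E[ E[X | Y, event] | event ]
rhs = mean(x for x, _ in in_event)            # E[X | event]

assert lhs == rhs
print("both sides equal", lhs)
```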

I really didn't want to define this notation in the post itself, but it's how I'm trained to think of this stuff, so for my own confidence in the final formula I had to write it out this way.