The randomness/ignorance model solves many anthropic problems
post by Rafael Harth (sil-ver) · 2019-11-11T17:02:33.496Z · LW · GW
(Follow-up to Randomness vs Ignorance [LW · GW] and Reference Classes for Randomness [LW · GW])
I've argued that all uncertainty can be divided into randomness and ignorance and that this model is free of contradictions. Its purpose is to resolve anthropic puzzles such as the Sleeping Beauty problem.
If the model is applied to these problems, they appear to be underspecified: details required to categorize the relevant uncertainty are missing, and this underspecification might explain why there is still no consensus on the correct answers. However, if the missing pieces are added in such a way that all uncertainty can be categorized as randomness, the model does give an answer. Doing this doesn't just solve a variant of each problem; it also highlights the parts that make these problems distinct from each other.
I'll go through two examples to demonstrate this. The underlying principles are simple, and the model can be applied to every anthropic problem I know of.
1. Sleeping Beauty [LW · GW]
In the original problem, a coin is thrown at the beginning to decide between the one-interview and the two-interview version of the experiment. In our variation, we will instead repeat the experiment $2n$ times and have $n$ of those run the one-interview version, and another $n$ run the two-interview version. Sleeping Beauty knows this but isn't being told which version she's currently participating in. This leads to $2n$ instances of Sleeping Beauty waking up on Monday, and $n$ instances of her waking up on Tuesday. All instances fall into the same reference class, because there is no information available to tell them apart. Thus, Sleeping Beauty's uncertainty about the current day is random with probability $\frac{2n}{3n} = \frac{2}{3}$ for Monday.
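To make the counting explicit, here is a minimal sketch of the tally (my own illustration, not from the original post; the value of `n` is an arbitrary choice, since it cancels out of the ratio):

```python
# Tally awakenings across 2n repetitions of the experiment:
# n runs of the one-interview version (Monday only) and
# n runs of the two-interview version (Monday and Tuesday).
n = 1000

monday = n + n   # every run, in both versions, has a Monday awakening
tuesday = n      # only the two-interview runs have a Tuesday awakening

# All 3n awakenings are subjectively indistinguishable, so they form
# a single reference class; the probability of "today is Monday" is
# the fraction of class members that are Monday awakenings.
p_monday = monday / (monday + tuesday)
print(p_monday)  # 2n / 3n = 0.666...
```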
2. Presumptuous Philosopher [LW · GW]
In the original problem, the debate is about whether the fact that a large universe contains more observers should raise the probability that the universe is large, but it is unspecified whether our current universe is the only universe.
Let's fill in the blanks. Suppose there is one universe at the base of reality which runs many simulations, one of them being ours. The simulated universes can't run simulations themselves, so there are only two layers. Exactly half of its simulations are of "small" universes (say with $10^{12}$ people each), and the other half are of "large" universes (say with $10^{24}$ people each). All universes look identical from the inside.
Once again, there is only one reference class. Since there is an equal number of small and large universes, exactly $10^{24}$ out of every $10^{24} + 10^{12}$ members of the class are located in large universes. If we know all this, then (unlike in the original problem) our uncertainty about which universe we live in is clearly random with probability $\frac{10^{24}}{10^{24} + 10^{12}}$, i.e. $\approx 1$, for the universe being large.
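The same head-count can be sketched in a few lines (again my own illustration; the number of simulations and the population figures are the assumed values from the setup above):

```python
# One base universe runs k simulations of each kind; every simulated
# person belongs to the same reference class. The population figures
# are the assumed ones from the setup above.
k = 10                 # simulations of each size (any positive value works)
small_pop = 10**12     # people in each "small" universe (assumed)
large_pop = 10**24     # people in each "large" universe (assumed)

in_large = k * large_pop
total = k * (small_pop + large_pop)

p_large = in_large / total
print(p_large)  # ~0.999999999999 -- "large" is near-certain
```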
Bostrom came up with the Presumptuous Philosopher problem as an argument against SIA (which is one of the two main anthropic theories, and the one which answers $\frac{1}{3}$ on Sleeping Beauty). Notice how it is about the size of the universe, i.e. something that might never be repeated, where the answer might always be the same. This is no coincidence. SIA tends to align with the randomness/ignorance model whenever all uncertainty collapses into randomness, and to diverge whenever it doesn't. Naturally, the way to construct a thought experiment where SIA appears to be overconfident is to make it so the relevant uncertainty might plausibly be ignorance. This is an example of how I believe the randomness/ignorance model adds to our understanding of these problems.
So far I haven't talked about how the model computes probability if the relevant uncertainty is ignorance. In fact it behaves like SSA (rather than SIA), but the argument is lengthy. For now, simply assume it's agnostic.
5 comments
comment by avturchin · 2019-11-11T18:03:20.655Z · LW(p) · GW(p)
If the universe is infinite and has all possible things, then most of ignorance becomes randomness?
↑ comment by Rafael Harth (sil-ver) · 2019-11-11T18:15:19.967Z · LW(p) · GW(p)
Yes. It's much harder to find clear cases for ignorance than to find clear cases for randomness. That makes it telling that the P/P problem is about a case where it's at least arguable.
comment by Dagon · 2019-11-11T19:28:20.580Z · LW(p) · GW(p)
I don't think you need to claim that there are different kinds of uncertainty to solve these. If you clearly specify what predicted experiences/outcomes you're applying the probability to, both of these examples dissolve.
"Will you remember an awakening" has a different answer than "how many awakenings will be reported to you by an observer". Uncertainty about these are the same: ignorance.
↑ comment by Rafael Harth (sil-ver) · 2019-11-12T19:34:45.926Z · LW(p) · GW(p)
> I don't think you need to claim that there are different kinds of uncertainty to solve these. If you clearly specify what predicted experiences/outcomes you're applying the probability to, both of these examples dissolve.
This implies that everyone arguing about the correct probability in Sleeping Beauty is misguided, right?
I definitely think it is essential to differentiate between the two. I think there are cases where the question is the same and meaningful but the answer changes as the nature of uncertainty changes. Presumptuous Philosopher is such a case.
I argue in more detail that the results of this model are meaningful in the next post.