# Anthropic probabilities: answering different questions

post by Stuart_Armstrong · 2019-01-14T18:50:56.086Z · score: 21 (8 votes) · LW · GW · 1 comments

## Contents

  Frequentist anthropic probabilities
    The issues with the questions
  The advantages of decision theory


What is the answer to the question of anthropic probabilities? I've claimed before that there was no answer - anthropic decision theory (ADT) was the only way to go.

But actually, there are answers - the problem is simply that there are multiple versions of "the question of anthropic probabilities", each with their own answer. And what decision theory did was unambiguously select the right question (and the right answer) for the job.

# Frequentist anthropic probabilities

It is much easier for humans to think in terms of frequencies, and anthropic probabilities are no exception. So imagine that either a small universe (with one copy of you) or a large universe (with N copies of you) is created, with equal probability. Then your copies will independently observe a long sequence of random bits, long enough that different copies will almost certainly observe different sequences. After that, the universe ends, and the whole experiment begins again, with a small or large universe being created again. This experiment will then be repeated a very large number of times, so we can coherently talk about frequencies.

Then there are three questions you might ask yourself during these experiments:

1. What proportion of my versions will be in a large universe?
2. What proportion of universes, where versions of me exist, will be large?
3. What proportion of universes, where exact copies of me exist, will be large?

In the limit as these experiments are run a large number of times, the answers to these questions will converge on N/(N+1) (where N is the number of copies in the large universe), 1/2, and "it depends on how many random bits you have seen when you ask the question". In other words, the probabilities given by SIA, SSA, and FNC.
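These limits can be checked with a quick Monte Carlo sketch. None of the specific numbers below are from the post (three copies in the large universe, eight observed bits, and all function and variable names are illustrative choices): with N copies in the large universe, question 1's proportion converges to N/(N+1), question 2's to 1/2, and question 3's answer climbs from 1/2 towards N/(N+1) as the copies observe more bits.

```python
import random

def simulate(trials=200_000, n_large=3, n_bits=8, seed=0):
    """Monte Carlo sketch of the three frequency questions.

    Each trial flips a fair coin to create either a small universe
    (1 copy of "me") or a large one (n_large copies); every copy
    then observes n_bits independent random bits.
    """
    rng = random.Random(seed)
    my_bits = tuple(rng.getrandbits(1) for _ in range(n_bits))  # the bits "I" happen to see

    copies_in_large = total_copies = 0
    large_universes = 0
    large_with_exact_copy = universes_with_exact_copy = 0

    for _ in range(trials):
        large = rng.random() < 0.5
        n_copies = n_large if large else 1
        total_copies += n_copies
        if large:
            copies_in_large += n_copies
            large_universes += 1
        # Question 3 (FNC): does some copy in this universe see exactly my bits?
        if any(tuple(rng.getrandbits(1) for _ in range(n_bits)) == my_bits
               for _ in range(n_copies)):
            universes_with_exact_copy += 1
            if large:
                large_with_exact_copy += 1

    q1 = copies_in_large / total_copies  # SIA: -> n_large / (n_large + 1)
    q2 = large_universes / trials        # SSA: -> 1/2
    q3 = large_with_exact_copy / max(universes_with_exact_copy, 1)  # FNC: depends on n_bits
    return q1, q2, q3
```

With `n_bits=0` every copy trivially counts as an exact copy of you, so question 3 collapses to question 2's answer of 1/2; with many bits, exact copies are rare, and its answer climbs towards n_large/(n_large + 1).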

Notice there is a fourth question that we could ask to complete the three:

4. What proportion of my exact copies will be in a large universe?

But this question will also converge to the same answer as question 1, ie the SIA answer, showing how SIA is independent of the reference class as long as the reference class contains at least the exact copies.

## The issues with the questions

All three questions are well-posed questions with exact and correct answers. From outside, however, there are issues with all three.

Question 2 has the perennial "reference class problem": what are you counting as "versions of me"? If we change the reference class, we change the question, and therefore it's not surprising that it gives a different answer.

Question 3 has the same time inconsistency that FNC has: the answer will be (predictably) different at different times, in a way that breaks the rule that current probabilities equal the expectation of future probabilities. For instance, before observing any bits, FNC agrees with SSA's 1/2; but it is already predictable that, after enough bits have been observed, it will be close to the SIA answer, whatever those bits turn out to be. Again, each question is individually sound, but "exact copies of me" means different things at different times.

Question 1 has a similar time inconsistency issue when the number of identical copies changes predictably but differentially across time.

Aside from that, questions 1 and 3 are often the wrong questions to ask in decision theory. Non-identical versions can timelessly cooperate with you; identical copies may be totally opposed to you.

# The advantages of decision theory

Why does decision theory perform well in anthropic contexts, giving a single decision even when there are multiple anthropic probability questions? Simply because it unambiguously selects the question that is relevant to the decision at hand. Average utilitarians maximise their score by figuring out which universe they are in; total utilitarians, by figuring out where most of the copies are. The whole process of ADT/UDT decision-making computes a specific reference class: the class of all decisions correlated with your own. By automatically including precommitments, ADT/UDT resolves the fact that the class of "exact copies of me" keeps on changing. And by being explicitly a decision theory, it resolves the issue of cooperation and non-cooperation between identical and non-identical agents.

So, back when I thought "anthropic probabilities" were a single question with a single answer, the fact that ADT/UDT gave a single answer (albeit a decision one rather than a probability one) convinced me that anthropic decisions were true while anthropic probabilities were not.

But now that I've realised that there are multiple anthropic probability questions (and also that all the "paradoxes" of anthropic probabilities have non-paradoxical decision theory analogues), I'm fully content to say:

• "Yes Virginia, anthropic probabilities exist, and different anthropic probabilities are answering different anthropic questions."

Incidentally, there are far more than three questions - each of these questions can be different, depending on what time it is asked. So I'd also conclude:

• The reason anthropic probability is so debated is that none of the anthropic questions is a simple, stable question that captures an intuitive understanding of what anthropic probability actually is.