Which Questions Are Anthropic Questions?

post by dadadarren · 2023-08-31T15:15:39.964Z · LW · GW · 13 comments

Contents

  1. The Room Assignment Problem
  2. The Incubator
  3. Incubator + Room Assignment
  Discussion

I will try to keep this short; I just want to use some simple problems to point out what I think is a commonly overlooked point in anthropic discussions.

1. The Room Assignment Problem

You are among 100 people waiting in a hallway. The hallway leads to a hundred rooms numbered from 1 to 100. All of you are knocked out by a sleeping gas and each put into a random/unknown room. After waking up, what is the probability that you are in room No. 1?

This is just an ordinary probability question. All room numbers are symmetrical, so the answer is simply 1%. It is also easy to imagine taking part in similar room-assigning experiments a great number of times; the relative fraction of times you wake up in room No. 1, or any other number, would approach 1%.
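The frequentist reading is easy to check with a quick simulation (a minimal sketch, not part of the original post; the function name is mine):

```python
import random

def room_assignment_frequency(n_people=100, n_trials=100_000, seed=0):
    """Repeatedly assign n_people to n_people rooms at random and track
    how often a fixed person ('you', index 0) lands in room No. 1."""
    rng = random.Random(seed)
    rooms = list(range(1, n_people + 1))
    hits = 0
    for _ in range(n_trials):
        rng.shuffle(rooms)   # rooms[i] is the room assigned to person i
        if rooms[0] == 1:    # did 'you' wake up in room No. 1?
            hits += 1
    return hits / n_trials

print(room_assignment_frequency())  # ≈ 0.01
```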

It is safe to say this is not an anthropic question. Even for people with the metaphysical stance that all probabilities are anthropic, it is undeniable that we have been solving such questions successfully without any anthropic considerations. 

2. The Incubator

An incubator enters the hallway. It will enter room No. 1 and create a person in it, then do the same for the other 99 rooms. It turns out you are one of the people the incubator has just created. You wake up in a room and are made aware of the experiment setup. What is the probability that you are in room No. 1?

While this question may seem trivial, it is an anthropic question. In ordinary problems like Question 1, each event refers to a unique experiment outcome, a distinct possible world. But this incubator problem, objectively speaking, is deterministic. Knowing the exact process from a god's-eye view still leaves uncertainty, because the uncertainty comes from being unsure of your own location in this only world, i.e. which physical person is the subjective self. This kind of same-world event is the crux of anthropic paradoxes. For example, the Sleeping Beauty problem is special because the two awakenings are both in the Tails world.

It is also worth noting that the frequentist model is much trickier to imagine here. While the incubator has no problem performing this job ten thousand times, creating a million people in the process, it seems absurd to say you are one of the people created in each experiment, i.e. that you are ten thousand people. Obviously, this is not a run-of-the-mill question.

3. Incubator + Room Assignment

This time the incubator creates 100 people in the hallway, and you are among the 100 people created. Each person is then assigned to a random room. What is the probability that you are in room No. 1?

This is just an ordinary probability question. In fact, it is Question 1 with some background story of how you got in the hallway before the experiment began. Whether you were there because of the incubator or some other process does not affect the room-assigning process. Different room numbers still reflect unique assignments and different possible worlds.  

Instead of taking which physical person is the self as a given fact, some might want to treat it the same way as in Question 2, implying that besides the room-assignment process, the uncertainty also comes from which physical person the self is. This would lead to undesirable consequences. However, that is not the focus of this post. Even if one endorses this approach, it is undeniable that Questions 2 and 3 are different in nature: the former's uncertainty depends entirely on single-world self-location, while the latter's does not.

Discussion

Almost all anthropic discussions implicitly treat Questions 2 and 3 as the same problem, not worth differentiating. Some (e.g. Nick Bostrom, if I remember correctly) explicitly state that they ought to be regarded as equivalent. But this position is not accompanied by any explanation; it is treated as an intuitive thing to simply accept.

But such an equivalency links anthropic problems with ordinary ones, and it requires justification. One possible justification, which I think is also a major reason why the equivalency seems so intuitive, is that all popular anthropic theories, including both SSA and SIA, as long as they consider the self a random sample, would treat Question 2 the same way as Question 1. From this, it follows that Questions 2 and 3 are equivalent as well.

In reality, however, the argument sometimes runs in the opposite direction: people take the equivalency as an indisputable fact and from there argue for the credibility of common anthropic assumptions. Without an independent justification for the proposed equivalency, this logic is circular.

13 comments

Comments sorted by top scores.

comment by Ape in the coat · 2023-09-01T11:27:08.283Z · LW(p) · GW(p)

What is the justification for treating Questions 1 and 2, or 2 and 3, differently? Why would the fact that 2 is classified as an "anthropic problem", while 1 and 3 are not classified this way, change anything about the way probability theory works?

I feel that justifying this position requires some special metaphysical difference between anthropic and non-anthropic problems, and as a rule of thumb, as soon as we start talking about metaphysics, it's a clear signal that we are just dancing around our own confusion.

I agree that some anthropic problems have meaningful differences from non-anthropic variants, but these differences should have justifications reducible to general probability-theoretic considerations.

Replies from: dadadarren
comment by dadadarren · 2023-09-01T12:36:55.739Z · LW(p) · GW(p)

Anthropic paradoxes happen only when we use events representing different self-locations in the same possible world. If the paradoxes are just problems of probability theory then why this limited scope? 

I do consider anthropic problems, in one sense or another, to be metaphysical. And I know there are people who disagree with this. But wouldn't stipulating anthropic paradoxes are solely probability problems also require arguments to justify? Apart from "a rule of thumb"?

Replies from: Ape in the coat
comment by Ape in the coat · 2023-09-01T13:17:25.541Z · LW(p) · GW(p)

My current hypothesis is that anthropic paradoxes happen when people use probability theory incorrectly, in an inappropriate setting, making incorrect assumptions: mostly assuming things to be randomly sampled when they are not, and ignoring causality and the law of conservation of expected evidence.

But wouldn't stipulating anthropic paradoxes are solely probability problems also require arguments to justify?

Of course. I'm currently finishing a post dedicated to this, among other things. Here is an example from it, which I call Bargain Sleeping Beauty (BSB).

You and another person participate in the experiment. Sadly, the funding is limited, so no amnesia drug is provided. Instead, a coin is tossed. On Heads, one of you will be put to sleep and then awakened. On Tails, both of you will be put to sleep and then awakened in different rooms. You were put to sleep and are now awake. What is the probability that the coin landed Heads?

I claim that here P(Heads|Awakening) = 1/3, despite being a Double Halfer/Halfer in the Classic/Incubator versions correspondingly. And the important difference isn't that BSB isn't an anthropic problem, but that here there actually is a random sample between the two people who would be put to sleep and awakened on Heads. So being awakened is evidence for Tails.
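Since there is a genuine random sample here, the claim can be checked directly by simulation (a sketch under my reading of the setup; the function name is mine):

```python
import random

def bargain_sb_heads_given_awake(n_trials=200_000, seed=0):
    """Bargain Sleeping Beauty: on Heads one of the two participants,
    chosen at random, is awakened; on Tails both are awakened.
    Estimate P(Heads | 'you' were awakened), taking 'you' to be person 0."""
    rng = random.Random(seed)
    awake = heads_and_awake = 0
    for _ in range(n_trials):
        heads = rng.random() < 0.5
        # On Heads, 'you' are awakened only if the random pick lands on you.
        you_awake = rng.randrange(2) == 0 if heads else True
        if you_awake:
            awake += 1
            heads_and_awake += heads
    return heads_and_awake / awake

print(bargain_sb_heads_given_awake())  # ≈ 1/3
```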

And of course there is also this example of an anthropic paradox [LW · GW] which doesn't become less paradoxical when remade into a non-anthropic problem [LW · GW].

Replies from: dadadarren
comment by dadadarren · 2023-09-02T17:52:24.349Z · LW(p) · GW(p)

If we modify the original Sleeping Beauty problem such that, on Heads, you will be awakened on one randomly sampled day (either Monday or Tuesday), would you change your answer to 1/3?

Replies from: Ape in the coat
comment by Ape in the coat · 2023-09-03T06:13:04.532Z · LW(p) · GW(p)

This kind of sampling actually makes Halfism true. You can see that P(Heads|Monday) = 2/3 in this setting, contrary to classical SB where P(Heads|Monday) = 1/2. But the paradox disappears nevertheless.

To make Thirdism true, we need the implicit assumption that awakened states are randomly sampled to actually be true. So the causal process that determines the awakenings shouldn't be based on a coin toss, but on a random generator with three states: 0, 1, 2.

If the generator produces 0, the coin is set to Heads and the Beauty is awakened on Monday. If 1, the coin is set to Tails and the Beauty is also awakened on Monday. And if the generator produces 2, the coin is set to Tails and the Beauty is awakened on Tuesday. Again the paradox disappears, even though the experiment is still as anthropic as ever.
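A sketch of this three-state version (my own code and naming, following the description above); each run produces exactly one awakening, so the conditioning is unambiguous:

```python
import random

def three_state_beauty(n_trials=90_000, seed=0):
    """Three-state generator version of Sleeping Beauty:
    0 -> Heads/Monday, 1 -> Tails/Monday, 2 -> Tails/Tuesday.
    Returns (P(Heads | awakening), P(Heads | Monday awakening))."""
    rng = random.Random(seed)
    heads_total = mondays = heads_on_monday = 0
    for _ in range(n_trials):
        state = rng.randrange(3)
        heads = state == 0
        monday = state in (0, 1)
        heads_total += heads
        if monday:
            mondays += 1
            heads_on_monday += heads
    return heads_total / n_trials, heads_on_monday / mondays

p_awake, p_monday = three_state_beauty()
print(p_awake, p_monday)  # ≈ 1/3 and ≈ 1/2, the thirder answers
```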

Replies from: dadadarren
comment by dadadarren · 2023-09-05T12:34:16.245Z · LW(p) · GW(p)

I don't feel there is enough common ground for effective discussion. This is the first time I have seen the position that the sleeping beauty paradox disappears when the Heads awakening is sampled between Monday and Tuesday. 

Replies from: Ape in the coat
comment by Ape in the coat · 2023-09-05T13:04:38.338Z · LW(p) · GW(p)

Oh, sorry, I misinterpreted you. I thought you meant that the Tails awakening is randomly sampled, not the Heads one. In that case we would have one awakening on Monday on Heads and one awakening on either Monday or Tuesday on Tails, and then, indeed, there is no paradox.

Yeah, as far as I can tell, random sampling on Heads doesn't change anything; it just makes it harder to track the outcomes. You may read my recent post [LW · GW] to better grasp how and what kind of random sampling is relevant to anthropic problems.

comment by ShardPhoenix · 2023-09-01T06:34:41.113Z · LW(p) · GW(p)

Doesn't example 3 show that one and two are actually the same? What difference does it make whether you start inside or outside the room?

Replies from: dadadarren
comment by dadadarren · 2023-09-01T12:13:28.124Z · LW(p) · GW(p)

Like Question 1 and traditional probability problems, Question 3's events reflect different possible worlds: different outcomes of the room-assigning experiment. Question 2's supposed events reflect different locations of the self in the same possible world, i.e. different centred worlds.

Controversial anthropic probability problems occur only when the latter type is used. So there is good reason to think this distinction is significant. 

Replies from: Ape in the coat
comment by Ape in the coat · 2023-09-01T14:04:21.460Z · LW(p) · GW(p)

Hmm. I don't think you require the framework of centered possible worlds for Question 2. Nothing stops us from perceiving the situations as different possible worlds, rather than different places in the same world. There are a hundred independent elementary outcomes ("I am in room X" for X up to 100), so we can define a probability space and satisfy the conditions of Kolmogorov's axioms.

On the other hand, consider classic Sleeping Beauty. Heads and Monday, Tails and Monday, Tails and Tuesday are not three independent outcomes (the last two are causally connected), so normal probability theory is not applicable, and people try to do shenanigans with centered possible worlds.

Replies from: dadadarren
comment by dadadarren · 2023-09-02T18:12:27.970Z · LW(p) · GW(p)

Can you point out the difference: why are Tails and Monday, and Tails and Tuesday causally connected, while the 100 people created by the incubator are independent outcomes instead?

Nothing stops us from perceiving the situations as different possible worlds, rather than different places in the same world.

All this post is trying to argue is that statements like this require some justification. Even if the justification is a mere stipulation, it should at least be recognized as an additional assumption. Given that anthropic problems often lead to controversial paradoxes, it is prudent to examine every assumption we make in solving them.

Replies from: Ape in the coat
comment by Ape in the coat · 2023-09-03T06:38:59.111Z · LW(p) · GW(p)

Can you point out the difference: why are Tails and Monday, and Tails and Tuesday causally connected, while the 100 people created by the incubator are independent outcomes instead?


Sure. Tails and Tuesday always happens after Tails and Monday, to the same person, while each of the hundred people is created in only one room. Here [LW · GW] I've shown how this is a big deal.

There is a general problem with applying probability theory to moments in time due to their connectedness. We can in principle design an experiment to make this connectedness irrelevant. But SB isn't that, because it simultaneously tries to track randomly sampled results of a coin toss and non-randomly-sampled days. When we fix the day, we can meaningfully talk about P(Heads|Monday) and P(Tails|Monday). When we fix the outcome of the coin toss, we can meaningfully talk about P(Monday|Heads) and P(Monday|Tails). But as soon as we try to combine them together... well, then we have to talk about "centered possible worlds", for which we do not actually have a proper mathematical framework, which means we are just unlawfully making things up.

Given that anthropic problems often lead to controversial paradoxes, it is prudent to examine every assumption we make in solving them. 

Totally agree with this point. I just believe that I've already found the source of these paradoxes, and it has to do with wrongly applying probability theory, not with whether the problem is anthropic or not. But yeah, I could be missing something here, and it's important to be prudent with such things.

comment by Q Home · 2023-09-18T13:58:51.719Z · LW(p) · GW(p)

I like how you explain your opinion: very clear and short, basically contained in a single bit of information: "you're not a random sample", or "this equivalence between 2 classes of problems can be wrong".

But I think you should focus on describing the opinion of others (in simple/new ways) too. Otherwise you're just repeating yourself over and over.

If you're interested, I could try helping to write a simplified guide to ideas about anthropics.