Primitive Perspectives and Sleeping Beauty

post by dadadarren · 2022-03-26T01:55:39.460Z · LW · GW · 14 comments

Contents

  The Fission Problem With a Toss
  PBR's Answer 
  Perspective Disagreement
  New Information About First-Person Perspective
  Back to Sleeping Beauty

This is another attempt to promote my solution to anthropic paradoxes (perspective-based reasoning, PBR).

I propose that the first-person perspective should be considered a primitive, axiomatic fact. "I naturally know I am this particular person, yet there is no underlying reason why it is so. I just am." Taking the first-person perspective as given, and recognizing that there is no rational way to analyze it, solves anthropic paradoxes and more.

This is in stark contrast to the conventional approach, which frames anthropics as an Observation Selection Effect (OSE) and treats the first-person perspective as a random sample, as SSA and SIA do. I discussed the main differences in a previous post [LW · GW]. Here I will explain how PBR answers problems like Sleeping Beauty.

The Fission Problem With a Toss

Imagine that during tonight's sleep, an advanced alien will toss a fair coin. If Tails, it will split you into two halves right down the middle, then complete each part by accurately cloning the missing half onto it. By the end there will be two copies of you with memories preserved, indiscernible to human cognition. If Heads, nothing happens and you wake up just as usual. After waking up from this experiment, and not knowing whether you have been split, how should you reason about the probability that "yesterday's coin landed Heads"?

(For ease of communication, call the split copy with the same left-half body as yesterday L, and the copy with the same right-half body R.)

The experiment is set up so that there are two epistemically indistinguishable observers in the case of Tails, but only one in the case of Heads. This could also be achieved by straight cloning without going through the splitting process. I present the experiment this way for consistency, as I used a similar thought experiment in the previous post.

PBR's Answer

The answer is 1/2 because waking up the next day provides no new information. But that is nothing new. What I want to point out is that the probability of 1/2 can be verified by a frequentist model.

Picture yourself participating. After waking up on the second day, it doesn't matter whether I am the original person, L, or R: I can take part in another iteration of the same experiment. After waking up from the second experiment I can do it again, and so on. Since it is a fair coin, as the number of iterations increases, the relative frequency of Heads I experience approaches 1/2. Another copy would exist in the Tails experiments, but that doesn't change the relative frequency of the coin toss for me.

PBR differs from traditional halfers regarding self-locating probability. For example, given the coin toss landed Tails, what is the probability that I am L? Traditional halfers endorsing SSA would treat the first-person perspective as a random sample drawn from the two copies, giving equal probabilities to "I am L" and "I am R". But PBR recognizes there is no good way to explain the first-person perspective. So there is no valid probability value. This can also be verified by the frequentist model.

If I take part in multiple experiments as described above, then among all Tails experiments there is no reason for the relative frequency of "me being L" to converge to any particular value. Obviously, half of all copies are L and the other half are R, so if we consider all copies produced in the experiments, the fraction of L is 1/2. Yet there is no way to say what relative frequency I will experience personally, not without additional assumptions such as treating the first-person perspective as a random sample. (A more detailed discussion of self-locating probability is available in this previous post [LW · GW].)
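This frequentist point can be illustrated with a small simulation (a hypothetical sketch, not from the post; `run` and `p_L` are my own names). The unanalyzable question of "which copy is me" is stood in for by an arbitrary rule `p_L`: the long-run frequency of Heads comes out 1/2 no matter what rule we plug in, while the frequency of "I am L" among Tails is simply whatever the assumed rule dictates.

```python
import random

def run(iterations, p_L, seed=0):
    """Repeat the fission experiment; p_L is an arbitrary rule for the
    probability that, on Tails, the first-person thread continues as L."""
    rng = random.Random(seed)
    heads = tails = i_am_L = 0
    for _ in range(iterations):
        if rng.random() < 0.5:        # Heads: no split, I simply continue
            heads += 1
        else:                         # Tails: split into L and R
            tails += 1
            if rng.random() < p_L:    # the assumption PBR refuses to make
                i_am_L += 1
    return heads / iterations, i_am_L / tails

for p_L in (0.0, 0.5, 1.0):
    f_heads, f_L = run(100_000, p_L)
    # f_heads is always near 1/2; f_L is just whatever p_L we assumed
    print(f"p_L={p_L}: freq(Heads)={f_heads:.3f}, freq(I am L | Tails)={f_L:.3f}")
```

The Heads frequency is insensitive to `p_L`, which is why P(Heads) = 1/2 needs no anthropic assumption, while the "I am L" frequency has no assumption-free value.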

Perspective Disagreement

Suppose the resulting copy/copies of the experiment are put into two separate rooms (1 room will be empty if Heads). A Friend of yours randomly enters one of the two rooms and meets you. You can communicate freely. How should you two reason about the probability that yesterday's coin landed Heads?

For the Friend, one of the two rooms would be empty if the coin landed Heads, while both rooms would be occupied if it landed Tails. Therefore seeing the randomly chosen room occupied triggers a Bayesian update, bringing the probability of Heads to 1/3.

For me, however, it doesn't matter how the coin landed: there is a constant 50% chance of meeting the Friend. Therefore seeing the Friend does not change the probability of the coin toss.
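The two updates can be written out explicitly (a minimal sketch; the variable names are mine):

```python
# The Friend's update on finding the randomly chosen room occupied:
p_heads = 0.5
p_occupied_given_heads = 0.5   # one of the two rooms is empty on Heads
p_occupied_given_tails = 1.0   # both rooms occupied on Tails
friend_posterior = (p_heads * p_occupied_given_heads) / (
    p_heads * p_occupied_given_heads + (1 - p_heads) * p_occupied_given_tails)
print(friend_posterior)  # 1/3

# My update on seeing the Friend: the likelihoods are equal, so they cancel.
p_meet_given_heads = 0.5
p_meet_given_tails = 0.5
my_posterior = (p_heads * p_meet_given_heads) / (
    p_heads * p_meet_given_heads + (1 - p_heads) * p_meet_given_tails)
print(my_posterior)  # 1/2
```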

For thirders, there is nothing noteworthy here. They arrive at the answer of 1/3 through a similar thought process: SIA treats the first person as a random sample from all potential observers, just as the Friend samples the rooms.

But for halfers, this presents a rather peculiar case. While the Friend and I can share whatever information we wish, we still give different answers. This problem was discovered by Katja Grace [LW · GW] and John Pittard about a decade ago. Yet, to my knowledge, traditional halfers do not have a satisfactory explanation.

My approach gives a very straightforward answer to it. The first-person perspective is primitive and cannot be explained, thus incommunicable. To the Friend, he has met a non-specific copy. If the coin landed Tails and there are two copies, it does not matter which one he meets, his analysis would be the same. However, from my perspective, he met someone specific: the first-person me. I can try to share this information with the Friend by saying "It's me you are seeing!". Yet that specification would mean nothing to him.

This disagreement also holds for frequentists. If I take part in, say, 1000 iterations of the experiment, I would experience roughly 500 Heads and 500 Tails. I would also see the Friend about 500 times, roughly 250 times in Heads experiments and 250 in Tails experiments. The relative frequency of Heads among my meetings with the Friend is still 1/2. If the Friend takes part in 1000 experiments, he will have about 750 meetings, of which about 250 will be in Heads experiments. The relative frequency for him is 1/3. The difference arises because the Friend sometimes meets "the other copy" instead of me specifically.
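These counts can be checked by simulation (hypothetical code; `simulate` is my own name, assuming the room setup described above):

```python
import random

def simulate(n, seed=0):
    rng = random.Random(seed)
    my_meetings = my_heads_meetings = 0
    friend_meetings = friend_heads_meetings = 0
    for _ in range(n):
        heads = rng.random() < 0.5
        # The Friend picks one of the two rooms at random; I occupy one of them.
        friend_picks_my_room = rng.random() < 0.5
        if friend_picks_my_room:
            # The Friend meets me, regardless of the toss.
            my_meetings += 1
            friend_meetings += 1
            if heads:
                my_heads_meetings += 1
                friend_heads_meetings += 1
        elif not heads:
            # Tails, and the Friend picked the other room: he meets the other copy.
            friend_meetings += 1
    return my_heads_meetings / my_meetings, friend_heads_meetings / friend_meetings

mine, friends = simulate(200_000)
print(f"Heads fraction among my meetings:       {mine:.3f}")     # near 1/2
print(f"Heads fraction among Friend's meetings: {friends:.3f}")  # near 1/3
```

The same meetings, counted from the two perspectives, yield the two different frequencies, which is the disagreement in question.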

New Information About First-Person Perspective

Suppose you ask the experimenter "Is the left side of my body the same old part from yesterday?" and get a positive answer. How should you reason about the probability of Heads?

Traditional halfers would incorporate this information through a Bayesian update. If Heads, I am the original person, and my left side is guaranteed to be the same as yesterday. If Tails, they assign equal probability to my being L or R. Knowing my left side is the same eliminates the case of R. The probability of Tails is halved while that of Heads is unchanged; renormalizing gives P(Heads) = 2/3.
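The halfer update described here can be sketched in a few lines (hypothetical code, assuming the SSA-style prior just stated):

```python
# SSA-style halfer prior over centered possibilities.
priors = {"Heads-original": 1/2, "Tails-L": 1/4, "Tails-R": 1/4}

# "My left side is the same as yesterday" holds for the original and for L,
# eliminating Tails-R.
consistent = {k: v for k, v in priors.items() if k != "Tails-R"}
posterior_heads = consistent["Heads-original"] / sum(consistent.values())
print(posterior_heads)  # 0.5 / 0.75 = 2/3
```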

(Thirders perform a similar update but with a different prior which gives a probability of 1/2. )

According to PBR, the above Bayesian update is invalid because it requires analyzing what the first-person perspective is. In the case of Tails, there is no proper way to reason about which of the two copies is the first person, so there is no valid probability for "I am L" or "I am R". The subsequent elimination and renormalization therefore have no logical basis.

Again this can be shown with the frequentist model. Repeating the experiment a large number of times would lead me to experience roughly equal numbers of Heads and Tails. However, among the iterations where the coin landed Tails, the relative frequency of "I am L" would not converge to any value. (Half of all copies are L, but that is a statement about all copies, not about the specific first person.) Consequently, among all experiments where "my left side is the same as yesterday", the relative frequency of Heads would not converge to any particular value either.

For example, repeating the experiment 1000 times would give about 500 Heads and 500 Tails. Say I am the copy who is L in 400 of the 500 Tails cases; then the fraction of Heads among experiments where "my left side is the same" would be 500/900 = 5/9. If I am a different physical person, say R in all 500 Tails cases, then the fraction of Heads would be 100% when "my left side is the same". The long-run frequency depends solely on which physical copy I am, and there is no proper way to reason about that. The traditional camps have no trouble generating a value only because they make additional assumptions about what the first-person perspective is, such as regarding it as a selection outcome among all copies.
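A sketch of this non-convergence (hypothetical code; the parameter `p_L` stands in for the unanalyzable fact of which copy I am):

```python
import random

def heads_freq_given_left_same(n, p_L, seed=0):
    """Fraction of Heads among experiments where 'my left side is the same',
    given an arbitrary rule p_L for being L on Tails."""
    rng = random.Random(seed)
    left_same = left_same_heads = 0
    for _ in range(n):
        if rng.random() < 0.5:        # Heads: I am the original, left side same
            left_same += 1
            left_same_heads += 1
        elif rng.random() < p_L:      # Tails and "I am L": left side also same
            left_same += 1
    return left_same_heads / left_same

# Matching the post's examples: being L in 80% of Tails cases gives about 5/9,
# never being L gives 1, and always being L gives 1/2.
for p_L in (0.8, 0.0, 1.0):
    print(p_L, round(heads_freq_given_left_same(200_000, p_L), 3))
```

Every choice of `p_L` yields a different long-run frequency, so without an extra assumption fixing which copy is "me", no single value is privileged.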

So for new information regarding the first-person perspective (the technical term is self-locating information), no Bayesian update can be performed. Such information about which person I am shall be treated as primitively given; there is no way to analyze why it is so. It is now known that I am the one with the same left side as yesterday, and for this particular physical person, the long-run frequency of Heads is still 1/2, consistent with the no-update value.

Back to Sleeping Beauty

Fission with a Toss and the Sleeping Beauty Problem are equivalent in terms of anthropics: each camp (SSA, SIA, PBR) gives the same answer to both problems. For PBR, note that the first-person perspective primitively identifies not only an agent, I, but also a moment, now.

The Sleeping Beauty Problem has its positives and negatives. On one hand, it is a remarkably concise and non-exotic problem that has gathered a lot of attention for anthropics. On the other hand, creating similar epistemic instances through memory erasure can easily lead to misguided intuitions. For example, when attempting to solve it with a frequentist approach, people often assume new iterations take place chronologically in succession, i.e. after Tuesday. Yet this only allows the first-person experience of the last awakenings to accumulate. The correct model is a bifurcating tree, with each iteration taking half the duration of the previous one, so that all experiments happen within the original two days.

Just as in Fission with a Toss, PBR holds that the probability of Heads is 1/2 and remains 1/2 after learning it is Monday. Furthermore, there is no valid probability for self-locating beliefs such as "now is Monday". Double-halfers have been trying to find ways to justify why there should not be a Bayesian update, but all attempts so far have been unsuccessful. Michael Titelbaum has a strong and, in my opinion, conclusive counterargument: he showed that as long as we assign a non-zero probability to "today is Tuesday", double-halving fails. PBR does not suffer from this pitfall, since it assigns no such probability at all.

PBR solves the three major problems faced by halfers all at once: 1. lack of a frequentist model, 2. reason for double-halving, and 3. disagreement between communicating parties. Furthermore, it does not suffer from other paradoxes such as Doomsday Argument or Presumptuous Philosopher.

14 comments

Comments sorted by top scores.

comment by avturchin · 2024-01-27T14:44:00.363Z · LW(p) · GW(p)

An interesting topic is that subjective probabilities can be (or not) path-dependent:
- If we create 3 copies of me by some symmetric process, I can expect that being any of them has an equal chance of 1/3.

- If we create 2 copies, and after that one copy is (symmetrically) copied again, we get 0.5 for the first copy and 0.25 for the second and third copies.

In both cases we have 3 completely similar copies, but we get them by different paths, and this implies different probabilities. Also, if we ignore paths, and select only based on final states of copies, no matter how they are created, we get SSA. 

This thought experiment looks like SB and your Fission with a toss, but both copy-creating situations are the same: just symmetrical copying. 

comment by avturchin · 2024-01-27T14:33:18.258Z · LW(p) · GW(p)

If the person is told that it is Tails, and asked what is the probability that he is L – what should he say? Is it undefined under PBR?

Replies from: dadadarren
comment by dadadarren · 2024-01-29T14:28:24.904Z · LW(p) · GW(p)

Yep, under PBR, perspective—which agent is the "I"—is primitive. I can take it as given, but there is no way to analyze it. In other words, a self-locating probability like "what is the probability that I am L" is undefined.

Replies from: avturchin
comment by avturchin · 2024-01-30T10:49:50.128Z · LW(p) · GW(p)

But can we ask another question: "where am I located?" For example, I know that I am avturchin, but I don't know in which of 10 rooms I am located, and assuming that 9 of them are red outside and 1 is green, I can bet there is a 0.9 chance that I am in a red one. It doesn't matter here whether I am just one person entering the rooms, or there are other people in the rooms (if in equal numbers), or even that my copies are in each room.

Replies from: dadadarren
comment by dadadarren · 2024-01-30T14:56:52.142Z · LW(p) · GW(p)

If one person is created in each room, then there is no probability of "which room I am in", because that is asking "which person am I". To arrive at any probability you need to employ some sort of anthropic assumption.

If 10 persons are randomly assigned (or assigned according to some unknown process), the probability of "which room I am in" exists. No anthropic assumption is needed to answer it.

You can also see the difference using a frequentist model by repeating the experiments. The latter question has a strategy that could maximize "my" personal interest. The former doesn't; it only has a strategy that, if abided by everyone, could maximize the group interest (a coordination strategy).

Replies from: avturchin
comment by avturchin · 2024-01-31T12:42:35.139Z · LW(p) · GW(p)

We can experimentally test this.

I can treat the place I was born as random relative to its latitude, 59N. I ignore everything I know about population distribution and spherical geometry and ask: assuming that I was born in the middle of all latitudes, what is the highest possible latitude? It would be double my latitude, or 118, which is reasonably close to the real answer of 90.

From this I conclude that I can use information about my location as a random sample and use it for some predictions about the things I can't observe. 

Replies from: dadadarren
comment by dadadarren · 2024-01-31T14:18:57.769Z · LW(p) · GW(p)

If you use this logic not for the latitude you were born at but for your birth rank among human beings, then you get the Doomsday Argument.

To me the latitude argument is even more problematic as it involves problems such as linearity. But in any case I am not convinced of this line of reasoning. 

P.S. 59N is really, really high. Anyway, if you use that information to make predictions about where humans are generally born, latitude-wise, it will be way off.

comment by MSRayne · 2022-06-29T20:58:11.372Z · LW(p) · GW(p)

I thought I had a solution to Sleeping Beauty, involving utilities - which I then realized after looking it up is just ata's solution [LW · GW] rediscovered - but then reading this I was enlightened. Decision theory shows us how we ought to behave in order to maximize the expected utility of our future self, but the question of "which self we are" afterward is entirely ill-posed. Very interesting!

comment by Signer · 2022-03-26T10:51:57.074Z · LW(p) · GW(p)

If I take part in say 1000 iterations of the experiment, then I would roughly experience about 500 Heads/Tails. I would also see the Friend about 500 times, with about 250 times in Heads experiments and 250 in Tails experiments.

Wait, which "I" are you talking about here? I forgot how PBR counts perspectives and for "experience about 500 Heads/Tails" I can understand using pre-experiment "I", but why equate it with only one after-experiment perspective?

Replies from: dadadarren
comment by dadadarren · 2022-03-26T23:07:27.848Z · LW(p) · GW(p)

The "I" is primitively defined by the first-person perspective. After waking up from the experiment, you can naturally tell this person is "I". It doesn't matter if there exists another copy physically similar to you. You are not experiencing the world from their perspective.

You can repeat the experiment many times and count your first-person experience. That is the frequentist model. 

Replies from: None
comment by [deleted] · 2023-09-10T12:43:18.175Z · LW(p) · GW(p)

Your claims about the limitations on knowing about consciousness and free will based on the primitivity of perspective seem to me pretty random.

The perspective that we are taking is a primitive, but I don't understand why you connect that with consciousness, given that the perspective is completely independent of any claims about its being conscious. I don't see how to link the two non-arbitrarily; the mechanisms of consciousness exist regardless of the perspective taken. The epistemic limitations come from facts about brains, not from an underlying notion of perspective.

And in the case of free will, there's no reason why we cannot have a third-person account of what we mean by free will. There's no problematic loop.

Replies from: dadadarren
comment by dadadarren · 2023-09-11T14:27:34.662Z · LW(p) · GW(p)

Consciousness has many contending definitions. E.g., if you take the view that consciousness is identified by physical complexity and the ability to process data, then it doesn't have anything to do with perspective. I'm endorsing phenomenal consciousness, as in the hard problem of consciousness: we can describe brain functions purely physically, yet that does not resolve why they are accompanied by subjective feelings. And this "feeling" is entirely first-person; I don't know your feelings, because otherwise I would be you instead of me. "What it means to be a bat is to know what it is like to be a bat."

In short, by suggesting they are irreducible and primitive, my position is incompatible with the physicalist's worldview, in terms of the definition of consciousness, and the nature of perspective. Knowing this might be regarded as a weakness by many, I feel obliged to point it out. 

Replies from: None
comment by [deleted] · 2023-09-11T23:22:58.136Z · LW(p) · GW(p)

But you've generalised your position on perspective beyond conscious beings. My understanding is that perspective is not reducible to non-perspective facts in the theory because the perspective is contingent, but nothing there explicitly refers to consciousness.

You can adopt, mutatis mutandis, a different perspective in the description of a problem and arrive at the right conclusion. There's no appeal to a phenomenal perspective there.

The epistemic limitations of minds that map to the idea of a perspective-centric epistemology and metaphysics come from facts about brains.

Replies from: dadadarren
comment by dadadarren · 2023-09-16T19:23:17.595Z · LW(p) · GW(p)

I didn't "choose" to generalize my position beyond conscious beings; it is an integral part of it. If perspectives are valid only for things that are conscious (however that is defined), then perspective has a prerequisite and is no longer fundamental. It would also give rise to the age-old reference class problem and would no longer be a solution to anthropic paradoxes. E.g., are computer simulations conscious? Answers to that would directly determine anthropic problems such as Nick Bostrom's simulation argument.

Phenomenal consciousness is integral to perspective also in the sense that you know your perspective, i.e. which one is the self, precisely because its subjective experience is most immediate to you. So when a subject wakes up in the fission experiment, they know which person "I" refers to even though they cannot point that person out on a map.

My argument is in direct conflict with physicalism. And it places phenomenal consciousness and subjective experience outside the field of physics.