Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems?

post by dadadarren · 2019-08-01T20:10:46.445Z · LW · GW · 32 comments

Contents

  Preamble
  The Disagreement
  Repeating the Experiment as the Friend vs as the Participant
  Who is in this Meeting
  Can Indexicals Be Used In Anthropics?
  Impartial Observer vs the First Person
  Paradoxes and Mixed Reasoning
  Sleeping Beauty Paradox and Conclusion

Preamble

This post discusses why any halfer position in the Sleeping Beauty Problem leads to disagreements between two agents sharing all information. This issue has not been much discussed except by @Katja Grace [LW · GW] and John Pittard. Furthermore, I will explain why these seemingly absurd disagreements are actually valid. This post is another attempt of mine to draw attention to the important difference between reasoning as the first person and reasoning as an impartial observer in anthropic problems.

The Disagreement

To show that any halfer position leads to disagreements between two communicating agents, consider this problem:

Bring a Friend: You and a friend of yours are participating in a cloning experiment. After you fall asleep the experimenter will toss a fair coin. If it lands Heads nothing happens. If it lands Tails you will be cloned, and the clone will be put into an identical room. The cloning process is highly accurate: it retains memory to a level of fidelity that is humanly indistinguishable. As a result, after waking up the next morning there is no way to tell whether you are physically the original or the clone. Your friend would not be cloned in any case. The next morning she chooses one of the two rooms to enter. Suppose your friend enters your room. How should she reason about the probability of Heads? How should you reason?

For the friend this is not an anthropic problem, so her answer shouldn't be controversial. If the coin landed Heads she has a 50% chance of seeing an occupied room, while if the coin landed Tails both rooms would be occupied. Therefore seeing me in the room is evidence favouring Tails. She would update the probability of Heads to 1/3.
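For concreteness, her update is just Bayes' theorem with the prior P(Heads) = 1/2 (a sketch of the standard calculation, not spelled out in the original post): since under Heads only one of the two rooms is occupied and under Tails both are,

P(Heads | room occupied) = (1/2 · 1/2) / (1/2 · 1/2 + 1 · 1/2) = (1/4) / (3/4) = 1/3.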

From my (the participant’s) perspective this is a classic anthropic problem, just like Sleeping Beauty, and there are two camps. Halfers would say the probability of Heads is 1/2, the reason being that I knew I would find myself in this situation; therefore I have gained no new information about the coin toss, and the probability must remain unchanged. The other camp says the probability of Heads should actually be 1/3. Most thirders argue that I have gained the new information that I exist, which is evidence favouring more copies of the participant existing. Therefore the probability of Tails shall increase from the prior.

Both camps should agree that seeing the friend (or not) would not change their answer, because the friend is simply choosing one room out of the two. Regardless of the coin toss result there is always a 50% chance for her to enter my room.

Now halfers are in a peculiar situation. My probability of Heads is 1/2 while the friend's answer is 1/3. We can share our information and communicate any way we like; nothing I say can change her answer, and nothing she says can change mine. To make the matter even more interesting, I would have to admit there is no mistake in my friend's reasoning, and she would think I am correct too. Our difference is purely due to the difference in perspectives. This seems to contradict Aumann's Agreement Theorem.

Thirders do not have any of these problems. The friend and I would be in perfect agreement that the probability is 1/3. As a result this issue is occasionally used as a counter to halferism (as Katja Grace did [LW · GW]; even though her post targets SSA specifically, her argument applies to all halfers). However, I would like to argue that these disagreements are indeed valid.

Repeating the Experiment as the Friend vs as the Participant

Re-experiencing the experiment as the friend is not the same as re-experiencing it as a participant. From the friend's perspective repeating it is straightforward: let another coin toss and potential cloning happen to someone else, then choose a random room again. It is easy to see that if the number of repetitions is large she would see the chosen room occupied about 3/4 of the time, and about 1/3 of those meetings would be after Heads. The relative frequency agrees with her answer of 1/3.

To repeat the experiment from my first-person perspective is a different story. After waking up from the first experiment (and potentially meeting my friend) I would simply participate in the same process again: I would fall asleep and let another coin toss and potential cloning take place. I would wake up again, not knowing whether I am the same physical person as the day before. Suppose I am told the coin toss result at the end of each day. If this is repeated a large number of times, I would count about 1/2 of the awakenings as following Heads. I would also meet my friend about 1/2 of the time, with roughly equal numbers after Heads and after Tails. My relative frequency of Heads would be 1/2, agreeing with my answer.
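Both repetition procedures are easy to check numerically. Below is a minimal Monte Carlo sketch (my own illustration; the function names and structure are not from the post). The participant-side code encodes the assumption argued above: whatever the coin shows, the friend enters my room with probability 1/2.

```python
import random

def friend_perspective(trials=100_000):
    """The friend watches a fresh experiment each time and picks a random room."""
    occupied = heads_and_occupied = 0
    for _ in range(trials):
        heads = random.random() < 0.5                     # fair coin
        rooms = [True, False] if heads else [True, True]  # which rooms are occupied
        if rooms[random.randrange(2)]:                    # she enters one at random
            occupied += 1
            heads_and_occupied += heads
    print("room occupied:", occupied / trials)                     # ~3/4
    print("Heads given occupied:", heads_and_occupied / occupied)  # ~1/3

def participant_perspective(trials=100_000):
    """Follow one first-person thread through repeated tosses and clonings."""
    heads = meetings = 0
    for _ in range(trials):
        heads += random.random() < 0.5     # I wake up either way
        meetings += random.random() < 0.5  # friend picks my room half the time
    print("Heads frequency:", heads / trials)       # ~1/2
    print("meeting frequency:", meetings / trials)  # ~1/2

friend_perspective()
participant_perspective()
```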

In tosses involving both my friend and me, after Tails she may see the other copy of the participant instead of me specifically. This causes our difference in answers, which in turn leads to our different interpretations of:

Who is in this Meeting

From the friend's perspective, choosing a random room has two possible outcomes: either the chosen room is empty or it is occupied. The new information she receives is simply that there is someone in the room. She interprets that person as an unspecified participant.

From my perspective, on the other hand, the person in the room is specific, i.e. me. The possible outcomes of the room selection are either I see the friend or I do not see the friend. There may or may not exist another version of the participant who is highly similar to me, but that would not affect my observation.

Effectively we are answering two different questions. For my friend: “what is the probability of Heads given there is a version of the participant in the chosen room?” For me: “what is the probability of Heads given I specifically am in the chosen room?” In this sense the disagreement does not violate Aumann's Agreement Theorem.

A point to note: the specification of I is limited to my first-person perspective. It is incommunicable. I can keep telling my friend “It's me”, yet it carries no information to her, because this specification has nothing to do with objective differences between me and other copies of the participant. I refer to this person as me only because I am experiencing the world from its perspective, because this person is undeniably most immediate to the only subjective experience and consciousness accessible. Identifying me is primitive. Which raises the question:

Can Indexicals Be Used In Anthropics?

Indexicals (or pure indexicals, by some definitions) such as I, here, now and by extension we, today or even this world are a point of contention between halfers and thirders. Typically thirders think indexicals can be used in anthropic reasoning while halfers disagree. In the Sleeping Beauty Problem this conflict manifests in the debate over new information. Most thirders think there is new information, since I learned that I am awake on one specific day, i.e. today. Halfers typically argue the indexical today is not an objective specification, and all that can be said is that I am awake on some unknown day, i.e. there is at least one awakening. In the cloning example above, thirders think the new information is that a specific participant, I, exists, whereas halfers often argue that objectively speaking it only shows at least one participant exists.

The disagreement between participant and friend suggests another explanation for the indexicals. Indexicals are references to the perspective center. When someone says here he is talking about the location from which he is experiencing the world, the place most immediate to the subjective experience. Similarly, I and now point to other aspects of the perspective center: they refer to the agent and the time most immediate to the subjective experience, respectively. Because it relates only to the perspective, a participant can intrinsically identify I and never confuse himself with others. He can do this without knowing any difference between himself and other highly similar participants. Also because of this dependency on perspective, the identification is not meaningful to the friend, which leads to their disagreement.

By this logic the use of indexicals in anthropic problems is valid, as long as we are reasoning from the first-person perspective of a participant. The debate over their usage is a debate between perspectives. When thirders say today is a specific day, it requires us to imagine being in beauty's shoes waking up in the experiment. Halfers oppose today's use because they are reasoning as an outsider, in which case some objective measure is required to differentiate the two days. This shows the conflicting logics between an impartial observer and the first person.

Impartial Observer vs the First Person

We often purposely formulate our logic so that it is not from any specific perspective, as if the perspective center were irrelevant to the problem at hand, i.e. we reason as (imaginary) impartial observers would. This uncentered reasoning does not treat any agent, time or location as inherently special. It is what we usually mean by thinking objectively. Compared to first-person reasoning it differs in several respects.

The obvious difference is the aforementioned use of indexicals. Impartial observers' uncentered reasoning cannot use indexicals, since they are references to the perspective center. For most problems this simply means substituting a third-person identity for the indexical I in logical expressions; similarly, now and here are switched to some objectively identified time and location. For anthropic problems, however, this has further implications, because the ability to inherently identify oneself affects reasoning, as shown by the disagreement between the participant and the friend.

Another difference concerns one's uniqueness. The indexicals, as references to the perspective center, are inherently special. From the first-person perspective I am one of a kind; other agents, no matter how physically similar, are not its logical equals. This explains why, as the first person, I can be identified without knowing any difference between I and others: the differentiation is not needed because they are never in the same category to begin with. The same is true for now and here. For an impartial outsider, on the other hand, no agent, time or location is inherently special, i.e. they are indifferent.

The last difference is the probability of existence. The existence of I is a logical truth, because to use the indexicals one has to reason from the first-person perspective, and reasoning from that perspective can only conclude its own existence, i.e. “I think, therefore I am”. It is sometimes presented as “I can only find myself existing”. Furthermore, as long as one reasons from a consistent perspective, “I am, here, now” will always be true, because these indexicals refer to different aspects of the same perspective center. On the other hand, we can also reason as impartial observers and specify a participant or a time by some third-person identifiable measure. In that case it is entirely possible that the agent does not exist, or is not conscious at the specified time. E.g. in the previous cloning example we can identify a participant as the one in the chosen room; it is possible that he does not exist, since the chosen room can potentially be empty. In summary, it takes an outsider's perspective to think about someone's nonexistence or unconsciousness.

Given these differences, logics from the first person and from impartial outsiders should not be mixed in anthropic problems. However, most arguments in this field pay no attention to these distinctions. It is my core argument that anthropic paradoxes are caused by arbitrarily switching perspectives in reasoning, mixing the conflicting logics.

Paradoxes and Mixed Reasoning

To recap: the first-person perspective is centered. It can use indexicals because I, here and now are inherently special compared to other agents, locations or times, and “I exist, here and now” is a logical truth. The impartial observers' perspective, on the other hand, is uncentered. Indexicals cannot be used, because impartial observers are indifferent to any agent, time or location, and the existence of any specific agent at any time or location is not guaranteed. Within a single logical framework we can employ either perspective, but not both. If an argument mixes the two, paradoxes ensue.

Take the Doomsday Argument as an example. It suggests we should take a more pessimistic outlook on humanity's future than observed evidence suggests. The argument is simple. First, it recognizes a principle of indifference among all human beings (past, present and future alike). Then it specifically considers my birth rank among all humans (sometimes expressed as our birth rank, or that of the current generation). As a result it concludes that I am more likely to have my particular birth rank if there are fewer people in total, i.e. doom soon is more likely. This is a classic case of mixed perspectives. On one hand it treats all human beings indifferently, as an impartial outsider would; yet at the same time it uses indexicals, employing a first-person perspective and taking a special interest in my (or by extension our) birth rank. Only by mixing the two can it enable the conditional update that shifts the probability toward doom soon. If we reason as the first person and identify the indexical I by treating it as inherently special, then the principle of indifference among all humans no longer applies. Similarly, if we reason as an impartial outsider and recognize the principle of indifference, then there is no reason to consider my birth rank specifically; in fact there is no way to identify I to begin with. Either way, the outlook of mankind can only be estimated from observed evidence. The probability shift is false.

Interestingly, on some level we realize the inherently apparent I and the principle of indifference among all humans are not logically consistent. To reconcile this conflict a conscious step is often added: anthropic assumptions, which suggest treating I as a randomly selected individual among indifferent agents. Even though there is no justification for such assumptions, accepting them feels natural, because they allow two highly intuitive ideas to coexist. However, those two ideas are based on different perspectives, which should be kept separate to begin with.

An example is the Self-Sampling Assumption (SSA). It suggests we should reason as if I am randomly selected from all actually existing (past, present or future) observers. This leads to the infamous Doomsday Argument. An alternative to the SSA is the Self-Indication Assumption (SIA). It suggests we should reason as if I am randomly selected from all potentially existing observers. While it refutes the Doomsday Argument, it has its own paradox: the Presumptuous Philosopher. (It concludes that the number of intelligent life-forms in the universe should be higher than observed evidence suggests, because the fact that I exist is evidence favouring more observers.) The debate between SSA and SIA is about the correct reference class of I: whether it should be all existing or all possible observers. Yet if the perspective reasonings are not mixed, this problem never arises in the first place. There is no default reference class for I, since right from the start it is never in the same category as other observers, be they actual or potential.

No default reference class also means any notion of a probability distribution over me being a particular member of said reference class is false. Such probabilities do not exist. Consider the paradox of the Boltzmann Brain. Some arguments suggest that under current theories of the universe Boltzmann brains would vastly outnumber human brains. Then the probability of me being a Boltzmann brain is almost 100%. Essential to this calculation is a principle of indifference among all brains, which is valid when reasoning as an impartial observer. Yet it also specifically considers the first-person center I, which contradicts that indifference. As a result, the probability it is trying to calculate is logically inconsistent to begin with; there is no answer to it. Instead of using the indexical I, the brain in question should be specified from the impartial observers' perspective. E.g. a randomly selected one among all brains would almost certainly be a Boltzmann brain. That calculation would be correct, but also far less interesting. The same principle also refutes Nick Bostrom's Simulation Hypothesis.

The non-existence of such probabilities can also be shown by the frequentist interpretation. Recall that in the cloning example I (the participant) can re-experience the experiment as the first person. From my perspective, after taking part in a large number of iterations, the relative frequencies of Heads and of seeing my friend would both approach a definite value (1/2). However, there is no reason for the relative frequency of me being the clone (or the original) in each experiment to converge toward any particular value. This again suggests such probabilities do not exist. Instead of using indexicals, a participant must be specified from the impartial observers' perspective. Only then is it valid to ask the probability of that individual being the original or the clone. E.g. the probability that the participant in the chosen room (if he exists) is the original is valid: a relative frequency can be calculated by an outsider without having to take a participant's first-person perspective.

Sleeping Beauty Paradox and Conclusion

The Sleeping Beauty Paradox is without a doubt the most debated problem in anthropic reasoning. Nonetheless the same principle applies. The answer can be derived either from beauty's first-person perspective or from the impartial observers' perspective. From the first-person perspective I have gained no new information. I did find myself awake today specifically, yet that is just a logical truth in first-person reasoning: even before falling asleep on Sunday it was already known that I would wake up in the experiment and identify that day as today. The probability of Heads remains at 1/2. From the impartial observers' perspective there is no new information either. While beauty being awake on a specific day is not guaranteed from this perspective, an impartial observer cannot use beauty's perspective center to specify today. So all that is known is that there is an unspecified awakening, i.e. there is at least one awakening. The probability of Heads should remain at 1/2 as well.

More importantly “the probability of today being Monday”, or “the probability of this awakening being the first” do not exist. These expressions place an indexical in some default reference class (actual or potential awakenings), which is inconsistent. No Bayesian updating shall be performed after learning “Today is Monday”. The probability of Heads is 1/2 at awakening and remains 1/2 after beauty finds out it is Monday.

In conclusion, perspectives play a significant role in anthropic problems. Different perspectives can give completely different answers. Most notably, the special interest in the perspective center of the first person and the general indifference of impartial observers are not compatible. Reasoning from these two perspectives must be kept separate to avoid paradoxes.

32 comments

Comments sorted by top scores.

comment by cousin_it · 2019-08-03T16:56:56.479Z · LW(p) · GW(p)

Thanks for writing this! To recap:

  1. Alice goes to sleep and a coin is flipped. Heads: wake up on both day 1 and day 2 with amnesia. Tails: wake up only on day 1.

  2. Bob goes to sleep and another coin is flipped. Heads: wake up on day 1. Tails: wake up on day 2.

  3. If Alice and Bob are awake on the same day, they meet and talk.

Now if Alice and Bob do meet, then Bob believes Alice's coin came up heads with probability 2/3. If Alice is a thirder, she agrees. But if Alice is a halfer, they have an unresolvable disagreement.
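A quick sketch to check Bob's number (my own code, not from the comment), conditioning on the rounds where they actually meet:

```python
import random

def simulate(trials=200_000):
    meets = heads_and_meet = 0
    for _ in range(trials):
        alice_heads = random.random() < 0.5          # Heads: Alice wakes day 1 and 2
        bob_day = 1 if random.random() < 0.5 else 2  # Bob's coin picks his one day
        alice_days = {1, 2} if alice_heads else {1}
        if bob_day in alice_days:                    # they meet and talk
            meets += 1
            heads_and_meet += alice_heads
    print("P(meet):", meets / trials)                        # ~3/4
    print("P(Alice heads | meet):", heads_and_meet / meets)  # ~2/3

simulate()
```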

Here's another thought experiment I came up with sometime ago (first invented by Cian Dorr, I think):

  1. Alice goes to sleep and a coin is flipped. Heads: wake up on both day 1 and day 2 with amnesia. Tails: wake up only on day 1. Then she's told the coinflip result and goes home.

  2. In case of tails, when Alice gets home, she sets up her room to look the same as the experiment. Then she writes herself a note that she's not actually in the experiment, takes an amnesia pill, and goes to sleep.

Now Alice's situation is symmetric: in case of both heads and tails she wakes up twice. In case of tails, with probability 1/2 she finds the note and learns that she's not in the experiment. So if she doesn't find the note, she updates to 2/3 probability of heads.
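Spelling out that update under the symmetric counting the argument relies on (my own rendering, treating the four indistinguishable awakenings - Heads/day 1, Heads/day 2, Tails/in-experiment, Tails/at-home-with-note - as equally likely at 1/4 each):

P(Heads | no note) = (1/4 + 1/4) / (1/4 + 1/4 + 1/4) = 2/3.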

Taken together, these experiments show that thirdism is robust both to perspective change and to giving yourself mini-amnesia on a whim. I don't know any such nice properties for halfism. So I'm gonna jump the gun and say thirdism is most likely just true.

Replies from: dadadarren, Ape in the coat
comment by dadadarren · 2019-08-04T03:28:59.481Z · LW(p) · GW(p)

Obviously I don't agree but I respect your judgment.

I agree with your first example. It is equivalent to the cloning-with-a-friend experiment. (I'm sorry, but I'm so used to the Heads: 1 awakening, Tails: 2 awakenings setup, as most of the literature sets it up that way. I know it is reversed in your example, but for the sake of consistency I will still discuss it this way. Please forgive my stubbornness.) In that setup Alice and Bob would come into disagreement as long as Alice is a halfer, whatever her reasons. I can understand if you treat this as evidence against halferism. At the end of the day, I have to admit this is very peculiar. Nonetheless, what I did was try to explain why this disagreement is valid. The reason I used a cloning example instead of the original memory-wipe example is that it makes the exposition much easier. But I would like to take this opportunity to apply the same argument to explain the disagreement in a memory-wipe setup.

Frequentist reason: repeating the experiment from a participant's perspective is different from repeating it from an observer's perspective. While this is easy to show in the cloning example, it is messier for memory wipes. The SBP essentially, in the case of Tails, divides the total duration of the experiment (2 days) into 2 halves with a memory wipe, so there are 2 subjectively indistinguishable instances. For Alice, repetitions must have the same structure, yet prior iterations should not affect later ones. So each subsequent experiment must be shorter in duration: if the first experiment takes 2 days, then the second can only take 1 day, the third half a day, the fourth a quarter of a day, etc. This way Alice can repeat the experiment as many times as needed, and the relative frequency would approach 1/2. For Bob, repeating it is always randomly waking up at a potential awakening of Alice; the structure of the repetitions is irrelevant for him. His relative frequency of Heads is 1/3 given he wakes up with Alice.

Bayesian reason: they interpret the meeting differently. To Bob, the meeting means one of Alice's awakening(s) is on the day Bob is awake. To Alice, the meeting means this specific awakening is on the day Bob is awake. Alice is able to specify this specific awakening among any possible others because it is her perspective center. It is inherently special to her.

Regarding the second experiment: I am aware of this type of argument. Jacob Ross calls these "hypothetical priors arguments". Variations of it have been proposed by Dorr 2002, Arntzenius 2003, and Horgan 2004, 2008. Basically it adds the missing identical awakening of Heads back, and sometime after waking up that added awakening is ruled out by some information. Since the four possible awakenings are clearly symmetrical, each must have a probability of 1/4. Removing one possibility would call for a Bayesian update, causing the probability of Heads to drop to 1/3. This argument was not successful in convincing the opposition because it relies on its equivalence to the original Sleeping Beauty Problem. This equivalence, however, is largely intuition-based. So halfers would just say the two problems are different and not comparable, and thirders would disagree. There would be some back and forth between the two camps but not many valuable discussions to be had, which explains why this argument is typically seen in earlier papers. Nonetheless, I want to present my reasons why they are not equivalent. The first-person identification of today or this awakening is based on the perspective center, which is based on its perception and subjective experience. If there is no waking up, then there is no first-person perspective to begin with. That is vastly different from waking up first and then rejecting this awakening as a possibility. Also, as discussed in the main post, there is no probability distribution for an indexical being a member of a default reference class. So I am against assigning 1/4 to the four events and the subsequent conditional update.

I am grateful for your reply. I'm not naive enough to think I can change your mind, yet I appreciate the opportunity you gave me to present some ideas that don't fit in the flow of the main post, especially the messy explanation of the disagreement in memory-wipe experiments.

comment by Ape in the coat · 2024-01-30T16:05:55.566Z · LW(p) · GW(p)

Now if Alice and Bob do meet, then Bob believes Alice's coin came up heads with probability 2/3. If Alice is a thirder, she agrees. But if Alice is a halfer, they have an unresolvable disagreement.

 

That's actually not the case. Halfer Alice agrees with Bob that her coin came up Heads with 2/3 probability when they meet. Meeting Bob is new evidence that she couldn't be certain to expect. She knows that when the coin is Heads this event has 100% probability, while on Tails it's only 50%. So she updates as normal. And indeed, if we repeat the experiment multiple times and write down the state of the coin every time Alice meets Bob, about 2/3 of the entries will be Heads.

Thirder Alice, on the contrary, doesn't update on meeting Bob at all. She has already "updated on awakening" to 2/3 Heads and ignores the new evidence. This serves her well when she meets Bob, in the same sense that a broken clock is right twice a day. But in the general case about 1/2 of coin tosses are Heads.

Here's another thought experiment I came up with sometime ago

Suppose Alice is a thirder and doesn't write herself a note. Now she has two awakenings on both Heads and Tails which she can't distinguish between, and thus she agrees with halfer Alice. But why stop here? Suppose that on Tails she recreates not one but two awakenings. Now thirder Alice will have 3/5 credence in favour of Tails. This way a thirder Alice can make herself arbitrarily confident in the result of the coin toss just by a precommitment! Such shenanigans, however, will not work for a halfer.

Replies from: cousin_it, dadadarren
comment by cousin_it · 2024-02-01T00:54:39.303Z · LW(p) · GW(p)

She knows that when the coin is Heads this event has 100% probability, while on Tails it’s only 50%.

I might be missing something in your argument, but I think in my setup as stated, it should be 50% in both cases. When Alice's coin is heads, she wakes up on both days, but Bob wakes up on only one of them, depending on his own coin. So no matter if Alice is a halfer or a thirder, meeting Bob doesn't give her any new information about her coin. While Bob, in case of meeting Alice, does update to 2/3 about Alice's coin. So if the Alice he's meeting is a halfer, they have an unresolvable disagreement about her coin.

This way a thirder Alice can make herself arbitrarily confident in the result of the coin toss just by a precommitment!

Yeah, also known as "algorithm for winning the lottery": precommit to make many copies of yourself if you win. I guess we thirders have learned to live with it.

Replies from: Ape in the coat
comment by Ape in the coat · 2024-02-01T07:20:33.558Z · LW(p) · GW(p)

Think about it in terms of what happens in the experiment as a whole - that's what halfing is about. If the coin is Heads, Alice always meets Bob, either on Monday or on Tuesday, depending on when Bob is awake. If the coin is Tails, Alice can meet Bob only on Monday, which happens only 50% of the time. Run the experiment multiple times and there will be iterations where Alice and Bob do not meet, and all of them happen when the coin is Tails.

That's how halfer Alice thinks:

I'm awake. But this doesn't give me any new information, because I was expecting to be awake in this experiment with 100% probability regardless of the outcome of the coin toss:

P(Heads|Awake) = P(Heads) = 1/2.

Oh, hi Bob! Now this doesn't happen in every experiment. This is new evidence in favour of Heads, so I update:

P(Heads|Meet Bob) = P(Meet Bob|Heads) · P(Heads) / P(Meet Bob) = (1 · 1/2) / (3/4) = 2/3.

This is the same update that Bob makes when he meets Alice. Thirder Alice, on the other hand, believes that the probability of Heads is 2/3 even before she meets Bob, which makes her overconfident in the wrong answer when the meeting doesn't happen and the coin is Tails.

Likewise, you can remove Bob from the problem altogether and just make it so that Alice has only a 50% chance to be awake on Monday, and if that didn't happen and the coin is Heads she will be awake on Tuesday. Here the awakening is new evidence, because it doesn't happen in every experiment.

Replies from: cousin_it
comment by cousin_it · 2024-02-01T12:17:09.361Z · LW(p) · GW(p)

It seems to me that the correct Bayesian updating is a bit different.

Let's denote Alice and Bob's coins as A and B, each taking values H or T, and denote the current day as D, taking values 1 or 2. Then, just after waking up but before learning whether Bob is awake, Halfer Alice has this prior: P(A=H∧D=1) = 1/4, P(A=H∧D=2) = 1/4, P(A=T∧D=1) = 1/2, and independently P(B=H) = P(B=T) = 1/2.

After that, meeting Bob gives her new information N = (A=H∧D=1∧B=H) ∨ (A=H∧D=2∧B=T) ∨ (A=T∧D=1∧B=H). These are three mutually exclusive clauses, and we can compute each of them according to Alice's prior above: P(A=H∧D=1∧B=H) = 1/4 * 1/2 = 1/8, P(A=H∧D=2∧B=T) = 1/4 * 1/2 = 1/8, P(A=T∧D=1∧B=H) = 1/2 * 1/2 = 1/4. The probability mass of N is split equally between A=H and A=T, so observing N shouldn't make Halfer Alice update about her coin.
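A tiny enumeration of the prior just stated confirms that N's mass splits evenly between A=H and A=T (my own sketch, not part of the original comment):

```python
# Halfer Alice's prior over (her coin A, current day D), as given above.
prior = {('H', 1): 0.25, ('H', 2): 0.25, ('T', 1): 0.5}

mass = {'H': 0.0, 'T': 0.0}
for (a, d), p in prior.items():
    for b, p_b in (('H', 0.5), ('T', 0.5)):  # Bob's independent coin
        # N holds when Bob is awake on the current day:
        # Bob wakes on day 1 if B=H, on day 2 if B=T.
        if (d == 1 and b == 'H') or (d == 2 and b == 'T'):
            mass[a] += p * p_b

total = mass['H'] + mass['T']
print(mass['H'] / total, mass['T'] / total)  # 0.5 0.5 -- no update about A
```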

Replies from: Ape in the coat
comment by Ape in the coat · 2024-02-01T14:13:54.982Z · LW(p) · GW(p)

You are describing the Lewisian Halfer model. It indeed produces incorrect results for Sleeping Beauty. I'm currently working on a post that will dive deeper into it, but for now suffice it to say that according to it P(Heads|Monday) = 2/3, which is clearly wrong. It also fails at betting.

The correct halfer position for Sleeping Beauty, also known as double halfing, claims both P(Heads|Awake) = 1/2 and P(Heads|Monday) = 1/2. Once again, I'm planning a deep dive into how this is possible and why it's the right answer in a separate post. For now you can just note that it's about probabilities averaged per iteration of the experiment, not per awakening, and that my comment describes how a Beauty who is this kind of halfer is supposed to reason.

Replies from: cousin_it
comment by cousin_it · 2024-02-01T14:34:55.406Z · LW(p) · GW(p)

suffice it to say that according to it P(Heads|Monday) = 2/3, which is clearly wrong

That sentence confuses me. The formulas from my comment imply that P(A=H|D=1) = 1/3 and P(A=T|D=1) = 2/3, which looks like an ok halfer position (modulo the fact that I accidentally swapped heads and tails in the very first comment and now am still sticking to that for coherence - sorry!)

About the double halfer position, I'm not sure I understand it. What probabilities does it give for these three conjunctions: P(A=H∧D=1), P(A=H∧D=2), P(A=T∧D=1)? It seems to me (maybe naively) that these three numbers should be enough; any conditionals can be calculated from them by Bayes' theorem.

Replies from: Ape in the coat
comment by Ape in the coat · 2024-02-01T16:27:12.258Z · LW(p) · GW(p)

Think about it. In Sleeping Beauty, the Monday awakening always happens, so the coin toss can be made after this awakening. If 2/3 had been a correct estimate, the Beauty would have been able to predict a future coin toss better than chance. Which would've been... quite peculiar, to say the least. Of course one can also just run a simulation of the experiment and check [LW · GW].

Lewisian halfers, however, are right for the Fissure thought experiment, where a person is either left in Room1 or split into two people, one of whom goes to Room1 and the other to Room2, at random. In such an experiment you can't be certain to be in Room1, so being in Room1 is indeed more likely if there was no splitting. And if a visitor comes to a random room and you meet there, you will indeed have a disagreement about probabilities [LW · GW].

About the double halfer position, not sure I understand it. What probabilities does it give for these three conjunctions: P(A=H∧D=1), P(A=H∧D=2), P(A=T∧D=1)?

Answering your direct question: 1/2 for all three.

That's because (A=H∧D=1) and (A=H∧D=2) are actually the same outcome. P(D=1) = 1; P(D=2) = 1/2.

Here (D=1) doesn't mean "this awakening is happening on the first day of the experiment" but rather "an awakening on the first day of the experiment has happened". Likewise with (D=2). (D=1) and (D=2) are not mutually exclusive here, but intersecting: if (D=2) is true then (D=1) is also always true.
And if you want to talk specifically about "this awakening is happening on the first day of the experiment", then such a probability is undefined in the Sleeping Beauty setting.
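A small sketch of the event semantics described here (my own illustration of the model in this comment, using this thread's swapped convention where Heads means Alice wakes on both days):

```python
# Sample space: just the coin; each awakening-event is a set of coin outcomes.
omega = {'H': 0.5, 'T': 0.5}

def p(event):
    """Probability of an event, i.e. the set of coin outcomes in which it occurs."""
    return sum(omega[w] for w in event)

D1 = {'H', 'T'}  # "an awakening on day 1 has happened" - true in every run
D2 = {'H'}       # "an awakening on day 2 has happened" - only on Heads

print(p(D1))          # 1.0
print(p(D2))          # 0.5
print(p({'H'} & D1))  # 0.5 -- (A=H and D=1) ...
print(p({'H'} & D2))  # 0.5 -- ... is the same outcome as (A=H and D=2)
print(p({'T'} & D1))  # 0.5 -- (A=T and D=1)
```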

If this gave you more questions than answers - I'm hopefully going to answer them all when I finish my next couple of posts, where I attempt to rigorously justify all of it. For now you can just notice that the double halfer position doesn't have problems with betting, doesn't update out of nowhere, and doesn't allow you to arbitrarily manipulate your probabilities by precommitments.

Replies from: cousin_it
comment by cousin_it · 2024-02-01T16:40:47.384Z · LW(p) · GW(p)

And if you want to talk specifically about "this awakening is happening on the first day of the experiment", then such a probability is undefined in the Sleeping Beauty setting.

Yeah, I don't know if "undefined" is a good answer.

To be fair, there are some decision-theoretic situations where "undefined" is a good answer. For example, let's say Alice wakes up with amnesia on 10 consecutive days, and each day she's presented with a choice of envelope A or envelope B, one of which contains money. And she knows that whichever envelope she chooses on day 1, the experimenter will put money in the other envelope on days 2-10. This case is truly undefined: the contents of the envelopes on the desk in front of Alice are eerily dependent on how Alice will make the choice. For example, if she always chooses envelope A, then she should believe that the money is in envelope A with probability 10% and in B with probability 90%. But she can't use that knowledge to say "oh I'll choose B then", because that'll change the probabilities again.

But the Sleeping Beauty problem is not like that. Alice doesn't make any decisions during the experiment that could feed back into her probabilities. If each day we put a sealed envelope in front of Alice, containing a note saying which day it is, then Alice really ought to have some probability distribution over what's in the envelope. Undefined doesn't cut it for me yet. Maybe I should just wait for your post :-)

Replies from: Ape in the coat
comment by Ape in the coat · 2024-02-02T07:46:06.906Z · LW(p) · GW(p)

Why "this awakening is happening during Monday" isn't a valid event indeed requires some careful justifications and I wasn't planning to go into more details in this comment section. But the example of an undefined event you brough up is actually very helpful to get the right intuition. Because yes, despite the fact that it's less obvious, Sleeping Beauty problem is very much like this.

Let's look again at the envelope experiment. There are two outcomes for the money placement: ABBBBBBBBB and BAAAAAAAAA. If you run the experiment and write down where the money was at each awakening, you will always notice these long runs of As or Bs. If we did the same in a different experimental setting, where at every awakening there is some non-zero chance that the money is in either envelope A or envelope B, we wouldn't reliably observe such streaks of the same value.

The exact same behaviour shows up with Alice and Bob's awakenings. Run the experiment multiple times and write down on which days the awakenings happen. Bob will have his Monday and Tuesday awakenings occurring in random order, while Alice's Tuesday awakening will always be preceded by a Monday awakening. Compare experiments with more awakenings, for example where Alice wakes up on every day of the week on Heads and Bob wakes up on a random day of the week, and it will be even more obvious.

Just as when Alice picks envelope A on day one the money will always be placed in envelope B on day two, when Alice wakes up on Heads&Monday her next awakening will always happen on Heads&Tuesday. In both cases, the previous awakening affects the future awakening.

Replies from: cousin_it
comment by cousin_it · 2024-02-02T08:24:30.599Z · LW(p) · GW(p)

Hmm. But in the envelope experiment, once Alice commits to a decision (e.g. choose A), her probabilities are well-defined. So in Sleeping Beauty, if we make it so the day is automatically disclosed to Alice at 5pm, say, it seems like her probabilities about it should be well-defined from the get-go. Or at least, the envelope experiment doesn't seem to shed light on why they should be undefined. Am I missing something?

Replies from: Ape in the coat
comment by Ape in the coat · 2024-02-02T08:54:48.257Z · LW(p) · GW(p)

Hmm. But in the envelope experiment, once Alice commits to a decision (e.g. to choose A), the probabilities are well-defined. So in Sleeping Beauty, if the day is automatically disclosed to Alice at 5pm, say, it seems like her probabilities about it should be well-defined from the get-go.

Do you mean that conditional probabilities should be well defined? They indeed are. 

P(Heads|Monday) = 1/2; P(Heads|Tuesday) = 1. But as P(Monday) and P(Tuesday) are not defined, you can't use them to arrive at P(Heads&Monday) and P(Heads&Tuesday) via Bayes' theorem.

Replies from: cousin_it
comment by cousin_it · 2024-02-02T09:08:54.307Z · LW(p) · GW(p)

If you say things like "P(X|Y) is defined but P(Y) isn't", doesn't that call for a reformulation of all probability theory? Like, if I take the interpretation of probability theory based on sigma-algebras (which is quite popular), then P(Y) gotta be defined, no way around it. The very definition of P(X|Y) depends on P(X∧Y) and P(Y). You can say "let's kick out this leg from this table", but the math tells me pretty insistently that the table can't stand without that particular leg. Or at least, if there's a version of probability theory where P(Y) can be undefined but P(X|Y) defined, I'd want to see more details about that theory and how it doesn't trip over itself. Does that make sense?

Replies from: Ape in the coat
comment by Ape in the coat · 2024-02-02T09:29:52.992Z · LW(p) · GW(p)

Sure. But this has already been done, and it took much less trouble than you might have thought. Citing Wikipedia on conditional probability:

Some authors, such as de Finetti, prefer to introduce conditional probability as an axiom of probability:

P(A∩B) = P(A|B)·P(B).

This equation for a conditional probability, although mathematically equivalent, may be intuitively easier to understand. It can be interpreted as "the probability of B occurring multiplied by the probability of A occurring, provided that B has occurred, is equal to the probability of the A and B occurrences together, although not necessarily occurring at the same time". Additionally, this may be preferred philosophically; under major probability interpretations, such as the subjective theory, conditional probability is considered a primitive entity. Moreover, this "multiplication rule" can be practically useful in computing the probability of A∩B and introduces a symmetry with the summation axiom for the Poincaré formula.

Replies from: cousin_it
comment by cousin_it · 2024-02-13T10:50:44.111Z · LW(p) · GW(p)

Not sure I understand. My question was, what kind of probability theory can support things like "P(X|Y) is defined but P(Y) isn't". The snippet you give doesn't seem relevant to that, as it assumes both values are defined.

Replies from: Ape in the coat
comment by Ape in the coat · 2024-02-13T11:32:57.289Z · LW(p) · GW(p)

The kind of probability theory that defines P(X|Y) axiomatically as a primitive entity, and only then defines P(X&Y) as the product of P(X|Y) and P(Y), instead of defining conditional probability as the ratio of P(X&Y) to P(Y).

While it's mathematically equivalent, the former method more closely resembles the way people deal with probabilities in practice - usually the conditional probability is known and the probability of the intersection isn't - and it formally allows us to talk about conditional probabilities even when the probability of the event we condition on is not defined.

comment by dadadarren · 2024-01-30T20:14:22.197Z · LW(p) · GW(p)

I think this highlights our difference, at least in the numerical sense, in this example. I would say Alice and Bob would disagree (provided Alice is a halfer, which is the correct answer in my opinion). The disagreement is again based on perspective-based self-identification. From Alice's perspective, there is an inherent difference between "today's awakening" and "the other day's awakening" (provided there are actually two awakenings). But to Bob, either of those is "today's awakening"; Alice cannot communicate the inherent difference from her perspective to Bob.

In other words, after waking up during the experiment, the two alternatives are "I see Bob today" or "I do not see Bob today", both with 0.5 chance regardless of the coin toss result.

Replies from: Ape in the coat
comment by Ape in the coat · 2024-02-01T06:46:17.809Z · LW(p) · GW(p)

I think this highlights our difference at least in the numerical sense in this example.

Yes! This is one of the few objective disagreements we have and I'm very excited to figure it out!

You seem to treat different awakenings of Alice as if they were different people, in an attempt to preserve the similarity between memory-erasure problems of the Sleeping Beauty type and fissure-type problems. Whereas I notice that these problems are different.

The difference is that in Sleeping Beauty P(Heads|Monday) = 1/2, while in Fissure, where the non-fissured person is always in Room1 and fissured people are randomly assigned either Room1 or Room2, P(Heads|Room1) = 2/3. Is this our crux?

Replies from: dadadarren
comment by dadadarren · 2024-02-01T14:23:55.569Z · LW(p) · GW(p)

I maintain that the memory erasure and fission problems are similar because I regard the first-person identification as applying equally to both. The inherent identifications of both "NOW" and "I" are based on the primitive perspective. I.e., to Alice, today's awakening is not the other day's awakening; she can naturally tell them apart because she is experiencing the one today.

I don't think our difference comes from the non-fissured person always staying in Room1 while the fissured people are randomly assigned either Room 1 or Room 2. Even if the experiment were changed so that the non-fissured person is randomly assigned to one of the two rooms, while the fissured person with the original left body always stays in Room 1 and the fissured person with the original right body always in Room 2, my answer wouldn't change.

Our difference still lies in the primitivity of perspective. In the current problem by cousin_it, I would say Alice should not update the probability after meeting Bob, because from her first-person perspective the only thing she can observe is "I see Bob (today)" vs "I don't see Bob (today)", and her probability shall be calculated accordingly. She is not in the vantage point to observe "I see Bob on one of the two days" vs "I don't see Bob on either of the two days", so she should not update that way.

Replies from: Ape in the coat
comment by Ape in the coat · 2024-02-04T14:35:27.327Z · LW(p) · GW(p)

to Alice, today's awakening is not the other day's awakening, she can naturally tell them apart because she is experiencing the one today.

Well, sure, but nothing is preventing her from also realizing that both of the awakenings are happening to her, not to some other person; that both today's and tomorrow's awakenings are causally connected to each other even if she has her memory erased, contrary to the fissure problem, where there are actually two different people in two rooms, each with their own causal history henceforth.

I would say Alice should not update the probability after meeting Bob, because from her first-person perspective, the only thing she can observe is "I see Bob (today)" vs "I don't see Bob (today)", and her probability shall be calculated accordingly. She is not in the vantage point to observe whether "I see Bob on one of the two days" vs "I don't see Bob on any of the two days", so she should not update that way.

Alice is indeed unable to observe the event "I didn't see Bob at all". Due to the memory erasure she can't distinguish between "I don't observe Bob today but will observe him tomorrow/observed him yesterday" and "I do not observe Bob in this experiment at all". So when Alice doesn't see Bob she keeps her credence at 50%.

But why doesn't she also observe "I see Bob on one of the two days", if she sees Bob on a specific day? Surely today is one of the two days. This seems like logical necessity. 

Suppose there is no Bob. Suppose:

The Beauty is awakened on Monday with 50% chance. If she wasn't awakened, a fair coin is tossed; on Tails the Beauty is awakened on Tuesday.

Do you also think that the Beauty isn't supposed to update in favor of Tails when she awakes in this case?

Replies from: dadadarren
comment by dadadarren · 2024-02-04T16:34:40.228Z · LW(p) · GW(p)

This post highlights my problem with your approach: I just don't see a clear logic dictating which interpretation to use in a given problem—whether it's the specific first-person instance or any instance in some reference class. 

When Alice meets Bob, you are saying she should construe it as "I meet Bob in the experiment (on any day)" instead of "I meet Bob today" because "both awakenings are happening to her, not another person". This personhood continuity, in your opinion, is based on what? Given that you have distinguished the memory erasure problem from the fission problem, I would venture to guess you identify personhood by the physical body. If that's the case, would it be correct to say you regard anthropic problems utilizing memory erasures as fundamentally different from problems with fissures or clones? Entertain me this: what if the exact procedure is not disclosed to you, then what? E.g. there is a chance that the "memory erasure" is actually achieved by creating a clone of Alice and waking that clone on Monday, then destroying it, and waking the original on Tuesday. What would Alice's probability calculation be then? Would anything change if fissure were used instead of cloning? What would Alice's probability of Tails be when she sees Bob, if she is unsure of the exact procedure?

Furthermore, you are holding that if she saw Bob, Alice should interpret it as "I have met Bob (on some day) in the experiment". But if she didn't see Bob, she shall interpret it as "I haven't met Bob specifically today". In other words, whether to use "specifically today" or "some day" depends on whether or not she sees Bob. Does this not seem problematic at all to you?

I'm not sure what you mean in your example. Beauty is awakened on Monday with 50% chance; if she is awakened, then what happens? Nothing? The experiment just ends, perhaps with an inconsequential fair coin toss anyway? If she is not awakened, then on Tails she wakes on Tuesday? Is that the setup? I fail to see any anthropic element in this question at all. Of course I would update the probability to favour Tails upon awakening in this case, because that is new information for me: I wasn't sure that I would find myself awake during the experiment at all.

Replies from: Ape in the coat
comment by Ape in the coat · 2024-02-05T16:50:57.554Z · LW(p) · GW(p)

This personhood continuity, in your opinion, is based on what?

Causality. Two time states of a single person are causally connected, while two clones are not. Probability theory treats independent and non-independent events differently. The fact that this fits the basic intuition for personal identity is a nice bonus.

If that's the case, would it be correct to say you regard anthropic problems utilizing memory erasures as fundamentally different from problems with fissures or clones?

Yes, it would. I find it quite unfortunate that these problems are put in the same category of "anthropic problems", as they have testably different probability-theoretic properties. For example, for Sleeping Beauty the correct position is double halfism, while for Fissure it is Lewisian halfism.

Entertain me this: what if the exact procedure is not disclosed to you, then what? E.g. there is a chance that the "memory erasure" is actually achieved by creating a clone of Alice and waking that clone on Monday, then destroying it, and waking the original on Tuesday. What would Alice's probability calculation be then? Would anything change if fissure were used instead of cloning? What would Alice's probability of Tails be when she sees Bob, if she is unsure of the exact procedure?

Okay, that sounds like an interesting problem. Let's formulate it like this:

Alice is put to sleep, then a coin is tossed. On Heads she is awakened on Monday. On Tails another coin is tossed:

  1. Either she is awakened both on Monday and on Tuesday with memory erasure
  2. Or fissure happens. Alice1 is awakened on Monday, Alice2 is awakened on Tuesday

What do we have, probability-wise, upon an awakening on an unknown day?

50% for Heads, 50% for Tails, 25% fissure, 25% memory erasure, 12.5% to be Alice1/Alice2

Now, suppose Alice meets Bob, who is awakened on a random day. Bob updates to 2/3 in favor of Tails, as he meets an Alice in the experiment with 75% probability.

But for a particular Alice the probability of meeting Bob in the experiment is only 1/4 + 2/8 + 1/8 = 5/8.

So her probability that the initial coin is Heads:

P(Heads|Meet Bob) = (1/4) / (5/8) = 2/5.

Now, I think in this particular case there is not much difference between fissure and cloning. There would apparently be a difference if we were talking about a person who is about to participate in the experiment, rather than a person in the middle of it, because a current participant can be in a state of uncertainty about whether she is a clone, while a future participant is pretty sure that she is not going to be a clone and thus can omit this possibility from the calculations.
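For what it's worth, a short enumeration of the numbers above (my own sketch of the model in this comment, using exact fractions):

```python
from fractions import Fraction as F

# (branch, prior probability for a particular Alice, P(meet Bob | branch))
branches = [
    ("Heads",          F(1, 2), F(1, 2)),  # awake Monday only; Bob there half the time
    ("Tails, erasure", F(1, 4), F(1, 1)),  # awake both days; always meets Bob
    ("Tails, fissure", F(1, 4), F(1, 2)),  # this particular Alice is awake one day
]

p_meet = sum(prior * meet for _, prior, meet in branches)
p_heads_and_meet = F(1, 2) * F(1, 2)

print(p_meet)                     # 5/8
print(p_heads_and_meet / p_meet)  # 2/5
```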

But yeah, I should probably write a separate post about such scenarios, after I'm done with the Sleeping Beauty case.

Furthermore, you are holding that if she saw Bob, Alice should interpret it as "I have met Bob (on some day) in the experiment". But if she didn't see Bob, she shall interpret it as "I haven't met Bob specifically today". In other words, whether to use "specifically today" or "some day" depends on whether or not she sees Bob. Does this not seem problematic at all to you?

As a matter of fact, it doesn't. You seem to be thinking that I'm switching between two different mathematical models here. But actually, we can use a single probability space.
 
"I see Bob in the experiment" is equal to "I see Bob on either Monday or Tuesday" it's an event that consist of two outcomes: "seeing Bob on Monday" and "seeing Bob on Tuesday". When an outcome is realized it means that every event which this outcome is part of is realized. So when Alice sees Bob on Monday she both observes "I see Bob on Monday" and "I see Bob in the experiment". And, likewise, when Alice sees Bob on Tuesday. Just one observation of Bob on any day of the experiment is enough to be certain that Bob was observed on either Monday or Tuesday.

On the other hand, "I don't see Bob in the experiment" happens only when Bob was neither observed on Monday, nor on Tuesday. Not observing him only on one day isn't enough. To observe this event Alice has to accumulate information between two days.

All this is true regardless of whether there is memory erasure or not. What is different with memory erasure is that Alice is made unable to accumulate information between days, so she can't observe the event "I don't see Bob in the experiment". However, she is still perfectly able to observe the event "I see Bob in the experiment". She is supposed to update her credence for Heads based on it, and until her memory is erased she can act on this information.

What is problematic, on the other hand, are "today", "this awakening" and similar categories, which can't be formally mathematically specified in Sleeping Beauty. This is the reason the probability of the event "today is Monday" is undefined: "today" is not just some variable that takes a specific value from {Monday, Tuesday}; on Tails it has to be both! It is not a fixed thing throughout the experiment, and reasoning as if it were leads to confusion and paradoxes.

I fail to see any anthropic element in this question at all. Of course I would update the probability to favour Tails upon awakening in this case, because that is new information for me: I wasn't sure that I would find myself awake during the experiment at all.

As I keep saying, this whole "anthropic problems" category is silly to begin with. All of these are just plain probability theory problems, and these two problems are isomorphic to each other. If being awakened on Tails is twice as likely as being awakened on Heads, awakening is evidence in favor of Tails. If meeting Bob is twice as likely on Tails as on Heads, then meeting Bob is evidence in favor of Tails. The same basic principle that gives you the answer in one problem gives you the answer in the other. You don't need to search for any "anthropic elements" in these problems. The math works the same way.

comment by Dagon · 2019-08-01T21:56:20.165Z · LW(p) · GW(p)

This kind of misses the fundamental question of what "probability" means, and how it relates to the cost/reward of being wrong or right. The different answers are based on whether the question is "will any instance of me experience X", "will a specific instance of me (assuming there is any distinction) experience X", or "will all instances of me experience X".

Replies from: dadadarren
comment by dadadarren · 2019-08-02T16:24:54.109Z · LW(p) · GW(p)

Not trying to put this in any negative way, but I honestly find the reply vague and hard to respond to. I get a general impression of what you are trying to say, but I feel I'm guessing. Do you disagree with me interpreting probability as relative frequencies in the disagreement example? Or do you think there has to be a defined cost/reward setup, making it a decision-making problem, before we can talk about probabilities in anthropics? Or maybe something else?

Regarding the different answers to different questions about the various instances of me: again, I'm not very sure what the argument is or how it relates to anthropics. Are you trying to say the disagreement on probability is due to different interpretations of the question? Also, I want to point out that not all anthropic problems are related to different instances of an observer. Take the Doomsday Argument, or the cloning experiment, for example: the paradox is formed at the agent level, and no special consideration of time/instances is needed.

Replies from: Dagon
comment by Dagon · 2019-08-02T16:30:42.295Z · LW(p) · GW(p)

I think I'm mostly reacting to:

More importantly “the probability of today being Monday”, or “the probability of this awakening being the first” do not exist.

Which I think is incorrect. They exist to the same extent that any probability exists: there are future experiences one can define (payouts, or resolutions of a wager) and it's sensible to talk about the relative likelihood of those experiences.

Replies from: dadadarren
comment by dadadarren · 2019-08-04T04:01:29.775Z · LW(p) · GW(p)

I can relate to that. In fact, that is the most common criticism I have faced. After all, it is quite counter-intuitive.

I want to point to the paradox regarding the probability of me being a Boltzmann Brain. The probability of "this awakening being the first" has the same form: the probability of an apparent indexical being a member of some default reference class. There is no experiment deciding which brain is me, just as there is no experiment determining which day is today. There is no reason to apply a principle of indifference among the members of the default reference class, yet that is essential to coming up with a probability.

Of course one can define the experience. I am not arguing that "today is Monday" is a nonsensical statement, only that it has no probability distribution. Yes, we can even wager on it. But we do not need probability to wager. Probability is, however, needed to come up with a betting strategy. Imagine you are a participant in the cloning-with-a-friend example who repeats the experiment a large number of times, wagering after each wake-up on whether you are the original or the clone. Now there exists a strategy to maximize the total gain of all participants, and a strategy to maximize the average gain of all participants (assuming all participants act the same way I do). However, there is no strategy to simply maximize the gain of the self-apparent me. That is a huge red flag for me.
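
Both collective strategies can be checked by simulation (a sketch of my own, taking the two pure strategies "always guess original" and "always guess clone"); note that nothing in the computation ever locates the indexical "me":

```python
import random

def run(strategy, rounds=100_000):
    """Evaluate a fixed guess ('original' or 'clone') made by every participant."""
    total, avg_sum = 0, 0.0
    for _ in range(rounds):
        # Heads: only the original wakes up; Tails: original and clone both wake up.
        participants = ["original"] if random.random() < 0.5 else ["original", "clone"]
        correct = sum(p == strategy for p in participants)
        total += correct                         # total gain across all participants
        avg_sum += correct / len(participants)   # per-round average gain per participant
    return total / rounds, avg_sum / rounds

for strategy in ("original", "clone"):
    print(strategy, run(strategy))
# 'original': total ~1.0/round, average ~0.75 -- optimal for both collective goals.
# 'clone':    total ~0.5/round, average ~0.25.
```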

Of course one may argue there is no such strategy because this beneficiary "me" is undefined (it's just an indexical, after all). But then would it be consistent to say the related probability exists and is well defined?

Replies from: Dagon
comment by Dagon · 2019-08-04T15:57:02.106Z · LW(p) · GW(p)

But we do not need probability to wager. Probability is, however, needed to come up with a betting strategy.

This may be near a crux for me. Other than making decisions, what purpose does a probability distribution serve? Once we've agreed that probability is about an agent's uncertainty rather than an objective fact of the universe, it reduces to "what is the proper betting strategy?", which combines probabilities with payoffs.
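
(Made explicit, as the standard decision rule rather than anything specific to this thread: the proper bet is the action a maximizing the expected utility Σ_s P(s) · U(a, s), so the probabilities P(s) only ever enter paired with the payoffs U(a, s).)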

If you are a Boltzmann brain (or if you somehow update toward that conclusion), what will you do differently? Nothing: such brains don't actually act; they just exist momentarily and experience things.


Replies from: dadadarren
comment by dadadarren · 2019-08-29T15:31:39.033Z · LW(p) · GW(p)

Sorry for abandoning the discussion and replying so late. I think even if the sole purpose of probability is to guide decision making, the problem with these self-locating probabilities remains. In the cloning example, suppose participants are rewarded for every correct guess about whether they are the original or the clone. "The probability distribution of me being the original or the clone" doesn't help us make any decision. One may say these probabilities guide us to decisions that maximize the overall benefit of all participants combined. However, such decisions are guided by "the probability distribution of a randomly selected participant being the original or the clone", which uses no indexical. And this purported use of self-locating probability is based on the assumption that I am a randomly selected observer from some reference class. In effect, an unsupported assumption is added, yet it doesn't allow us to make any new decisions. From a decision-making point of view, the entire purpose of this assumption seems to be finding a use for these self-locating probabilities.

"The probability distribution of me being the original or the clone" would be useful to decision making if it guides us on how to maximize the benefit of me specifically as stated in the probability distribution. But such a strategy do not exist. If one holds the view that other than decision making probability serves no purpose, then he should have no problem accepting self-locating probabilities do not exist since they do not have any purpose.

Replies from: Dagon
comment by Dagon · 2019-08-29T18:07:03.002Z · LW(p) · GW(p)

What reward (and more importantly, what utility) does the predictor receive/lose for a correct/incorrect guess?

To the extent that "you" care about your clones, you should guess in ways that maximize the aggregate payout to all guessers. If you don't, then guess to maximize the guesser's payout even at the expense of clones (who will make the same guess, but be wrong more often).

Self-locating probabilities exist only to the extent that they influence how much utility the current decision-maker assigns to experiences of possibly-you entities.

Replies from: dadadarren
comment by dadadarren · 2019-08-29T22:17:06.298Z · LW(p) · GW(p)

Probability should not depend on the type of rewards. Of course, a complicated reward system could cause decision making to deviate from simple probability concerns, but probability itself would not be affected. If it helps, consider a simple reward system where each correct answer is awarded one util. As a participant, you take part in the same toss-and-clone experiment every day, so when you wake up the following day you do not know if you are the same physical person as the day before, and you guess again for the same reward. Let your utils be independent of possible clones: e.g. if each correct guess is rewarded with a coin, then the cloning applies to the coins in your pocket too, so that your cumulative gain is affected only by your own past guesses.

Why does the extent of care for other clones matter? My answer and the other clones' utils are causally independent; the other clone's utility depends on his own answer. If you are talking about possible future fissions of me, it is still irrelevant, since my decision now would affect the two equally.

Surely, if "the probability distribution of me being the original or the clone" exists, then it would be simple to devise a guessing strategy to maximize my gains? But somehow this strategy is elusive. Instead, the purported self-locating probability can only give strategies to maximize the collective (or average) utilities of all clones, even though some of them are clearly not me, as the probability itself states. And that is assuming all clones make exactly the same decision as I do. If everyone must make the same decision (so there is only one decision being made) and only the collective utility is considered, then how is it still guided by a probability about the indexical me? That decision could be derived from the probability distribution of a randomly selected participant. Assuming I am a randomly selected participant is entirely unsubstantiated, and unnecessary for decision making, as it brings nothing to the table.
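
The elusiveness can be made concrete with a sketch (my own illustration; the two lineage-selection rules are explicitly labelled assumptions, which is exactly the unspecified step at issue). The long-run frequency with which "I" wake up as the original depends entirely on which successor is stipulated to count as me after a cloning:

```python
import random

def frequency_original(rule, days=100_000):
    """Long-run frequency of waking up as the original along one lineage."""
    original = 0
    for _ in range(days):
        if random.random() < 0.5:            # Heads: no clone is made
            original += 1
        elif rule == "follow_original":      # stipulate 'me' = always the original body
            original += 1
        elif rule == "uniform":              # stipulate 'me' = fair pick among successors
            original += random.random() < 0.5
    return original / days

for rule in ("follow_original", "uniform"):
    print(rule, frequency_original(rule))
# follow_original -> 1.0, uniform -> ~0.75: without an extra stipulation,
# "the probability that I am the original" has no determinate value.
```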

comment by stochastic_bit · 2019-08-02T05:39:15.379Z · LW(p) · GW(p)

Really interesting. I like the use of Aumann's Agreement Theorem, but I'm not sure yet about the claims against indexicals.


I have some problems with the SIA vs SSA discussion:

As I see it, SIA and SSA are not alternatives to each other: they are both true, and they cancel each other out. So the Doomsday Argument is a true argument (when ignoring other information, like SIA), and the claim for more life is also true (when ignoring SSA), but together the claims are false. My birth rank is evidence for doomsday, but my existence is evidence for more life; when you put them together (in Bayes' theorem) they just cancel. No paradoxes.
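
A toy calculation (my illustration of the commenter's claim, assuming a uniform prior and a birth rank r that exists in every world under consideration) shows the cancellation. Compare worlds with different observer counts N: SIA weights each world by its number of observers, while SSA gives my particular rank r probability 1/N within each world, so

    P(N | my rank is r) ∝ P(N) · N · (1/N) = P(N)

and the posterior equals the prior: the two updates cancel exactly, as the comment claims.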