The Validity of Self-Locating Probabilities (Pt. 2)

post by dadadarren · 2021-08-25T01:53:17.616Z · LW · GW · 16 comments

Contents

  Cloning with Memory (with a coin toss)
  Possible Answers
  If You Say P(Heads)=1/3 
  If You Say P(Heads)=1/2

In a previous post, [LW · GW] I argued that self-locating probabilities are not valid concepts. Many commenters asked me to use examples with solid numbers and bets. So here is an example: essentially the Sleeping Beauty Problem, with a small twist to highlight the problems of self-locating probability.

Cloning with Memory (with a coin toss)

The experiment is almost the same as before [LW · GW]. Tonight, during your sleep, a mad scientist will scan your body at the molecular level to create a highly accurate clone. The process is so advanced that the created person will retain the original's memory to a degree not discernible by human cognition. So after waking up, there is no way to tell whether you are the Original or the Clone. However, the mad scientist will only perform the cloning if a fair coin toss lands on Tails (he will scan you regardless). Now, after waking up, ask yourself: "What is the probability that I am the Original?" Also: "What is the probability that the coin landed on Heads?"

Possible Answers

Let me present my answer first so it is out of the way: the probability of Heads is 1/2, since it is a fair coin toss. And the "probability that I am the Original" is not a valid concept. "I" is an identification not based on anything but my first-person perspective. Whether "I" am the Original or the Clone is something primitive, not analyzable. Any attempt to justify this probability requires additional postulates, such as equating "I" to a random sample of some sort.

Perhaps the more popular answer is that the probability that I am the Original is 2/3, whereas the probability of Heads is 1/3. This corresponds to the Thirder camp in the Sleeping Beauty Problem. The rationale may not be the same for all Thirders, but it typically follows the Self-Indication Assumption: finding that I exist eliminates the possibility that I am the Clone while the coin landed Heads, leaving three equally likely possibilities.

Another camp says the probability of Heads is 1/2 and the probability that I am the Original is 3/4. This corresponds to the Halfer camp in the Sleeping Beauty Problem. This camp endorses the "no new information" argument, though Halfers differ on how to update given self-locating information.
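For concreteness, the two camps' numbers can be laid out as follows (a minimal sketch of the usual calculations; the Halfer line assumes the common "split credence equally between the copies on Tails" rule):

```latex
% Thirder (SIA): after eliminating (Heads, Clone), three equally likely
% centered possibilities remain:
%   (Heads, Original), (Tails, Original), (Tails, Clone)
P(\text{Heads}) = \tfrac{1}{3}, \qquad P(\text{I am the Original}) = \tfrac{2}{3}

% Halfer: keep P(Heads) = 1/2; on Tails, split credence between the two copies
P(\text{I am the Original}) = \tfrac{1}{2} \cdot 1 + \tfrac{1}{2} \cdot \tfrac{1}{2} = \tfrac{3}{4}
```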

If You Say P(Heads)=1/3 

Say the mad scientist wants to encourage people to participate in his experiment, so he decides to give 2 gold bars to each copy after waking him up. He will also offer each copy a bet: you can give up these 2 bars for a chance to win 5 if the coin landed on Heads. All of this is disclosed to you, so there is no new information when you are offered the bars and the bet. Say your objective is simple, "I just want more gold", and you are risk-neutral. Given that you think P(Heads) = 1/3, would you take the bet? And if you would, why does your decision not reflect that probability?
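To make the stakes concrete, here is a minimal Monte Carlo sketch of the payout structure just described (the function name and trial count are my own; the payoffs are exactly as stated above: every awakened copy receives 2 bars and may trade them for 5 paid only on Heads):

```python
import random

def run_experiment(take_bet: bool, trials: int = 100_000) -> tuple[float, float]:
    """Simulate the toss-then-maybe-clone experiment.

    Returns (expected gold paid out per coin toss,
             expected gold per awakened copy)."""
    total_gold = 0
    total_copies = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        copies = 1 if heads else 2                 # cloning happens only on Tails
        for _ in range(copies):
            if take_bet:
                total_gold += 5 if heads else 0    # traded the 2 bars for 5-on-Heads
            else:
                total_gold += 2                    # kept the 2 bars
        total_copies += copies
    return total_gold / trials, total_gold / total_copies

for take_bet in (False, True):
    per_toss, per_copy = run_experiment(take_bet)
    print(f"take_bet={take_bet}: per toss ~ {per_toss:.2f}, per copy ~ {per_copy:.2f}")
```

Counted per coin toss, declining pays about 3 in expectation against 2.5 for betting; counted per awakened copy, declining pays 2 against about 1.67 for betting. Yet for the Original alone the bet is worth (1/2) · 5 = 2.5 > 2, which is exactly the tension the question above is probing.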

If You Say P(Heads)=1/2

This answer faces the same trouble Elga pointed out in the Sleeping Beauty problem. What happens when I learn that I am the Original, i.e., what is P(Heads | I am the Original)? A standard Bayesian update would give a probability of Heads of 2/3. But that can't be right. In this experiment, the Original and the Clone do not have to be woken up at the same time. The mad scientist could wake the Original first; in fact, the coin could be tossed afterwards. For dramatic effect, after telling you that you are the Original, the mad scientist could hand you the coin and let you toss it yourself. It seems absurd to say the probability of Heads is anything but 1/2. So why does the probability of Heads remain unchanged after learning you are the Original? How come the Bayesian update is not applicable to self-locating information?
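For reference, the update in question runs as follows, under the (itself contested) assumption that P(Original | Tails) = 1/2:

```latex
P(\text{Heads} \mid \text{Original})
  = \frac{P(\text{Original} \mid \text{Heads})\, P(\text{Heads})}
         {P(\text{Original} \mid \text{Heads})\, P(\text{Heads})
          + P(\text{Original} \mid \text{Tails})\, P(\text{Tails})}
  = \frac{1 \cdot \tfrac{1}{2}}{1 \cdot \tfrac{1}{2} + \tfrac{1}{2} \cdot \tfrac{1}{2}}
  = \tfrac{2}{3}
```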

16 comments


comment by Radford Neal · 2021-08-25T04:10:19.436Z · LW(p) · GW(p)

I think we've had this discussion before, but let me try one more time...

You say, "And the "probability that I am the Original" is not a valid concept. "I" is an identification not based on anything but my first-person perspective. Whether "I" am the Original or the Clone is something primitive, not analyzable. Any attempt to justify this probability requires additional postulates, such as equating "I" to a random sample of some sort."

But to me, this means throwing out the whole notion of probability as a subjective measure of uncertainty.  Perhaps you're fine with that, but it also means throwing out all use of probability in scientific applications, such as evaluating the probability that a drug will work and/or have side effects - because the practical use of such evaluations is to conclude that "I" will probably be cured by that drug, which is a statement you have declared meaningless.  Maybe you're assuming that some "additional postulate" will fix that, but if so, I don't see why something similar wouldn't also render the probability that "I" am the Original in your problem meaningful.

I think an underlying problem here is an insistence on overly abstract thought experiments.  You're assuming that the subject of the experiment cannot simply walk out the door of the room they're in and see whether they're in the same place where they went to sleep (in which case they're the Original), or in a different place.  They can also do all sorts of other things, whose effects for good or bad may depend on whether or not they are the Original (before they figure this out). They will in general need some measure of uncertainty in making decisions of this sort - they can't simply say that self-locating probabilities are meaningless, when implicitly they will be using them to decide. This is all true even if they in fact decide to cooperate with the experimenter and do none of this.

The assumption that the experiment must proceed in the manner abstractly described severs all connection between the proposed answers and the real world.  There is then nothing stopping anyone from proposing that the probability of Heads is 1/2, or 3/4, or 2/7 - since none of these have consequences - and similarly the probability of "I am the Original" can be anything you like, or be meaningless, if you treat the person making the judgement as an abstract entity constrained to do nothing but what the problem statement prescribes, rather than as a real person.

Replies from: dadadarren
comment by dadadarren · 2021-08-25T17:34:16.590Z · LW(p) · GW(p)

I don't think rejecting self-locating probability means totally rejecting probability as a measure of uncertainty, because self-locating probability only applies to very specific anthropic problems. E.g.:

  1. An incubator creates two observers, the first in a blue room and the second in a red room. Given that I am one of the created observers but don't know whether I am the first or the second, what is the probability that I will see blue when I turn on the light?
  2. Some people put me and another person into two rooms, one blue and one red, but the assignment process is random or unknown to me. Before turning on the light, what is the probability that I am in the blue room?

My position is that the two problems are fundamentally different; only problem 1 is what has been referred to as self-locating probability in anthropic paradoxes. The entire experiment is known from a god's-eye view. The uncertainty is which of the two observers is ME. ME (like NOW or HERE) is not some physical or observable identification but a primitive concept arising from reasoning from a first-person perspective. So there is no reasonable way to attach a probability to it.

Problem 2 is different. I know which person is ME all along. The uncertainty is not about which one is me but about what happened to me, i.e., about the room-assignment process. The whole problem can be described from a god's-eye view and remain comprehensible: "dadadarren and another person have each been put into one of two rooms; what is the probability that dadadarren is in the blue room?" So even though it asks which room I am in, it is different from the self-locating probabilities discussed in anthropic problems. Probabilities like this are obviously valid.

You probably think this distinction is not meaningful, so that calling self-locating probabilities invalid would render all probability as a measure of uncertainty invalid. But that is not the argument I am making. Granted, on some metaphysical views there is no difference between the two types. E.g., the Many-Worlds Interpretation treats self-locating probability as the source of probability. So my argument is not compatible with the MWI; i.e., it is a counter-argument against the MWI.

Also, I am not assuming that the copies cannot go outside or ask others whether they are the Original. And they can certainly get into situations where the outcome depends on whether they are the Original or the Clone. I am arguing that in such situations, when a decision is involved and my objective concerns MY own benefit (the benefit of the "I" in the self-locating probability), there is no single rational decision. Rational decisions exist only if the objective concerns the collective benefit (total or average) of the copies, or of a random sample from them. Yet it is hard to argue that "maximizing MY own benefit" is disconnected from reality, something a real person would not do.

comment by SarahNibs (GuySrinivasan) · 2021-08-25T03:03:09.445Z · LW(p) · GW(p)

Probability is the measure of your uncertainty.

If the procedure is to flip a coin, clone me on Tails but not on Heads, separate the Original and the Clone if needed, and then let me (or us) wake up, then when I wake up I will think the probability that I am the Original is 3/4 and the probability that the coin landed on Heads is 1/2. If I am then informed that I am the Original, I will think the probability that the coin landed on Heads is 2/3.

> But that can't be right. In this experiment, the Original and the Clone do not have to be woken up at the same time. The mad scientist could wake the Original first; in fact, the coin could be tossed afterwards. For dramatic effect, after telling you that you are the Original, the mad scientist could hand you the coin and let you toss it yourself. It seems absurd to say the probability of Heads is anything but 1/2. So why does the probability of Heads remain unchanged after learning you are the Original?

I don't understand the objection. Yes, I say 1/2, and yes, anything but that seems absurd. This coin flip isn't correlated with whether I was cloned, so why should its probability depend on whether I am the Original or the Clone? In the first situation, I believe the pre-experiment coin flip has a 50% probability of having landed Heads; then, after learning some information positively correlated with the actual result being Heads, I update to a 67% probability of the coin having landed Heads. In the dramatic situation, I believe the post-experiment coin flip has a 50% chance of Heads and never learn any information correlated with the result. Zero contradiction here.

Replies from: GuySrinivasan, dadadarren
comment by SarahNibs (GuySrinivasan) · 2021-08-25T03:11:30.329Z · LW(p) · GW(p)

I think the comments on https://www.lesswrong.com/posts/YyJ8roBHhD3BhdXWn/probability-is-a-model-frequency-is-an-observation-why-both [LW · GW] are pretty good, btw. They really showcase how all the hand-waving goes away as soon as you specify the decisions the original/clone will be making based on their degree of belief that they're the original.

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2021-08-25T03:28:12.284Z · LW(p) · GW(p)

(That also means, of course, that when I say I choose 3/4 and 1/2 and then 2/3, I am smuggling in information: implicitly assuming the reward structure for "getting the right answer". If I'd rather be right all the time if I'm the Original and don't care at all if I'm the Clone, then I can answer "probability 1.0 that I'm the Original!" and make out like a bandit. In that sense, yes, all probabilities are meaningless, not just self-locating ones, until you know what decisions are being made based on them.)

comment by dadadarren · 2021-08-25T19:34:31.726Z · LW(p) · GW(p)

The experiment can take the following steps: 1. Sleep. 2. Scan. 3. Wake up the Original. 4. The Original tosses the coin. 5. If Tails, create the Clone. 6. Wake up the Clone so that he has an experience indiscernible from the Original's in step 3. This whole process is disclosed to you.

Now, after waking up, the coin may or may not have been tossed yet. What is the probability of Heads? And what is the probability that I am the Original (i.e., that the coin has yet to be tossed)?

comment by Measure · 2021-08-25T16:21:53.489Z · LW(p) · GW(p)

P(original) = 2/3
P(heads) = 1/3
P(heads|original) = 1/2
Value of bet is (1/3)*5 = 1.667 < 2, so don't take the bet.

The policy of not betting maximizes the total payoff to all clones (6 vs. 5). The policy of betting maximizes the average payoff-per-clone over worlds (2.5 vs 2). If I don't care at all about my (possible) future clones, I should pre-commit to taking the bet, since this maximizes the payoff of the original. However, after waking up, I have to consider the possibility (with its associated probability) that I am the clone, so that changes my answer.
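(A quick sketch of the bookkeeping behind those comparisons, with the sums spelled out:)

```python
# Exact expectations, no simulation needed.
# Heads world: 1 copy; Tails world: 2 copies. Betting trades 2 bars for 5-on-Heads.
for take_bet in (False, True):
    heads_payoff = 5 if take_bet else 2            # the single copy in the Heads world
    tails_payoff = (0 if take_bet else 2) * 2      # the two copies in the Tails world
    total = heads_payoff + tails_payoff            # summed over both worlds
    per_clone = (heads_payoff / 1 + tails_payoff / 2) / 2   # averaged over worlds
    print(f"bet={take_bet}: total over worlds = {total}, per-clone average = {per_clone}")
# bet=False: total 6, per-clone 2.0; bet=True: total 5, per-clone 2.5
```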

Replies from: dadadarren
comment by dadadarren · 2021-08-25T18:12:38.504Z · LW(p) · GW(p)

> However, after waking up, I have to consider the possibility (with its associated probability) that I am the clone, so that changes my answer.

The Original and the Clone are treated exactly the same regarding the gold bars and the bet. Both are offered to you after waking up, regardless of whether you are the Original or the Clone. You are just trying to maximize your own gain. Do you still consider not taking the bet the better decision?

Replies from: Measure
comment by Measure · 2021-08-25T18:34:15.441Z · LW(p) · GW(p)

After waking up, I hold P(heads) to be 1/3, so the bet is negative EV for me personally.

Replies from: dadadarren
comment by dadadarren · 2021-08-25T19:28:37.463Z · LW(p) · GW(p)

Say you have several gold bars in your pocket when going into the experiment, and the mad scientist knows this. To make sure you gain no new information when looking into your pocket, he will place the same number of gold bars into the Clone's pocket before waking him up. And you know this. So if you had 3 bars before, after waking up you will for sure find 3 gold bars in your pocket.

The rest of the problem is the same: you will be given 2 bars and offered the bet. Does this change anything? Would you still reject the bet?

Replies from: Measure
comment by Measure · 2021-08-25T19:49:05.602Z · LW(p) · GW(p)

I don't see why that would change anything. (This is all assuming my utility in gold bars is linear. Otherwise I'd have to worry about being flat broke and unable to feed myself if I turn out to be the clone.)

Replies from: dadadarren
comment by dadadarren · 2021-08-27T16:51:54.338Z · LW(p) · GW(p)

OK. Then consider doing this: After the first experiment, take part in the same experiment again, then again and again. You can keep the bars you earned in your pocket.

Say you participate for 100 iterations. By not entering the bet, you would have 200 bars. By entering the bet, do you expect to have approximately 100 × (1/3) × 5 ≈ 167 bars?

Replies from: Measure
comment by Measure · 2021-08-27T17:32:50.589Z · LW(p) · GW(p)

That's the average of what all the clones will have at the end. The original should have about 250.
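A small simulation sketch bears out both figures (illustrative code of my own: 15 iterations instead of 100 to keep the populations tractable, with each clone's pocket duplicated as stipulated above; the per-iteration rates scale to the numbers in the thread):

```python
import random

ITERS = 15      # 100 iterations as in the thread would spawn ~1.5^100 copies
TRIALS = 2000

def simulate_betting() -> tuple[float, float]:
    """Everyone always takes the bet. Returns the Original's average gold and
    the pooled average gold per copy, both divided by ITERS (per-iteration rates).
    Not betting trivially yields exactly 2 bars per copy per iteration."""
    orig_gold = 0.0
    all_gold = 0.0
    all_copies = 0
    for _ in range(TRIALS):
        pockets = [0]                        # pockets[0] tracks the Original throughout
        for _ in range(ITERS):
            updated = []
            for g in pockets:
                heads = random.random() < 0.5
                g += 5 if heads else 0       # the bet pays 5 on Heads, 0 on Tails
                updated.append(g)
                if not heads:                # Tails: a clone appears with an identical pocket
                    updated.append(g)
            pockets = updated
        orig_gold += pockets[0]
        all_gold += sum(pockets)
        all_copies += len(pockets)
    return orig_gold / TRIALS / ITERS, all_gold / all_copies / ITERS

orig_rate, copy_rate = simulate_betting()
print(f"Original: ~{orig_rate:.2f} bars/iteration (2.5, i.e. 250 over 100 iterations)")
print(f"Per copy: ~{copy_rate:.2f} bars/iteration (5/3, i.e. ~167 over 100 iterations)")
```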

comment by JBlack · 2021-08-25T07:28:28.007Z · LW(p) · GW(p)

The bet is obviously in the scientist's favour: there is an objective 50% chance that the scientist pays 3 extra gold, and an equal chance that they recover 4 gold. This doesn't depend upon any subjective probabilities. Given that it's a sucker bet and you don't know who the sucker is, avoid it. This is only intensified since the duplicate probably has a much greater future utility for gold than the original who probably owns other property, has income, etc. Even if the utility is linear in both cases (risk-neutral), the slopes are likely different.

In the P(Heads has been flipped) = 1/2 case, yes, the Bayesian update gives P(Heads has been flipped | I am the original) = 2/3, assuming you make a bunch of other reasonable assumptions. But note that this is specific to the conditions of the problem as stated. If the original will always be woken before the coin flip, then the true value of P(Heads has been flipped | I am the original) is identically 0: the coin hasn't been flipped, so it certainly hasn't been flipped to Heads! Their estimate of the probability is mistaken because the scientist lied, which is not that surprising for a mad scientist and shouldn't have been assumed impossible in the first place.

If you change the scenario, then you should go through and change the appropriate probability models.

Replies from: dadadarren
comment by dadadarren · 2021-08-25T18:05:42.884Z · LW(p) · GW(p)

I don't see why owning other property would change the objective of having more gold. You are betting gold to win gold; where does other property come into play? Nonetheless, if it bothers you, let's just assume the subject has no wealth other than what's on him. Does that mean you still would not enter the bet?

The mad scientist does not need to lie. The experiment is changed to: 1. Sleep. 2. Scan. 3. Wake up the Original. 4. The Original tosses the coin. 5. If Tails, create the Clone. 6. Wake up the Clone so that he has an experience indiscernible from the Original's in step 3. This whole process is disclosed to you. Now, after waking up, the coin may or may not have been tossed. What is the probability of Heads? What is the probability that you are the Original? If you say they are both 1/2, then what is the probability that you are the Clone?

Perhaps what confuses me the most is that you are arguing for both Thirders and Halfers at the same time. If you think Halfers are correct to say the probability of Heads is 1/2, wouldn't you have taken the bet? If you think Thirders are correct, wouldn't you say the probability should be updated according to the standard Bayes rule? Why are you arguing against both? What's your position?

Replies from: JBlack
comment by JBlack · 2021-08-26T07:30:28.461Z · LW(p) · GW(p)

If I am the original and have a job, house, 10000 gold in the bank, etc then I don't need another 2 gold to buy shelter, food, etc. Being given 2 gold is vaguely nice, I can buy myself a nice book on decision theory or something. With 5 gold I could buy a book on decision theory and a concert ticket.

A duplicate with no legal claim to any property might need 2 gold to have a good chance to even survive. While 5 gold would give an even better chance of survival, only the original can win the extra 3 gold, so risk-neutrality in the duplicate case is irrelevant.

If I'm unsure on whether or not I'm the duplicate, I'm not going to stake a nice book against years of life expectancy even if the odds were 100:1.

> Now, after waking up, the coin may or may not have been tossed. What is the probability of Heads?

We're using a model here analogous to the Sleeping Beauty 1/2 model, right?

P(Heads | Observations upon awakening) = P(Heads) = 1/2 since the observations are stipulated to be identical and therefore independent of Heads.

> What is the probability that you are the Original?

That's not actually an event in this space, and therefore has no measure. The measurable events in this space are just the sigma algebra generated by "Heads" and some set of possible "Observations on awakening".