The Validity of Self-Locating Probabilities

post by dadadarren · 2021-08-21T02:53:13.579Z · LW · GW · 42 comments

Contents

  Cloning with Memory
  What I'm Not Arguing 
  Repeating The Experiment
  "I don't know"

For the past few years, I have been pushing the idea that anthropic paradoxes can be explained by the primitive nature of perspectives [LW · GW]. Based on discussions, I have noticed that one part of this argument is disliked the most: the invalidity of self-locating probabilities. Almost everyone disagrees with it. Here I will use a concise thought experiment to demonstrate the idea. Hopefully it will generate conversation and clarify the disagreement.

Cloning with Memory

Imagine you are participating in the following experiment. Tonight during your sleep, some mad scientist will clone you. The process is so advanced that the created person will accurately retain the original's memory, to a degree not discernible by human cognition. So after waking up in the morning, there is no way to tell whether you are the Original or the Clone. (In fact, you might already be the Clone by now.) Now, ask yourself this: "what is the probability that I am the Original?"

I think such a probability does not exist. The question is asking about a particular person: "I". This reference is inherently understood from my perspective. "I" is the one most immediate to the subjective experience. It is not identified by any objective difference or underlying mechanics. "Who I am" is primitive. There is no way to formulate a probability for it being the Original or the Clone. 

What I'm Not Arguing 

After the cloning, if one person is randomly picked among the two copies, then the probability of the chosen one being the Original is 1/2. I am not arguing against this. But I am arguing against the equivalence of this probability and the above-mentioned self-locating probability. One is asking about the result of a sampling process; the other is about the primitively identified "I". The former is understandable by anyone; the latter is only comprehensible by thinking from the experiment subject's perspective.

Repeating The Experiment

Using a frequentist approach may help to clarify this difference. Imagine you have just finished participating in "Cloning with Memory". Now you may be the Original or the Clone. But regardless of which, you can take part in the same experiment again. Let the mad scientist do his work during your next sleep. After waking up the second time, you may be the Original or the Clone of the second iteration. Yet regardless of which, you can take part in another iteration, and so on.

Say you do this a great number of times, keeping count of whether you are the Original or the Clone in each iteration. There is no reason for the relative frequency of the two to converge on any value, because in each iteration, from your perspective, "who I am" is primitive. There is nothing to determine which of the two copies is you.

Of course, if we jump out of this first-person perspective and randomly select a copy in each experiment, then as the iterations go on, the relative frequency of selecting the Original would converge towards 1/2. But that is a different problem.
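
This third-person claim is easy to check numerically. Below is a minimal simulation sketch (in Python; the function name and iteration count are illustrative) of the random-selection process that the post does accept as a valid probability:

```python
import random

def sampling_frequency(iterations: int) -> float:
    """God's-eye version: each iteration produces two copies
    (Original, Clone), and an outside party randomly selects one."""
    original_count = 0
    for _ in range(iterations):
        if random.choice(["Original", "Clone"]) == "Original":
            original_count += 1
    return original_count / iterations

print(sampling_frequency(100_000))  # converges towards 0.5
```

Note that to simulate the first-person version at all, some sampling rule (like the `random.choice` above) would have to be inserted to decide which copy is "I"; the argument here is precisely that the experiment itself supplies no such rule.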

"I don't know"

It is fair to say this argument against self-locating probability is simple-minded. After waking up, I can say that I am either the Original or the Clone. What is the reasonable degree of belief for each case? I think the only reasonable answer is "I don't know". To assign a specific value to this probability, additional postulates are needed. For example, assuming "I" am a sample from some random selection.

42 comments

comment by jchan · 2021-08-21T08:12:05.255Z · LW(p) · GW(p)

To make it slightly more concrete, we could say: one copy is put in a red room, and the other in a green room; but at first the lights are off, so both rooms are pitch black. I wake up in the darkness and ask myself: when I turn on the light, will I see red or green?

There’s something odd about this question. “Standard LessWrong Reductionism” must regard it as meaningless, because otherwise it would be a question about the scenario that remains unanswered even after all physical facts about it are known, thus refuting reductionism. But from the perspective of the test subject, it certainly seems like a real question.

Can we bite this bullet? I think so. The key is the word “I” - when the question is asked, the asker doesn’t know which physical entity “I” refers to, so it’s unsurprising that the question seems open even though all the physical facts are known. By analogy, if you were given detailed physical data of the two moons of Mars, and then you were asked “Which one is Phobos and which one is Deimos?”, you might not know the answer, but not because there’s some mysterious extra-physical fact about them.

So far so good, but now we face an even tougher bullet: If we accept quantum many-worlds and/or modal realism (as many LWers do), then we must accept that all probability questions are of this same kind, because there are versions of me elsewhere in the multiverse that experience all possible outcomes.

Unless we want to throw out the notion of probabilities altogether, we’ll need some way of understanding self-location problems besides dismissing them as meaningless. But I think the key is in recognizing that probability is ultimately in the map, not the territory, however real it may seem to us - i.e. it is a tool for a rational agent to achieve its goals, and nothing more.

Replies from: dadadarren, ike
comment by dadadarren · 2021-08-22T02:06:48.854Z · LW(p) · GW(p)

First of all, strong upvote. The points you raised have made me think hard as well.

I don't think the probability about which room I am in is the same as the self-locating probability. Coincidentally, I made my argument using color coding as well (the probability that I am red or blue). The difference is that which color I get labeled is determined by a particular process; the uncertainty is due to the randomness of that process or my lack of knowledge about it. Whereas for self-locating probability, there is nothing random or unknown about the experiment. The uncertainty, i.e. which physical person I am, is not determined by anything. If I ask myself why I am this particular human being, why I am not Bill Gates, then the only answer seems to be "Because the available subjective experience is connected to this person. Because I am experiencing the world from this person's perspective, not Bill Gates'." It is not analyzable in terms of logic; it can only be regarded as a reasoning starting point. Something primitive.

Whether or not the questioner knows which person is being referred to by "I" is another interesting matter. Say the universe is infinite, and/or there are countless universes, so there could be many instances of human beings that are physically indistinguishable from me. But does that mean I don't know which one I am? It can be said that I do not know, because I cannot provide any discernible details to distinguish myself from all of them. But on the other hand, it can be said that I inherently know which is me. I can point to myself and say "I am this person" and call it a day. The physical similarities and differences are not even a concern. This identification is nothing physical; it is inherently understandable to me because of my perspective. It is because of this primitive nature that people consider "the probability that I am the Original" a valid question, instead of asking who this "I" is before answering.

My way of rejecting the self-locating probability is incompatible with the Many-Worlds interpretation. Sean Carroll calls this idea the "simple-minded" objection to the source of probability in Many-Worlds, yet he admits it is a valid objection. I think treating perspectives as primitives would naturally lead to the Copenhagen interpretation. It should also be noted that for Many-Worlds, "I" or "this branch" are still used as primitive notions when self-locating probabilities are derived.

Finally, self-locating probabilities are not useful for decision-making, so even as tools they are not justifiable. Strategies for goals such as maximizing the total or average benefit of a group can be determined using probabilities of random samples from said group, e.g. the probability of a randomly selected copy being the Original. If the goal is strictly about the primitively identified "I", as in self-locating probability, then there exists no valid strategy, as shown by the frequentist analysis in the post.

comment by ike · 2021-08-21T23:54:40.525Z · LW(p) · GW(p)

Yes, rejecting probability and refusing to make predictions about the future is just wrong here, no matter how many fancy primitives you put together.

I disagree that standard LW rejects that, though.

comment by Measure · 2021-08-21T03:44:25.918Z · LW(p) · GW(p)

The copy and the original are indistinguishable to me/us, but as long as there is still a fact of the matter of which is which — the scientist knows after all — I would say it still makes sense to talk about the probability that I am the original. It would be exactly the same as asking "What is the probability, given that he answers truthfully, that the scientist will say I am the original when I ask?". I will get a definite answer one way or the other, so I think it makes sense that I should be able to have a credence for each outcome beforehand (50% in this case).

Replies from: dadadarren
comment by dadadarren · 2021-08-22T00:09:50.433Z · LW(p) · GW(p)

Of course, whether I am the Original or the Clone is a matter of fact. There is a definitive answer to it. I also see no problem saying "the probability I am the Original" is essentially the same as "What is the probability, given that he answers truthfully, that the scientist will say I am the original when I ask?".

But does being a matter of fact imply there is a probability to it? Subsequently, what is the justification for 50% being the correct answer?

Replies from: Measure, GuySrinivasan
comment by Measure · 2021-08-23T13:43:14.624Z · LW(p) · GW(p)

I'm using "probability" "credence" and "betting odds" mostly interchangeably to refer to my subjective state of knowledge. The 50% number comes from the symmetry of having two indistinguishable experiences being had by two people, one of which is me. Without any additional information to break that symmetry (such as learning that multiple copies were created or that the copying sometimes fails), I should assign equal credence to each possibility.

Replies from: dadadarren
comment by dadadarren · 2021-08-23T17:45:32.329Z · LW(p) · GW(p)

Would you say your reasoning is a principle of indifference between "I am the Original" vs "I am the Clone"?

Replies from: GuySrinivasan, Measure
comment by SarahNibs (GuySrinivasan) · 2021-08-23T20:32:34.529Z · LW(p) · GW(p)

(to be clear, everything I've said also flows from the principle of indifference; if you cannot tell the difference between N states of the world, then the probability 1/N describes your uncertainty about those N states)

comment by Measure · 2021-08-23T19:14:49.253Z · LW(p) · GW(p)

It's not that I wouldn't care which one I am (they're identical), but there would be no way for me to differentiate the experiences.

Suppose the scientist told me before the procedure that he would with probability 2/3 waken the original in a blue room and the clone in a red room and with probability 1/3 reverse the colors. If I were to wake up in a blue room afterward, my credence that I'm the original would be 2/3.
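
For reference, the 2/3 figure follows from a standard Bayesian update on a 1/2 prior taken from the symmetry argument above; a minimal sketch with the stated likelihoods:

```python
# Prior from indifference between "I am the Original" and "I am the Clone".
p_original = 0.5

# Stated likelihoods: the Original wakes in a blue room with probability 2/3,
# and the colors are reversed with probability 1/3.
p_blue_given_original = 2 / 3
p_blue_given_clone = 1 / 3

# Bayes' rule after waking in a blue room.
p_blue = (p_blue_given_original * p_original
          + p_blue_given_clone * (1 - p_original))
print(p_blue_given_original * p_original / p_blue)  # 2/3
```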

Replies from: dadadarren
comment by dadadarren · 2021-08-24T00:49:24.647Z · LW(p) · GW(p)

I was asking if your reasoning for equal probabilities of Original vs Clone can be summarized as the Principle of Indifference, not suggesting that you do not care which copy you are. Would I be wrong to assume you endorse the POI in this problem?

Replies from: Measure
comment by Measure · 2021-08-24T14:25:01.380Z · LW(p) · GW(p)

I would say POI applies here.

Replies from: dadadarren
comment by dadadarren · 2021-08-25T01:57:28.597Z · LW(p) · GW(p)

The problem is that the POI is not a solid principle to rely on, and it often leads to paradoxes. In anthropic problems in particular, it is unclear what exactly should be regarded as indifferent. See this post [LW · GW] for an example.

comment by SarahNibs (GuySrinivasan) · 2021-08-22T00:28:17.162Z · LW(p) · GW(p)

Justifications for 50% being the correct answer:

  • if this happened lots of times, and you answered randomly, you would be right roughly 50% of the time
  • if you tried to make a wager, a bookie would give you near-1:1 odds
  • 50% is the correct answer to all of the equivalent questions which you accept are probabilities

:shrug:

Replies from: dadadarren
comment by dadadarren · 2021-08-22T02:25:29.695Z · LW(p) · GW(p)

> if this happened lots of times, and you answered randomly, you would be right roughly 50% of the time

How can this be verified? As I have outlined in Repeating The Experiment, if you keep participating in the experiment again and again, there is no reason to think the relative frequency of you being the Original would converge to any particular value. Of course, we can select one copy each time, and that frequency will converge to 1/2. But that would reflect that the probability of the random sample being the Original is 1/2.

It can be asserted that the two probabilities are the same thing. But at least we should recognize that as an additional assumption. 

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2021-08-23T04:56:59.466Z · LW(p) · GW(p)

What? Yeah, still missing something.

You know that "probability" doesn't mean "a number which, if we redid the experiment and substituted 'pick randomly according to this number' instead of the actual casual mechanism, would give the same distribution of results"? That it's a summary of knowledge, not a casual mechanism?

(I'm still trying to figure out where I think you're confused; from my perspective you keep saying "obviously X, that was never in question, but since X is fundamentally different than X, we can't just assume the same result holds". Not trying to make fun, just literally expressing my confusion about the stuff you're writing. In any case, you're definitely right about not being able to communicate what you're talking about very well ;) )

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-08-23T06:57:49.896Z · LW(p) · GW(p)

You know that “probability” doesn’t mean “a number which, if we redid the experiment and substituted ‘pick randomly according to this number’ instead of the actual causal mechanism, would give the same distribution of results”?

Er… it doesn’t? Doesn’t it mean exactly that, though? As far as we know? I mean, if you say that P(some outcome) = 0.5, then does it not mean that we think that if we ran the experiment a bunch of times, and also flipped a fair coin the same number of times, then the number of times the given outcome would occur would approximately equal the number of heads we got?

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2021-08-23T14:23:31.545Z · LW(p) · GW(p)

:facepalm: I simplified too much, thank you. The second phrasing is what I meant; "it's a summary of knowledge, not a causal mechanism". The first should have illustrated what breaks when substituting a summary for the mechanism, which does require something other than just looking at the summaries with nothing else changed. :D

I guess, let me try to recreate what made me write [the above] and see if I can communicate better.

I think what's going on is that dadadarren is saying to repeat the experiment. We begin with one person, the Original. Then we split, then split each, then split each again, etc. Now we have 2^n people, with histories [Original, Original, ..., Original] and [Original, Original, ..., Clone] and so on. There will be (n choose n/2) people who have participated in the experiment n times and been the Original n/2 times; they have subjectively seen that they came out the Original 50% of the time. But there have also been other people with different subjective impressions, such as the one who was the Original every time. That one's subjective impression is "Original 100%!".

But what happens if each person tries to predict what will happen next by using their experimental results (plus maybe a 50% prior) as an expectation of what they think will happen in the next experiment? Then they'll be much more wrong, collectively, than if they stuck to what they knew about the mechanism, rather than plugging in their subjective impression as the mechanism. So even the "Original n/(n+1)!" person should assign probability 50% to thinking they're the Original after the next split; their summary of past observations has no force over how the experiment works, and since they already knew everything about how the experiment works, it doesn't give them any evidence to actually update on.
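
A small enumeration sketch of this picture (assuming each experiment splits every existing person into an Original-outcome and a Clone-outcome, as described above):

```python
from itertools import product
from math import comb

n = 10  # number of repetitions

# One person per possible history of outcomes: 2^n people in total.
histories = list(product(["Original", "Clone"], repeat=n))
assert len(histories) == 2 ** n

# The number of people who came out the Original exactly k times
# is the binomial coefficient (n choose k).
for k in range(n + 1):
    count = sum(1 for h in histories if h.count("Original") == k)
    assert count == comb(n, k)

# The person with the all-Original history has subjectively seen
# "Original 100%!", yet the mechanism still gives 50% at the next split.
```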

Replies from: dadadarren
comment by dadadarren · 2021-08-23T18:52:11.392Z · LW(p) · GW(p)

> I think what's going on is that dadadarren is saying to repeat the experiment. We begin with one person, the Original. Then we split, then split each, then split each again, etc. Now we have 2^n people, with histories [Original, Original, ..., Original] and [Original, Original, ..., Clone] and so on. There will be (n choose n/2) people who have participated in the experiment n times and been the Original n/2 times; they have subjectively seen that they came out the Original 50% of the time. But there have also been other people with different subjective impressions, such as the one who was the Original every time. That one's subjective impression is "Original 100%!".

Ok, slow down here. What you are describing is repeating the experiment, but not from the subject's first-person perspective. Let's call this a description from a god's eye view. There is no "I" in the problem if you describe the experiment this way. Then how do you ask "the probability that 'I' am the Original"?

What I described in the post is to put yourself inside the subject's shoes. Imagine you are participating in the experiment from a first-person perspective. Hence, after waking up, you know exactly which one is "I", even though there is another copy that is physically indiscernible and you don't know whether you are the Original or the Clone. This self-identification is primitive.

If this seems convoluted, imagine a case of identical twins. Other people have to differentiate them by some observable features. But for a twin himself, this is not needed. He can inherently tell the "I" apart from the other twin, without needing to know the physical difference.

The probability is about "I" being the Original. So in a frequentist analysis, keep the first-person perspective while repeating the experiment. Imagine yourself taking part in the same experiment again and again. Focus on "I" throughout these iterations. In your experience, the relative frequency of "I am the Original" has no reason to converge to any value as the iterations increase.

What you are doing is using the god's eye model instead. Because there is no "I" in this model, you are substituting "I" with "a random/typical copy". That's why I talk about the decision of one person only: the primitively identified "I", while you are talking about all of the copies as a group. Hence you say "Then they'll be much more wrong, collectively".

It seems very natural to regard "I" as a randomly selected observer. Doing so will justify self-locating probabilities. Nonetheless, we should recognize that this is an additional assumption.

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2021-08-23T20:19:46.534Z · LW(p) · GW(p)

Okay, let me try again, then.

I am undergoing this experiment, repeatedly. The first time I do, there will be two people, both of whom remember prepping for this experiment, both of whom may ask "what is the probability I am the Original?" afterwards, one of whom will unknowingly be the Original, one of whom will unknowingly be the Clone. Subjectively, perhaps I was the Original; in that case if I ask "what is the probability I am the Original?" ... okay I'm already stuck. What's wrong with saying "50%"? Sure, there is a fact of the matter, but I don't know what it is. In my ignorance, why should I have a problem with saying 50% I'm the original? Certainly if I bet with someone who can find out the true answer I'm going to think my expectation is 0 at 1:1 odds.

But you say that's meaningless. Fine, let's go with it. We repeat the experiment. I will focus on "I". There are 2^n people, but each of them only has the subjective first-person perspective of themselves (and is instructed to ignore the obvious fact that there are another 2^n-1 people in their situation out there, because somehow that's not "first person" info?? okay). So anyway, there's just me now, after n experiments. A thought pops up in my head: "am I the Original?" and ... well, and I immediately think there's about a 1/2^n chance I'm the Original, and there's a 50% chance I'm the first Clone plus n-1 experiments, and there's a 25% chance I'm the first Clone of the first Clone plus n-2 experiments and a 25% chance I'm the second Clone of the Original plus n-2 experiments and etc.

I have no idea what you mean by "In your experience, the relative frequency of "I am the Original" has no reason to converge to any value as the iterations increase." Of course it does. It converges to 0% as the number of experiments increases, and it equals 1/2^n at every stage. Why wouldn't it? You keep saying it doesn't, but your justification is always "in first-person perspective things are different", and as far as I can see they're not different at all.

Maybe you object to me thinking there are 2^n-1 others around? I'm fine with changing the experiment to randomly kill off one of the two after each experiment so that there's always only one around. Doesn't change my first-person perspective answers in the slightest. Still a 1/2^n chance my history was [not-cloned, not-cloned, not-cloned, ...] and a 1/2^n chance my history was [cloned, not-cloned, not-cloned, ...] and a 1/2^n chance my history was [not-cloned, cloned, not-cloned, ...] and a ... etc.

Replies from: dadadarren
comment by dadadarren · 2021-08-24T01:53:58.111Z · LW(p) · GW(p)

Ok.

You say you are using the first-person perspective to answer the probability "I am the Original", and focusing on yourself in the analysis. However, you keep bringing up that there are two copies, that "one is the Original, the other is the Clone", so the probability "I am the Original" is 50%.

Do you realize that you are equating "I" with "a random one of the two" in this analysis? There is an underlying assumption of "I am a random sample" or "I am a typical observer" here.

For repeating the experiment, I am talking about being the Original in each iteration. You may come out as the Clone from the first experiment. You can still participate in a second experiment; after waking up from the second experiment, you may be the Original (or the Clone) of the second experiment. And no matter which one you are, you can take part in a third experiment. You can come out of the third experiment as the Original (or the Clone) of the third experiment, and so on. Keep doing this, and keep counting how many times you came out as the Original vs the Clone. What is the rationale that they will become roughly equal, i.e. that as you repeat more experiments you will experience being the Original roughly half of the time? Again, the justification would be that "I" am a random copy.

I am not saying the existence of other copies must be ignored. I am saying that if you reason from the first-person perspective, imagining yourself waking up from the experiments, then it is primitively clear that all other copies are not the "I" or "myself" in the self-locating probability question. Because you are very used to taking the god's eye view and considering all copies together (treating "I" as a random sample of all copies), I suggested not paying attention to anyone else, but imagining yourself as a participant and focusing on yourself. But evidently, this doesn't work.

It is a tricky matter to communicate, for sure. If this still seems convoluted, maybe I should use examples with solid numbers and bets to highlight the paradox of self-locating probability. Would you be interested in that?

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2021-08-24T03:01:47.848Z · LW(p) · GW(p)

> examples with solid numbers and bets

Well, yes, sorry for the snark, but... obviously! If you know how to make it concrete with numbers instead of wishy-washy with words, please do so!

Replies from: dadadarren
comment by dadadarren · 2021-08-25T01:55:19.192Z · LW(p) · GW(p)

Alright, please see this post [LW · GW]. Which camp are you in? And how do you answer the related problem?

comment by SarahNibs (GuySrinivasan) · 2021-08-21T03:32:51.164Z · LW(p) · GW(p)

I don't understand this argument.

> The question is asking about a particular person: "I". This reference is inherently understood from my perspective. "I" is the one most immediate to the subjective experience. It is not identified by any objective difference or underlying mechanics. "Who I am" is primitive. There is no way to formulate a probability for it being the Original or the Clone.

This paragraph. Here's where you lost me.

What if the question was "what is the probability that I am the causal descendant of the Original, that in a world without the mad scientist I would still exist?" Is that different from "what is the probability that I am the Original?" If so, how? If not, what's the difference?

Replies from: dadadarren
comment by dadadarren · 2021-08-22T00:33:51.520Z · LW(p) · GW(p)

The Original/Clone refers to the two physical persons in the experiment. One is the physical copy that existed before; the other is created by the mad scientist during the experiment. You can change "the Original" to "the causal descendant of the Original, that in a world without the mad scientist I would still exist". But I don't think that's significant, because the question does not depend on it.

To illustrate this we can change the experiment. Instead of a direct cloning process, now the mad scientist will split you through the middle into two halves: the left part (L) and the right part (R). Then he will complete the two by cloning the missing half onto each. So we still end up with two indiscernible copies, L and R. Now, after waking up the second day, you can ask yourself "what is the probability that I am L?". It is still a self-locating probability. I thought about using this example in the post, since it is more symmetrical, but decided against it because it seems too exotic.

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2021-08-22T00:44:54.460Z · LW(p) · GW(p)

I am convinced that you are confused but I have no idea how to figure out exactly what you're confused about. My best guess is that you don't agree that "a quantification of your uncertainty about propositions" is a good description of probabilities. Regardless, I think that e.g. Measure's objection is better phrased than mine.

comment by JBlack · 2021-08-22T05:01:48.301Z · LW(p) · GW(p)

I do have another question: suppose that the mad scientist (being mad after all) makes a mistake in the copy process such that 99% of the copies end up as mindless drones.

Would learning this change your assessment of being the clone in any way at all, or would you still hold completely the same assessment of "I don't know"?

Replies from: dadadarren
comment by dadadarren · 2021-08-22T21:29:24.946Z · LW(p) · GW(p)

Still the same. All I can say is that I am either the Original or the Clone. The credence for each is still "I don't know".

And this number-crunching goes both ways. Say the mad scientist only succeeds in producing valid Clones in 1% of the experiments. However, when he succeeds, he produces 1 million of them. Then what is the probability of me being the Original? I assume people would say close to 0.

This logic could lead to some strange actions, such as the Brain-Race described by Adam Elga here. You could force someone to act to your liking by making 100s of Clones of him with the same memory: if he doesn't comply, you will torture all these Clones. Then the best strategy for him is to play ball, because he is most likely a Clone. However, he could counter that by making 1000s of Clones of himself that will be tortured if they act to your liking. But you could make 100000s of Clones, and he could make 10000000s, etc.

Replies from: JBlack
comment by JBlack · 2021-08-24T12:17:44.926Z · LW(p) · GW(p)

Personally, no, I wouldn't say close to 0 in that situation. While the expected value of the number of clones is 10000, and hence the expected value of the number of observers is 10001, I can't think of a measure for which dividing by this quantity results in anything sensible. Generally it is not true that E[1/X] = 1/E[X]. While I have seen plenty of messed-up calculations in self-locating probability, I haven't previously seen that particular one.

Regarding the Dr. Evil in the linked scenario, I believe that the whole scenario is pretty much pointless. Even knowing that they might be a Dupe, any cartoon super-villain like that is going to launch the weapon anyway.

Similarly in your scenario, there are factors outside self-locating credence that will affect behaviour. In a world with such cheap and easy remote duplication technology with no defence, people will develop strategies to deal with it. For example, pre-commitment to not comply with terrorist demands regardless of what is threatened. At any rate, a hostage's life is almost certainly going to be very short and probably unpleasant regardless of whether the original complies. It's not like there's even any point in them actually going to the trouble of torturing thousands of duplicates except to say (with little credibility) "now look what you made me do".

As I see it this is just dragging in a whole bunch of extra baggage into the scenario, such as personal and variable notions of personal identity, empathy, and/or altruism that are doing nothing but distract from the question at hand:

whether levels of credence in such situations can be assigned numerical values that obey rules of probability.

Replies from: dadadarren
comment by dadadarren · 2021-08-25T00:37:46.341Z · LW(p) · GW(p)

> Personally, no, I wouldn't say close to 0 in that situation. While the expected value of the number of clones is 10000, and hence the expected value of the number of observers is 10001, I can't think of a measure for which dividing by this quantity results in anything sensible.

Wait, are you saying there is no sensible way to assign a value to the self-locating probability in this case? Or are you disagreeing with this particular way of assigning a self-locating probability while endorsing another method?

Replies from: JBlack
comment by JBlack · 2021-08-25T00:57:40.184Z · LW(p) · GW(p)

You said "I assume people would say close to 0". I don't know why you said that. I don't know how you arrived at that number, or why you would impute it to people in general. The most likely way I could find to arrive at a "close to 0" number was to make an error that I have seen a few times in the context of students calculating probabilities, but not previously in self-locating probabilities.

How did you arrive at the idea that "people would say close to 0"?

Replies from: dadadarren
comment by dadadarren · 2021-08-25T01:48:38.118Z · LW(p) · GW(p)

Because the thirder camp is currently the dominant opinion for the Sleeping Beauty Problem. Because the Self-Indication Assumption has way more supporters than the Self-Sampling Assumption. The Self-Indication Assumption treats "I" as a randomly selected observer from all potentially existing observers, which in this case would give a probability of being the Original close to 0.

I am not saying you have to agree with it. But do you have a method in mind to arrive at a different probability? If so, what is the method? Or do you think there is no sensible probability value for this case?

Replies from: JBlack
comment by JBlack · 2021-08-25T03:11:43.334Z · LW(p) · GW(p)

One possible derivation:

P(mad scientist created no valid clones) = 0.99 as given in the problem description, P(me being the original | no clones exist) = 1, therefore P(me being the original & no clones exist) = 0.99.

P(mad scientist created 1,000,000 clones) = 0.01, P(me being the original | 1,000,000 clones) = 1/1,000,001 ≈ 0.000001. Therefore P(me being the original & 1,000,000 clones exist) ≈ 0.00000001.

P(me being the original) = P(me being the original & no clones exist) + P(me being the original & 1,000,000 clones exist) ≈ 0.99000001, as these are disjoint exhaustive events.

0.99000001 is not "close to 0".
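
For comparison, a minimal sketch contrasting this calculation (an SSA-style average over worlds) with the SIA-style weighting invoked earlier in the thread; the two rules below are the standard readings of those assumptions, applied to this setup:

```python
p_fail = 0.99          # no valid clones created
p_success = 0.01       # 1,000,000 valid clones created
n_clones = 1_000_000

# SSA-style: average over worlds of the fraction of observers
# in that world who are the Original.
p_ssa = p_fail * 1.0 + p_success * (1 / (n_clones + 1))
print(p_ssa)  # ≈ 0.99000001, not close to 0

# SIA-style: weight each possible observer by the probability of
# their world, then take the Original's share of the total weight.
weight_original = p_fail * 1 + p_success * 1
weight_total = p_fail * 1 + p_success * (n_clones + 1)
print(weight_original / weight_total)  # ≈ 0.0001, close to 0
```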

Replies from: dadadarren
comment by dadadarren · 2021-08-25T17:51:49.760Z · LW(p) · GW(p)

You just stated the Self-Sampling Assumption's calculation.

Given that you said "The most likely way I could find to arrive at a "close to 0" number was to make an error that I have seen a few times in the context of students calculating probabilities, but not previously in self-locating probabilities" about the Self-Indication Assumption's method:

Are you endorsing SSA over SIA? Or are you just listing the different camps in anthropic paradoxes?

Replies from: JBlack
comment by JBlack · 2021-08-26T09:08:49.584Z · LW(p) · GW(p)

No, I just forgot about the exact statement of Bostrom's original SIA. It doesn't apply in this case anyway, since it only applies when other things are equal, and here they aren't equal.

comment by bdelloidea · 2021-08-22T17:24:18.743Z · LW(p) · GW(p)

I'm having difficulty understanding exactly what an answer of "such a probability does not exist" means in this context. Suppose we were both subjected to the same experiment, but I then assigned a 50% probability to being the Original: how would our future behaviour differ? In what concrete scenario (other than answering questions about the probability that we are the Original) would you predict us to act differently as a result of this specific difference in belief?

Replies from: dadadarren
comment by dadadarren · 2021-08-22T21:05:37.187Z · LW(p) · GW(p)

Our behavior should be different in many cases. However, based on my past experience, people who accept self-locating probabilities would often find various explanations so that our decisions would still end up the same.

For example, in "Repeating the Experiment" the relative frequency of Me being the Original won't converge on any particular value. If we bet on that, I will say there is no strategy to maximize My personal gain. (There is a strategy to max the combined gain of all copies if everyone abides by it. As reflected by the probability of a randomly sampled copy being Original is 1/2)

On the other hand, you would say that if I repeat the experiment long enough, the relative frequency of me being the Original would converge on 50%, and that the best strategy to maximize my personal gain is to bet accordingly.

The problem with this example is that personal gain can only be verified from the first-person perspective of the subject. A verifiable example would be this: change the original experiment slightly, so the mad scientist only performs the cloning if a fair coin toss lands on Tails. Then, after waking up, how should you guess the probability of Heads? What is the probability of Heads if you learn you are the Original? (Essentially the Sleeping Beauty problem.)

If you endorse self-locating probability, then there are two options. First, the thirder: after waking up, the probability that I am the Original is 2/3 and the probability of Heads is 1/3. After learning I am the Original, the probability of Heads updates to 1/2.

The other option is to say that after waking, the probability of Heads is 1/2 and the probability that I am the Original is 3/4. After learning I am the Original, the probability of Heads needs to be updated. (How to do this update is very problematic, but let's skip it for now. The main point is that the probability of Heads would have to move away from 1/2; with these numbers it updates to 2/3. And this is a very weak camp compared to the thirders.)

Because I reject self-locating probability, I would say the probability of Heads is 1/2, and it is still 1/2 after learning I am the Original. There is no update, because there was no self-locating probability in the first place.
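
For reference, both updates above are plain conditioning on "I am the Original" under each camp's stated numbers; a minimal sketch:

```python
def p_heads_given_original(p_heads_and_original: float,
                           p_original: float) -> float:
    # Conditioning: P(Heads | Original) = P(Heads & Original) / P(Original).
    return p_heads_and_original / p_original

# Thirder: P(Heads) = 1/3, all of it compatible with being the Original;
# P(Original) = 2/3.
print(p_heads_given_original(1 / 3, 2 / 3))  # 0.5

# Halfer: P(Heads) = 1/2 (Original for sure on Heads); P(Original) = 3/4.
print(p_heads_given_original(1 / 2, 3 / 4))  # 2/3
```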

This should result in different betting strategies. Say you have just experienced 100 iterations of this toss-and-cloning and haven't learned whether you were the Original or the Clone in any of those iterations. Now, for each of those 100 iterations, you are offered a bet that costs 2 dollars and pays 5 dollars if that iteration's coin landed on Heads. If you are a thirder, you should not enter these bets, since you believe the probability of Heads is only 1/3, whereas I would enter all of them. But again, based on past experience, thirders would come up with some explanation as to why they would also enter these bets, so our decisions would still be the same.
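
Taking each camp's stated credence at face value (without settling which is correct), the expected value of one such bet comes out with opposite signs; a minimal sketch:

```python
STAKE = 2.0   # cost of entering one bet
PAYOUT = 5.0  # paid if that iteration's coin landed on Heads

def expected_gain(p_heads: float) -> float:
    """Expected net gain of one bet under a given credence in Heads."""
    return p_heads * PAYOUT - STAKE

print(expected_gain(1 / 3))  # thirder: ≈ -0.33, decline the bets
print(expected_gain(1 / 2))  # halfer:  +0.50, enter the bets
```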

Replies from: JBlack
comment by JBlack · 2021-08-24T12:30:45.555Z · LW(p) · GW(p)

> This should result in different betting strategies.

Probabilities are not strategies. Strategy development may make use of probabilities, but only according to models that link various probabilities to outcomes, risk, and heaps of other factors. You can often formulate exactly the same strategies using different models employing different probabilities. Sometimes there is a single simplest model that employs an obvious probability space to yield a clear winning strategy, sometimes there is not.

Depending upon how the payoffs are structured in your bets and how results affect future states, I might or might not enter into any of those bets. You also use the terms "thirder" and "halfer" as if they were fixed personality traits, and not a choice of which probability space to employ in each particular scenario.

comment by JBlack · 2021-08-22T03:21:45.389Z · LW(p) · GW(p)

The main complaint here seems to be the subjectivity of these probabilities. This does not bother me, as from my point of view a probability is any measure on a space that satisfies the axioms of probability. Whether a given model using probabilities matches observed reality depends upon the interpretations that are part of that model.

So essentially, whether Sleeping Beauty "should" think that some probability of it being Monday is 1/2 or 1/3 is of little interest to me. Those are each appropriate probabilities in different models. When you apply either of those models to make actual predictions or strategies, you use these probabilities in different ways and get the same final result either way. So who really cares whether 1/2 of an angel or 1/3 of an angel is dancing on the head of a coin in the interim?

The only real problem arises when someone uses probabilities from one model and misapplies parts of another model to make a prediction from them.

Replies from: dadadarren
comment by dadadarren · 2021-08-22T21:45:56.875Z · LW(p) · GW(p)

If you think 1/2 is a valid probability in its own model, I would assume you are also interested in the probability update rule of that model, i.e. how Beauty can justify the probability of Heads being 1/2 after learning it is Monday.

Replies from: JBlack
comment by JBlack · 2021-08-24T12:34:16.435Z · LW(p) · GW(p)

Why would I be interested in finding a justification for that particular update?

Replies from: dadadarren
comment by dadadarren · 2021-08-25T00:33:47.659Z · LW(p) · GW(p)

Since you said 1/2 is a valid answer in its own model, wouldn't you want to know whether that model is self-consistent, rather than just picking whichever answer seems least problematic?

Replies from: JBlack
comment by JBlack · 2021-08-25T00:51:43.877Z · LW(p) · GW(p)

What I mean is: It seems a bizarre thing to start with a model and then conjure a conclusion and then try to justify that the conclusion is consistent with the model. Why would you assume that I would be interested in doing any such thing?