Sleeping Beauty Problem, Anthropic Principle and Perspective Reasoning
post by dadadarren · 2019-03-09T15:21:02.258Z · LW · GW · 22 comments
Some may notice I have posted this idea before. Unfortunately it has not sparked much discussion. However, I sincerely believe this idea is the key to paradoxes related to anthropic reasoning, so I'm making this post hoping to draw more attention. Feedback is very welcome.
1. What is it about?
To summarize my entire argument in one sentence: reasonings from different perspectives should not mix.
If I say "I'm a man" then it is definitely true and meaningful, as the body most immediate to my perception is that of a human male. To my wife the same statement is obviously false, since she is a woman. It is rather trivial that if "I" is defined by my first-person perspective's center, then the statement may not be valid once switched to someone else's perspective. Of course, in real life if I say "I'm a man" my wife would (hopefully) agree. She may be interpreting the sentence as "My husband is a man". However, "my husband" is still defined by her perspective, and the sentence might be invalid to others (e.g. I do not have a husband). Or we can employ a third person's perspective and reason as an impartial outsider. The statement may become "Dadadarren is a man." Here "the person named Dadadarren" is not defined by any perspective center. Notice the name is used to specify me in the third person, because to an impartial outsider I am just an ordinary person like everybody else. There is no apparent "I" anymore, so some feature must be used to specify me among all people.

My argument is that a logical framework must be fully contained within one perspective. It cannot be partially first-person reasoning and partially third-person reasoning. For example, if my perspective center is used to define "I", then we cannot switch to an outsider's third-person perspective, just as we cannot switch to my wife's perspective. Conversely, if we reason as an outsider and treat every individual as ordinary, i.e. all in the same reference class, then my first-person perspective's center cannot be used to define a self-explanatory "I".
A perspective's center defines not only "I" but also concepts such as "this", "here" and "now". These words are primitively apparent to the first person; no other information is needed to explain them. Only when reasoned about from another perspective do they need to be explained by features. Many anthropic arguments do not recognize this difference. So in their logic "I" sometimes refers to the primitively understood first-person center, while at other times it refers to a specific physical entity differentiated from others by some feature. Using these definitions interchangeably inevitably mixes reasoning from different perspectives. This is why many of these arguments give paradoxical conclusions.
2. Using my own existence as evidence
A point of contention in the sleeping beauty problem is whether I gain new information by waking up in the experiment. Halfers think that since I can only find myself awake, and an awakening is guaranteed, waking up in the experiment gives no new information. Thirders disagree: the notion that I can only find myself awake is inconsequential. There are only two possibilities for today: either there is an awakening or there is not. Waking up gives the new information that there is an awakening, which increases the probability of Tails.
Both theories have intuitive appeal. From a first-person perspective it is true that I would always find that I exist. In fact, by reasoning from the first-person perspective my existence is already acknowledged. In the case of the sleeping beauty problem the halfer is right in saying I can only find myself awake, thus no new information.
It is also possible to reason from a third-person perspective. From a third-person perspective no one's existence is inherently guaranteed. In the sleeping beauty problem, if we reason from an outsider's perspective, then for a specific day it is possible that the experiment subject would just sleep through it. Thirders think waking up today eliminates that possibility and thus is new information. Unfortunately there is a mistake. "Today", like "now", is defined by a perspective's center. It is only meaningful if reasoned about from a first-person perspective. So from the third-person perspective the information available is not that the subject is awake on a specific day, but rather that the subject is awake on an unspecified day, i.e. there is (at least) one awakening in the experiment. This is already known from the experiment setup. The notion of new information is actually caused by mixed reasoning from two perspectives.
3. Are double-halfers un-Bayesian?
Based on the coin toss result and today's date, halfers typically assign probabilities as follows: P(Heads and Monday)=1/2, P(Tails and Monday)=1/4 and P(Tails and Tuesday)=1/4. However, this presents a dilemma when she is told today is Monday. By ordinary Bayesian updating the probability of Heads would rise to 2/3. Since the coin toss can happen after the Monday wake-up, this amounts to predicting that a fair coin toss yet to happen will deviate from half. The infamous Doomsday Argument takes basically the same form. This is highly unpalatable. Alternatively, some argue that after learning today is Monday the probability of Heads should remain at 1/2. However, this seems un-Bayesian. Various rules have been proposed (e.g. by Halpern, Meacham, Briggs) to try to justify this, yet all have very serious drawbacks (e.g. Titelbaum's embarrassment for double-halfers). So the dilemma remains.
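To make the dilemma concrete, here is a minimal sketch (purely illustrative) of the update the halfer assignment above seems to force upon learning that today is Monday:

```python
# Halfer assignment over (coin, day), as stated above
P = {("Heads", "Monday"): 1/2,
     ("Tails", "Monday"): 1/4,
     ("Tails", "Tuesday"): 1/4}

# Ordinary Bayesian update on learning "today is Monday"
p_monday = sum(p for (coin, day), p in P.items() if day == "Monday")
p_heads_given_monday = P[("Heads", "Monday")] / p_monday
print(p_heads_given_monday)  # 2/3 -- a prediction that a fair coin, possibly not yet tossed, deviates from 1/2
```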
This is because in the context of the sleeping beauty problem the probabilities of "today being Monday/Tuesday" do not exist. In other words, "what's the probability of today being Monday/Tuesday?" is an invalid question. It is actually asking, among the two potential awakenings, which one is today's. The generalized question in anthropic reasoning would be "among all entities in my reference class, which one is me?". These questions mix first- and third-person reasoning. On the one hand the reference class is only applicable from a third-person perspective, while on the other hand the question uses my perspective center to specify a self-explanatory "today" or "me". They are essentially asking, from a third-person perspective, which one is my first-person perspective center. Such questions and probabilities are therefore invalid. Because these probabilities do not exist in the first place, there is no Bayesian update to be performed upon learning today is Monday.
4. Is there a discrepancy between Bayesian and Frequentist probability?
One often-seen argument is to repeat the sleeping beauty experiment multiple times. In the long run about 2/3 of the awakenings would be after Tails and 1/3 after Heads. This suggests the frequentist probability of Heads should be 1/3. Halfers would then in effect be arguing that the Bayesian and frequentist probabilities differ. Such a notion is mistaken, once again, due to the mixing of perspectives.
Let's take the first-person perspective of the experiment subject. For ease of narration, consider this experiment (don't worry, I will get back to the sleeping beauty problem momentarily). When you go to sleep tonight a fair coin will be tossed. Depending on how it falls, a highly accurate clone of you may be created and put into an identical room. So when you wake up there is no way of knowing if you are the old copy or the new one. Suppose you have fallen asleep and woken up in the experiment, i.e. experienced an iteration of the experiment. Now, from your first-person perspective, what would a repetition of this experiment be like? Quite obviously it would be you going to sleep once more, with another coin toss and potential cloning, and waking up again. This process can be repeated many times, up to infinity. Among these repetitions half of them would be Heads. Of course, if there exists another copy of you he may go through the same procedure, since for outsiders there is no reason to treat you two differently. But that does not concern you, so it is irrelevant from a first-person perspective.
Based on the same reasoning, the repetitions in the sleeping beauty problem should take the following form. In the first experiment the two potential awakenings are 24 hours apart. After waking up from the first experiment I shall enter another repetition with two potential awakenings 12 hours apart. Another repetition would be 6 hours apart, and so on. (This assumes the time taken for the actual waking and interviewing is zero, for ease of description.) In other words, repeating the sleeping beauty experiment from the first-person perspective is a supertask which can be repeated up to infinity within 24 hours. Obviously half of those awakenings would follow Heads and the other half Tails.
The above process describes the first-person perspective. The third-person perspective is far less complicated. For an outsider the identity of the experiment participant, both before and after the memory wipe, is irrelevant. So experiments performed on different people at different times are all considered repetitions. For an outsider the number of repetitions is simply the number of experiments performed, among which half would have Heads tosses and the other half Tails.
From the participant's first-person perspective the frequency should be counted by awakenings, and at the same time only experiments he subjectively experienced count as repetitions. From an outsider's third-person perspective the frequency should be counted by experiments performed, yet any experiment on any participant counts as a repetition. Only if we mix the two, by counting the frequency by awakenings while counting all experiments performed on different people as repetitions, do we get a frequency of Heads of 1/3. This is again a mix of perspectives.
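A small simulation, assuming the standard setup (one awakening after Heads, two after Tails), shows how the two counting conventions diverge:

```python
import random

def run(n_experiments=100_000):
    heads_experiments = heads_awakenings = tails_awakenings = 0
    for _ in range(n_experiments):
        if random.random() < 0.5:      # Heads: one awakening
            heads_experiments += 1
            heads_awakenings += 1
        else:                          # Tails: two awakenings
            tails_awakenings += 2
    total_awakenings = heads_awakenings + tails_awakenings
    print("Heads frequency per experiment:", heads_experiments / n_experiments)    # ~1/2
    print("Heads frequency per awakening: ", heads_awakenings / total_awakenings)  # ~1/3

run()
```

Counting by experiments gives about 1/2; counting awakenings pooled across all experiments gives about 1/3, which is the mixed accounting described above.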
On a side note, since many arguments are based on bets and their respective rewards, these rewards should also follow the decision maker's perspective. Meaning, when a participant is duplicated her money should be duplicated as well. Only then would her decision reflect the probability.
5. Is perspective disagreement unreasonable?
John Pittard (2015) pointed out that any halfer must affirm robust perspectivism. That is, two people in direct communication can give different probabilities. This has been used by some as a counter to halferism. To see this disagreement consider the following example. (Pittard actually used a slightly more complicated example in his paper. However, the explanation presented here is applicable to both cases.) Just like in the previous example, when you fall asleep you might be cloned, if a coin toss results in Tails. The two of you would be put into identical rooms. If the coin falls Heads, one of the rooms would be empty. The cloning process is highly accurate and retains memory, so you have no way of knowing if you are the old copy or the new one. A friend of yours would randomly choose one of the two rooms to enter. Say she saw you in the room. What's her probability of Heads? And what is yours?
For your friend the question is non-anthropic, so it is very easy. If the coin fell Heads one of the rooms would be empty. If the coin fell Tails then you would be cloned and both rooms would be occupied. Since the room she chose is occupied, this is new evidence favouring Tails. Simple Bayesian updating gives her a probability of Heads of 1/3. For you, seeing your friend gives you no new information about the coin toss, because regardless of whether there is anyone in the other room, you would have the same probability of seeing her (1/2). Therefore whatever probability you assigned upon waking up must remain. Meaning, as a halfer, you should assign the probability of Heads as 1/2.
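For the friend's side, a quick sketch of the Bayesian computation (illustrative only):

```python
# Friend's (non-anthropic) update after finding her randomly chosen room occupied
p_heads, p_tails = 0.5, 0.5
p_occupied_given_heads = 0.5   # Heads: only one of the two rooms is occupied
p_occupied_given_tails = 1.0   # Tails: both rooms are occupied

p_occupied = p_heads * p_occupied_given_heads + p_tails * p_occupied_given_tails
print(p_heads * p_occupied_given_heads / p_occupied)  # 1/3
```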
Here the disagreement is apparent. The two of us are in direct communication and sharing all our information, yet we are giving different probabilities to the same question. To make the situation weirder, both of us think the other's reasoning is correct. Some may find this highly unpalatable and conclude the probability of Heads cannot be anything other than 1/3.
The explanation for this disagreement is quite simple: we are not actually answering the same question. From my friend's perspective the person in this room is any copy of me. If I was cloned she has no meaningful way of telling which specific copy is in this room. Hence she is answering the question "what's the probability of Heads given that the chosen room is occupied (by any copy of me)?". Whereas from my first-person perspective I am defined by my perspective center. So even if there is another copy of me, I am still inherently unique, i.e. nobody else is in my reference class. So I am answering the question "what is the probability of Heads given that I, this person specifically, am in the chosen room?". Because this specificity is purely based on the first-person perspective, it is incommunicable. E.g. I can keep telling her "this is me", but it would mean nothing to her. That's why we would retain our differences yet agree the other is also correct from his/her perspective.
This disagreement is also valid in the frequentist sense. Suppose I go through this experiment 1000 times. I would see my friend in about 500 of them, with half of those following Heads and the other half following Tails. However, in these 1000 experiments she would find the room occupied about 750 times. The extra 250 are the times when she saw the other copy of me following Tails. Of course, in reality my friend may be involved in far more repetitions than 1000, because all the copies produced during the experiments might also want to repeat the experiment by their own count. However, this higher number of repetitions would not change the relative frequency for her. So the frequentist probabilities for the two of us are indeed different.
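As a rough sketch of this counting (it models "me" as one designated copy in each run, which is an assumption of the sketch), a simulation reproduces both relative frequencies:

```python
import random

def run(n=100_000):
    i_see_friend = i_see_friend_heads = 0
    occupied = occupied_heads = 0
    for _ in range(n):
        heads = random.random() < 0.5
        my_room = random.randrange(2)   # the room "I" end up in
        chosen = random.randrange(2)    # the room my friend picks
        friend_sees_me = (chosen == my_room)
        friend_sees_someone = friend_sees_me or not heads  # Tails: the other room holds a copy
        if friend_sees_me:
            i_see_friend += 1
            i_see_friend_heads += heads
        if friend_sees_someone:
            occupied += 1
            occupied_heads += heads
    print("P(Heads | I see my friend):      ", i_see_friend_heads / i_see_friend)  # ~1/2
    print("P(Heads | chosen room occupied): ", occupied_heads / occupied)          # ~1/3

run()
```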
6. Conclusions
To reason from a third-person perspective is to think as some outsider whose center is unrelated to the topic of interest, i.e. uncentered reasoning. The first-person perspective uses its center in various aspects of its logic, i.e. centered reasoning. The traditional treatment (e.g. David Lewis's) regards centered reasoning as uncentered reasoning with additional information: the uncentered information describes the world, and the centered information adds my location on top of that. So uncentered reasoning can be subsumed into centered reasoning. In contrast, I think they are logics from two distinct perspectives that run in parallel. Both centered and uncentered information describe the world, just from different perspectives. Therefore the two systems should not mix. If we recognize the importance of this separation, paradoxes in anthropic reasoning can be well explained.
22 comments
comment by Radford Neal (radford-neal) · 2019-03-10T18:10:36.068Z · LW(p) · GW(p)
This is because in the context of the sleeping beauty problem the probabilities of "today being Monday/Tuesday" do not exist. In other words, "what's the probability of today being Monday/Tuesday?" is an invalid question.
I think here you depart from common-sense realism, in favour of what, I am not sure.
From a common-sense standpoint, it is meaningful for Beauty to consider the probability of it being Monday because she can decide to batter down the door to her room, go outside, and ask someone she encounters what day of the week it is. That she actually has no desire to do this does not render the question meaningless.
↑ comment by dadadarren · 2019-03-11T04:54:27.232Z · LW(p) · GW(p)
Thank you for the counter-argument, Professor Neal. Allow me to explain.
First of all, I want to point out I am not arguing that statements such as "today is Monday" are invalid. Such a statement just says that on the day most immediate to my perception (today), the calendar shows Monday. It is like when I say "I am a man." Nothing wrong with it.
What I am arguing is that in the Sleeping Beauty Problem "the probability of today being Monday (or Tuesday)" is invalid. In this problem the question is asking "out of the two potential awakenings, what's the probability that this is the first/second one?". In such problems the two awakenings are treated as if they belong to the same reference class, but at the same time one of the awakenings (this one, the one today) is also treated as inherently unique, such that no information is needed to single it out from the two. This is self-contradictory. Yes, it is true that from the perspective of an outsider the two awakenings are just like one another and can belong to the same reference class. And from the first-person perspective "today" or "this" or "me" is self-explanatory, so it can specify a particular entity. But these reasonings should not mix, i.e. we cannot single out an entity from a reference class by using the first-person perspective. Imagine receiving an unsolicited message and asking, among all people in the world, who is texting you. The answer you receive: "I'm texting you." While the sender is telling the truth and the message is clear to him, to you, another person who's trying to single out a particular person among a group, it means absolutely nothing. Anthropic questions in the form of "among all entities in the reference class, which one is me?" and the related probabilities all have the same problem. So a question such as "the probability that I am a man" is also invalid. (Note that in the anthropic sense the question is not asking about any random or unknown physical process that determines the gender of my physical body, but whether the conceptual "I", as the first-person center, observes the world from the perspective of a human that is male.)
I also want to clarify that I am not arguing "the probability of today being Monday" is invalid simply because of its wording. For example, imagine you are put to sleep on Sunday and the experimenters toss a coin. If it lands Heads they will inject you with a drug that makes you sleep through Monday and wake up Tuesday morning. Otherwise you wake up on Monday as usual. In this problem, when you wake up, "the probability of today being Monday/Tuesday" is perfectly valid. The two possibilities reflect the outcomes of an unknown coin toss, which can be well understood from a first-person perspective. Compare it to "today being Monday/Tuesday" in the sleeping beauty problem, where a third-person perspective is needed to comprehend the possibilities as elements in the same reference class. The latter requires a mix of reasoning from different perspectives.
↑ comment by Radford Neal (radford-neal) · 2019-03-11T17:33:15.939Z · LW(p) · GW(p)
I'm afraid this makes no sense to me. I think this comes from my not understanding how the concept of a "reference class" can possibly work. So I have no idea what it could mean to "observe the world from the perspective of any human that is male", if observing from that "perspective" is supposed to change the probability (or render the probability meaningless) of some statement that I would take to be about the actual, real, world.
As I've pointed out before, the Sleeping Beauty problem is only barely a thought experiment - with a slight improvement over current memory-affecting drugs, it would be possible to actually run the experiment. It's not like a thought experiment involving hypothetical computer simulation of people's brains, or some such, in which one might perhaps think that common sense reasoning is not applicable.
So consider an actual run of the experiment. Suppose that at the time Beauty agrees to take part in the experiment, she fails to remember that she had already agreed to participate in a different experiment on Monday afternoon. The Sleeping Beauty experimenters have promised to pay her $120 if she completes their experiment, while the other experimenters have promised to pay her $120+X, and her motivation is to maximize the expected amount of her earnings. On some awakening during the Sleeping Beauty experiment, Beauty realizes that she had forgotten about the other experiment, and considers leaving to go participate in it. Of course, she then wouldn't get the $120 for participating in the Sleeping Beauty experiment, but if it's Monday, she would get the $120+X for participating in the other experiment. Now if it's Tuesday, the other experiment has already been cancelled. So she needs to consider the probability that it's Monday in order to make a good decision.
It's not actually relevant to my point, but here is how it seems to me the probabilities work out. Suppose that Beauty has probability p of remembering the other experiment whenever she awakens, and suppose that this is independent for two awakenings (as is consistent with the assumption that her mental state is reset before a second awakening). To simplify matters, let's suppose (and suppose that Beauty also supposes) that p is quite small, so the probability of Beauty remembering the other experiment on both awakenings (if two happen) is negligible.
Since p is small, Beauty's probability for it being Monday given that she has woken and remembered the other experiment should be essentially as usual for this problem, with the answer depending on whether she is a Halfer or a Thirder. (If p were not small, she might need to downgrade the probability of Tuesday because there might be a non-negligible chance that she would have left the experiment on Monday, eliminating the Tuesday awakening.)
If she's a Thirder, when she wakes and remembers the other experiment, she will consider the probability that it is Monday to be 2/3, and will leave for the other experiment if (2/3)(120+X) is greater than 120, that is, if X is greater than 60. If she is a Halfer, it's harder to say, since Halferism is wrong, but let's suppose that she splits the 1/2 probability of two awakenings equally, and hence thinks the probability of it being Monday is 3/4. She will then leave if (3/4)(120+X) is greater than 120, that is, if X is greater than 40. We can also look at things from a frequentist perspective, and ask what her expected payment is if she always decides to leave when she remembers the other experiment. It will be (1-p)120 + p(120+X) conditional on the coin landing Heads, and (1-p)(1-p)120 + p(120+X) conditional on the coin landing Tails, for a total expectation of (1-(3/2)p)120 + p(120+X), ignoring the p-squared term as being negligible. This simplifies to (1-p/2)120 + pX, which is greater than 120 if X is greater than 60, in agreement with the Thirder reasoning.
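A quick numerical check of these expectations (a sketch only; p = 0.01 is an arbitrary small value) confirms that the always-leave policy pays off roughly when X exceeds 60:

```python
def e_always_leave(p, X):
    # Expected payment if Beauty leaves for the other experiment whenever she
    # remembers it. Leaving on a Tuesday yields nothing: the other experiment
    # is already cancelled and the $120 is forfeited.
    e_heads = (1 - p) * 120 + p * (120 + X)
    e_tails = (1 - p) ** 2 * 120 + p * (120 + X)
    return 0.5 * (e_heads + e_tails)

e_never_leave = 120.0  # always completing the Sleeping Beauty experiment

p = 0.01
for X in (40, 50, 60, 70, 80):
    print(X, round(e_always_leave(p, X) - e_never_leave, 4))
# The sign flips near X = 60 (exactly at X = 60*(1-p)),
# matching the Thirder-style conclusion above.
```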
In any case, though, if I've understood you correctly, you deny that there is any meaning to "the probability that it's Monday" in this situation. So how is Beauty supposed to decide what to do?
↑ comment by dadadarren · 2019-03-13T04:09:22.647Z · LW(p) · GW(p)
I do find some of the ideas related to anthropic reasoning hard to express. Let me try another expression and see if it is better. "The probability of me being a man" in the anthropic sense means the probability of me being born into this world as a human male. Or it can be seen as the probability of my soul getting embodied as a human male. This is what I meant by "experiencing the world from the perspective of a man". I think even though "I'm a man" is a valid statement, "the probability of me being a man" does not exist. It might be tempting to say the probability is simply 1/2. But that implies I can only be a man or a woman, and that the probability of me being anything else, such as a chimpanzee or an alien, is zero. There is no basis for that. Further problems involve the possibility of me not being born into this world at all. Trying to assign a value to this probability is impossible. To construct a sample space, a reference class containing "me" is needed. However, from the first-person perspective this "me" is defined by the perspective center. It is inherently unique, i.e. there is nothing else in its reference class. So a first-person identity cannot be used in such questions. Someone has to be identified from a third-person perspective instead. From a third-person perspective no one is inherently special, so to specify someone would involve a process to single him out. Information about this process would determine the reference class. This is why both SSA and SIA argue that the first-person "me" is conceptually equivalent to someone randomly selected from a certain group: that way it gives a reference class, which subsequently allows them to assign a value to such probabilities. However, this equivalency means mixed reasoning from the two perspectives. Using the first-person "me" interchangeably with the proposed third-person identity leads to other anthropic paradoxes.
I do agree that the sleeping beauty experiment is physically possible. For the experiment to take place the memory wipe doesn't even need to be perfect. As long as it is accurate enough to fool the human mind it will work. Since the human mind can only contain a finite amount of information, nothing in theory denies its feasibility. I also appreciate that you present your argument with experiments and numbers. I find discussing clearly defined examples much easier. Allow me to explain our differences.
First of all, I agree with the calculations (with the exception of the frequentist repetitions, which I discussed in Section 4). Secondly, I also agree that maximizing someone's own earnings would force the decisions to reflect the probability. Our difference is regarding how the reward should be handled. The whole reason the sleeping beauty problem is related to anthropic reasoning is that it involves an observer duplication. That is, with the memory wipe, Monday Beauty and Tuesday Beauty are two separate entities (at least from their own perspectives). So they should have distinct rewards. Monday Beauty's correct decision should benefit Monday Beauty alone, and Tuesday Beauty's wrong decision should punish Tuesday Beauty only. In the example you presented that is not the case. The potential reward is always given to whoever exists at the end of the experiment. In this kind of setup, even when Beauty experiences the state reset, her rewards never do. It is as if her money doesn't participate in the duplicating experiment as she does. I disagree with this discrepancy. In this question Beauty is no longer trying to maximize the reward to the obvious first-person "me" but to maximize the cumulative reward at the end of the experiment. This shift in objective means that instead of strategizing directly from the first-person perspective, Beauty should strategize from the perspective of a non-participant, i.e. a third person. So instead of using the first-person center to define a self-explanatory "today", the specific day in question is defined in the third person. For example, it is calculating the probability that the day Beauty remembers the other experiment is a Monday.
Constructing a sleeping beauty experiment with appropriate rewards and repetitions is quite troublesome (for example, questions such as "if you do not have a chance to spend the money is it still a reward?" come into play). So I want to present my argument in a different but also physically feasible experiment. Suppose when you go to sleep tonight a clone of you is created and put into an identical room. The clone is highly accurate and retains memory well enough that he fully believes he is the one who fell asleep yesterday. After waking up in the experiment you ask yourself "what is the probability that I am the original?". My position is that there is no such probability. Notice the "I" here is the self-explanatory "I" of the first-person perspective. If a third-person specification is used instead, e.g. "the probability of a randomly chosen copy being the original", then there is no mix of perspectives and the question is obviously valid. Now suppose that if you guess correctly a reward of $1 is given to you. The experiment is repeated many times, i.e. when you fall asleep again on the second day another clone is created, the same happens the third day, etc. During this process your money is cloned along with you. Every day you get a chance to guess whether you are the original with respect to the previous night. Here, to earn the maximum reward for yourself, the only consideration is to guess correctly. I argue there is no strategy for that objective. Again, notice I'm not trying to come up with a strategy that would maximize the total or average money owned by all copies. The objective is much more direct: to maximize the money owned by me, myself.
↑ comment by Radford Neal (radford-neal) · 2019-03-13T14:35:15.624Z · LW(p) · GW(p)
"The probability of me being a man" in the anthropic sense means the probability of me being born into this world as a human male. Or it can been seen as the probability of my soul getting embodied as a human male. ... even though "I'm a man" is a valid statement "the probability of me being a man" does not exist
Here, you have imported some highly questionable ideas, which would seem to be not at all necessary for analysing the Sleeping Beauty problem. This is my core objection to how Sleeping Beauty is used - it's an almost-non-fantastical problem that people take to have implications for these sorts of anthropic arguments, but when correctly analysed, it does not actually say anything about such issues, except in a negative sense of revealing some arguments to be invalid.
You should also note that your use of "probability" here does not correspond to any use of this word in normal reasoning. To see this, consider "the probability of my having blue eyes". I take this to be in the same class as "the probability of me being a man", but it allows for less-ridiculous thought experiments. Suppose you are a member of an isolated desert tribe. There are no mirrors, and no pools of water in which you could see your reflection. The tribe also has a strong taboo against telling anyone what colour their eyes are. So you don't know what colour your eyes are. Do you maintain that "the probability that my eyes are blue" does not exist? Can't you look at the other members of the tribe, see what fraction have blue eyes, and take that as the probability that you have blue eyes? Note that this may have practical implications regarding how much care you should take to avoid sun exposure, to reduce your chance of developing glaucoma.
I assume that you do think "the probability that my eyes are blue" is meaningful in this scenario. You seem to have in mind only something like prior probabilities, not conditional on any observations. But all actual practical uses of probability are conditional on observations, so your discussion is reminiscent of the proverbial question of "how many angels can dance on the head of a pin?".
I also agree maximizing someone's own earning would force the decisions to reflect the probability.
I'm not sure what exactly you're agreeing about here. Do you maintain that "the probability that it is Monday" does not exist, until Beauty happens to remember the other experiment, at which point it suddenly becomes meaningful? If so, why can't Beauty just imagine that there is some such practical reason to want to know whether it is Monday, calculate what the probability is, and then take that to be the probability of it being Monday even though she doesn't actually need to make a decision for which that probability would be needed? Seems better than claiming that the probability doesn't exist, even though this procedure gives it a well-defined value...
The whole reason sleeping beauty problem is related to anthropic reasoning is because it involves an observer duplication. ... So they should have distinct rewards. Monday beauty's correct decision should benefit Monday Beauty alone...
There's a methodological issue here. I've presented a variation on Sleeping Beauty that I claim shows that "the probability that it's Monday" has to be a meaningful concept for Beauty. You say, "but if I look at a different variation, that argument doesn't go through." Why should that matter, though? If my variation shows that the probability is meaningful, that should be enough. If this shows that Sleeping Beauty is not related to anthropic reasoning, so be it.
However, there's no problem making the reward be for "Beauty in the moment". Suppose that when Beauty wakes up, she sees a plate of cookies. She recognizes them as being freshly baked by a bakery she knows. She also knows that on Mondays, but not Tuesdays, they put an ingredient in the cookies to which she is mildly allergic, causing immediate, painful stomach cramps. She also knows that the cookies are quite delicious. Should she eat a cookie? Adjust the magnitudes of possible pleasure and pain as desired to make the question interesting. Shouldn't the probability of it being Monday be meaningful?
Suppose when you go to sleep tonight a clone of you is created and put into an identical room. The clone is highly accurate and retains memory well enough that he fully believes he is the one who fell asleep yesterday.
Note that this is now a completely fantastical thought experiment, in contrast to the usual Sleeping Beauty problem. It may be impossible in principle, given the quantum no-cloning theorem. I also don't know how this is supposed to work in conjunction with your previous reference to "souls". I don't think this extreme variation actually shows anything interesting, but if it did, you'd need to ask yourself whether the need to resort to this fantasy indicates that you're in "angels dancing on the head of a pin" territory.
↑ comment by dadadarren · 2019-03-14T04:06:59.753Z · LW(p) · GW(p)
Thank you for the speedy reply professor. I was worried that with my slow response you might have lost interest in the discussion.
Forgive me for not discussing the issues in the order you presented. But I feel the most important claim I want to challenge is that the sleeping beauty problem is physically possible while the cloning experiments are strictly fantastical.
In the cloning experiment the goal is not to make a physically exact copy of someone but to make the copy accurate enough that a human could not tell the difference. This is no different from the sleeping beauty problem. Considering the limitations of human cognitive ability and memory, this doesn't remotely require an exact quantum-state copy. Unless you take the position that human memory is so sophisticated that it is quantum-state dependent. But then it means that to revert Beauty's memory to an earlier state would require her brain to change back to a previous quantum state. Complete information about that quantum state cannot be obtained unless she is destroyed at the time, i.e. the sleeping beauty experiment would be against the no-cloning theorem and thus non-feasible as well. Apart from memory there is also the problem of physical differences. It is understood that during the first day Beauty would inevitably undergo some physical changes, e.g. her hair may grow, her skin ages a tiny bit, etc. This is not considered a problem for the experiment because it is understood that humans cannot pick up on such minuscule details. So even with these physical changes Beauty would still think it is her first awakening after waking up the second time. The same principle applies to the cloning example. As long as the copy is physically close enough that human cognitive ability cannot notice any differences, the clone would believe he is the original. In summary, if the sleeping beauty problem is physically possible the cloning example must be as well. In this problem, after waking up in the experiment, "the probability of me being the original" makes no sense. Even if you consider repeating the experiment many times there is no answer to it. Again, it is referring to the primitively understood first-person "me", not someone identified from a third-person perspective, as in "the probability of a randomly selected copy being the original".
As for the idea of my soul getting incarnated into somebody: this is not my idea. Various anthropic schools of thought lead to such an expression. For example, in the Doomsday Argument's prior probability calculation, SSA argues I am equally likely to be born as any human who ever exists. SIA adds on top of it and suggests I am more likely to be born into this world if more humans ever exist. They both closely resemble the idea of a soul being embodied from a pool of candidates. I mentioned this expression because it neatly describes what "the probability of me being a man" refers to in the anthropic context. And let's not lose the big picture here: I am arguing such probabilities do not make sense. So I completely agree with you that using soul embodiment as a device to assign probability is highly questionable. In fact I am arguing such notions are outright wrong.
Regarding the case of eye colour, quite clearly we are not discussing anything resembling the above idea. By surveying other people in the tribe I would know what percentage of the tribesmen have blue eyes. If we say that percentage is the probability of someone having blue eyes, then there is an underlying assumption that this someone is an ordinary member of the tribe. He is not special. This is against the first-person perspective, where I am inherently different from anybody else. That means this person is identified among the tribesmen from a third-person perspective. Therefore that percentage is not the probability that the first-person "I" has blue eyes, but rather the probability that a randomly selected tribesman has blue eyes. An optimal level of avoiding sun exposure can be derived from that survey number. However, it cannot be said that this strategy is optimal for myself. All we know is that if every tribesman follows this strategy then it would be optimal for the tribe as a whole.
I think in using a betting argument there is an underlying assumption that someone trying to maximize her own earnings would follow a strategy determined by the correct probability. With this I agree. However, that holds when the decision maker and the reward receiver are the same person. That is to say, if Beauty is contemplating the probability of "today" being Monday, then the reward for a correct guess should be given to today's Beauty. That's what I meant by saying Monday Beauty's correct decision should reward Monday Beauty alone. In the setup you presented that is not the case. In your setup the objective is to maximize the accumulated earnings. For this objective the concept of a self-explanatory "today" is never used. So the calculation is not reflecting the probability of "today being Monday", but rather the probability that "the day Beauty remembers the other experiment is Monday". Essentially it has the same problem as the eye colour example: the first-person-centered concept of "today" is switched to a third-person identity. If we go back to the cloning experiment, you are arguing that after waking up "the probability of a randomly selected copy being the original" is valid and meaningful. I agree with this. I am arguing that, using the first-person center, "the probability of 'me' being the original" does not exist.
For the cookie experiment, yes, the painful reaction and the delicious bliss are of course meaningful. But that only means "today is Monday" and "today is Tuesday" are both meaningful to her. This I never argued against. However, if a probability of "today is Monday" exists, then there should be an optimal strategy for "Beauty in the moment" to maximize her pleasure. Notice strategies exist to maximize the pleasure throughout the two-day experiment. Strategies also exist to optimize the pleasure of the Beauty who exists at the end of the experiment. But there is no strategy to maximize the pleasure of this self-apparent "Beauty in the moment". We can even repeat the experiment for this "today's Beauty": let her sleep now and enter another round of the sleeping beauty experiment. Instead of the two potential awakenings being 24 hours apart, this time they are 12 hours apart. Since this new experiment fits into one day, Beauty would not experience the memory wipe from the original experiment. (Here I'm assuming the actual awakening and interviewing take no time, for ease of expression.) Again, in the first awakening she would be given the allergenic cookies and in the second awakening the good cookies. When she wakes up she would be facing the same choice again. We can repeat the experiment further, with later iterations' awakenings closer and closer together. But there is no strategy to maximize a "Beauty in the moment's" overall pleasure. (This shows why I want to use the cloning example: repeating the sleeping beauty experiment from Beauty's first-person perspective is very messy, and questions such as "if the pain is completely forgotten does it still matter?" come into play.)
↑ comment by Radford Neal (radford-neal) · 2019-03-17T02:23:24.742Z · LW(p) · GW(p)
I don't think the issue of whether "cloning" is possible is actually crucial for this discussion, but since this relates to a common lesswrong sort of assumption, I'll elaborate on it. I do think that making a sufficiently accurate copy is probably possible in principle (but obviously not now, and perhaps never, in practice). However, I don't think this has been established. It seems conceivable that quantum effects are crucial to consciousness - certainly physics gives us no fundamental reason to rule this out. If this is true, then "cloning" (not the usual use of the word) by measuring the state of someone's body and then constructing a duplicate will not work - the measurement will not be adequate to produce a good copy. This possibility is compatible with there being some very good memory-erasing drug, which need only act on the quantum state of the person in a suitable way, without "measuring" it in its entirety. So I don't agree with your statement that "if the sleeping beauty problem is physically possible the cloning example must be as well". And even if true in principle, there is a vast difference in practice between developing a slightly better amnesia drug - I wouldn't be surprised if this was done tomorrow - and developing a way of measuring the state of someone's brain accurately enough to produce a replica, and then also developing a way of constructing such a replica from the measurement data - my best guess is that this will never be possible.
This practical difference relates to a different sense in which your cloning example is "fantastic". Even if we were sure that it was possible in principle to "clone" people, we should not be sure that the methods of reasoning that we have evolved (biologically and culturally) will be adequate in a situation where this sort of thing happens. It would be like asking a human from 100,000 years ago to speculate on the social effects of Twitter. With social experience confined to a tribe of a dozen closely-related people, with occasional interactions with a few other tribes, not to mention a total ignorance of the relevant technology, they would be utterly incompetent to reason about how billions of people will interact when reacting to online text and video postings.
In this discussion, I get the impression that considering fantastical things like cloning leads you to discard common-sense realism. Uncritically applying our current common-sense notions might indeed be invalid in a world where you can be duplicated - with the duplicate having perhaps first had its memories maliciously edited. There are lots of interesting, and difficult, issues here. But these are not issues that need to be settled in order to settle the Sleeping Beauty problem!
In your cloning example, you abandon common sense realism for no good reason. Since you talk about an original versus the clone, I take it that you see the experimenters as measuring the state of the original, without substantially disturbing it, and then creating a copy (as opposed to using a destructive measurement process, and then creating two copies, since then there is obviously no "original" left). In this situation, the distinction between the original and the copy is completely clear to any observer of the process. When they wake up, both the copy and the original do not know whether or not they are the original, but nevertheless one is the original and one is not. They can find out simply by asking the observer (and of course there are other possible ways - as is true for any fact about the world). Before they find out, they can assess the probability that they are the original, if that amuses them, or is necessary for some purpose. Nothing about this situation justifies abandoning the usual idea that probabilities regarding facts about the world are meaningful.
Regarding the cookies, you say "there is no strategy to maximize a "beauty in the moments" overall pleasure". So once again, I ask: How is Beauty supposed to decide whether to eat a cookie or not?
↑ comment by dadadarren · 2019-03-18T14:48:47.960Z · LW(p) · GW(p)
You mentioned that if our consciousness is quantum-state dependent then creating a clone with indistinguishable memory would be impossible (because duplicating someone's memory would require complete information about his current quantum state, if I understand correctly). But at the same time you said the sleeping beauty experiment is still possible, since memory erasing only requires acting on the quantum state of the person without measuring it in its entirety. But wouldn't the action's end goal be to revert the current state to a previous (Sunday night's) one? That would ultimately require Beauty's quantum state to be measured on Sunday night. Unless there is some mechanism to exactly reverse the effect of time on something, but that to me appears even more unrealistic. I do agree that the practical difficulty of the two experiments is different. Cloning with memory does require more advanced technology to carry out. However, I think that does not change how we analyze the experiments or affect the probability calculations. Furthermore, I do not think this difference in technical difficulty means we are too primitive to ponder the cloning example while the sleeping beauty problem is fair game.
The reason I bring up the cloning example is that it makes my argument a lot easier to express than using the sleeping beauty problem. You think the two problems are significantly different because one may be impossible in theory while the other is definitely feasible. So I felt obligated to show the two problems are similar, especially concerning theoretical feasibility. If you don't feel theoretical feasibility is crucial to the discussion I'm OK to drop it from here on. One thing I want to point out is that all arguments made using the cloning experiment can be made using the sleeping beauty problem. It is just that the expression would be very long-winded and messy.
You mentioned that no matter how we put it, one of the copies is the original while the other is the clone. Again I agree with that. I am not arguing "I am the original" is a meaningless statement. I am arguing "the probability of me being the original" is invalid. And it is not because being the original or the clone makes no difference to the participant, but because in this question the first-person, self-explanatory concept of "me" should not be used. From the participant's first-person perspective, imagine repeating this experiment. You fall asleep, undergo the cloning, and wake up again. After this awakening you can guess again whether you are the original in this new experiment. This process can be repeated as many times as you want. Now we have a series of experiments of which you have first-person subjective experience. However, there is no reason the relative frequency of you being the original in this series of experiments would converge to any particular value.
Of course one could argue the probability must be half because half of the resulting copies are originals and the other half are clones. However, this reasoning is actually thinking from the perspective of an outsider. It treats the resulting copies as entities from the same reference class. So it is in conflict with using the first-person "me" in the question. This reasoning is applicable if the entity in question is singled out among the copies from a third-person perspective, e.g. "the probability of a randomly selected copy being the original." Whereas the process described in the previous paragraph is strictly from the participant's first-person perspective and in line with the use of the first-person "me".
Now we can modify the experiment slightly such that the cloning only happens if a coin toss lands on Tails. This way it exactly mirrors the sleeping beauty problem. After waking up we can give each of them a cookie that is delicious to the original but painful to the clone. Because from the first-person perspective repeating the experiment would not converge to a relative frequency, there is no way to come up with a strategy for the participant to decide whether or not to eat it that will benefit "me" in the long run. In other words, if Beauty's only concern is the subjective pleasure and pain of the apparent first-person "me", then probability calculation cannot help her make a choice. Beauty has no rational way of deciding whether to eat the cookie or not.
↑ comment by Radford Neal (radford-neal) · 2019-03-18T15:21:37.516Z · LW(p) · GW(p)
Regarding cloning, we have very good reason to think that good-enough memory erasure is possible, because this sort of thing happens in reality - we do forget things, and we forget all events after some traumas. Moreover, there are plausible paths to creating a suitable drug. For example, it could be that newly-created memories in the hippocampus are stored in molecular structures that do not have various side-chains that accumulate with time, so a drug that just destroyed the molecules without these side-chains would erase recent memories, but not older ones. Such a drug could very well exist even if consciousness has a quantum aspect to it that would rule out duplication.
I don't see how your argument that the first person "me" perspective renders probability statements "invalid" can apply to Sleeping Beauty problems without also rendering invalid all uses of probability in practice. When deciding whether or not to undergo some medical procedure, for example, all the information I have about its safety and efficacy is of "third-person" form. So it doesn't apply to me? That can't be right.
It also can't be right that Beauty has "no rational way of deciding to eat the cookie or not". The experiment is only slightly fantastical, requiring a memory-erasing drug that could well be invented tomorrow, without that being surprising. If your theory of rationality can't handle this situation, there is something seriously wrong with it.
↑ comment by dadadarren · 2019-03-20T14:31:02.670Z · LW(p) · GW(p)
I think while comparing the cloning and sleeping beauty problems you are not holding them to the same standard. You said we have good reason to think that "good-enough" memory erasure is possible. By good-enough I think you mean the end result might not be 100% identical to a previous mental state, but the difference is too small for human cognition to notice. So I think when talking about cloning the same leniency should be given, and we shouldn't insist on an exact quantum copy either. You also suggested that if our mental state is determined by our brain structure at a molecular level then it can be easily reverted, but that cloning would be impossible if our mind is determined by the brain at a quantum level. If our mind is determined at a quantum level, simply reverting the molecular structure would not be enough to recreate a previous mental state either. I feel you are giving the sleeping beauty problem an easy pass here.
Why would the use of the first-person "me" render all use of probability invalid? Regarding the risk of a medical procedure, we are talking about an event with different possible outcomes that we cannot reliably predict for certain. Unlike the eye-colour example you presented earlier, this uncertainty can be well understood from the first-person perspective. For example, when talking about the probability of winning the lottery you can interpret it from the third-person perspective and say that if everyone in the world enters then only one person would win. But it is also possible to interpret it from the first-person perspective and say that if I buy 7 billion tickets I would have only 1 winning ticket (or if I enter the same lottery 7 billion times I would only win once). They both work. Imagine while repeating the cloning experiment, after each wake-up you toss a fair coin before going back to sleep for the next repetition of cloning. As the number of repetitions increases, the relative frequency of Heads among the coin tosses experienced by "me" would approach 1/2. However, there is no reason the relative frequency of "me" being the original would converge to any value as the number of repetitions increases.
The reason there is no way to decide whether or not to eat the cookie is that the only objective is to maximize the pleasure of the self-explanatory "me", and the reward is linked to "me" being the original. It is not only that my theory cannot handle the situation; I am arguing the situation is set up in a way no theory could handle. People claiming Beauty can make a rational decision are either changing the objective (e.g. being altruistic towards other copies instead of just the simple self) or not using the first-person "me" (e.g. trying to maximize the pleasure of a person defined by some feature or selection process instead of this self-explanatory me).
↑ comment by Radford Neal (radford-neal) · 2019-03-20T19:47:55.084Z · LW(p) · GW(p)
Sleeping Beauty with cookies is an almost-realistic situation. I could easily create an analogous situation that is fully realistic (e.g., by modifying my Sailor's Child problem). Beauty will decide somehow whether or not to eat a cookie. If Beauty has no rational basis for making her decision, then I think she has no rational basis for making any decision. Denial of the existence of rationality is of course a possible position to take, but it's a position that by its nature is one that it is not profitable to try to discuss rationally.
↑ comment by dadadarren · 2019-03-21T14:18:44.826Z · LW(p) · GW(p)
Beauty can make a rational decision if she changes the objective. Instead of the first-person apparent "I", if she tries to maximize the utility of a person distinguishable from a third-person perspective, then a rational decision can be made. The problem is that in almost all anthropic schools of thought the first-person center is used without discrimination. E.g. in the sleeping beauty problem the new evidence is that I'm awake "today". The Doomsday Argument considers "my" birth rank. In SIA's rebuttal to the Doomsday Argument, the evidence supporting more observers is that "I" exist. In such logic it doesn't matter that when you read the argument the "I" in your mind is a different physical person from the "I" in my mind when I read the same argument. Since the "I" or "now" is defined by the first-person center in their logic, it should be used the same way in the decision making as well. The fact that a rational decision cannot be made while using the self-apparent "I" only shows there is a problem with the objective: using the self-apparent concept of "I" or "now" indiscriminately in anthropic reasoning is wrong.
Actually in this regard my idea is quite similar to your FNC. Of course there are obvious differences. But I think a discussion of that deserves another thread.
I have a feeling that our discussion here is coming to an end. While we didn't convince each other, as expected for any anthropic-related discussion, I still feel I have gained something out of it. It forced me to try to think and express myself more clearly and better structure my argument. I also want to think I have a better understanding of the potential counter-arguments. For that I want to express my gratitude.
comment by avturchin · 2019-03-09T22:47:59.111Z · LW(p) · GW(p)
So, what is your view on the Doomsday Argument from this perspective?
↑ comment by dadadarren · 2019-03-10T02:46:00.434Z · LW(p) · GW(p)
Based on my argument the Sleeping Beauty Problem's answer is double halving. So the Doomsday Argument and the Presumptuous Philosopher are invalid.
More specifically, there is no such thing as "the probability distribution of my birth rank among all humans". To treat all humans as being in the same reference class requires an outsider's third-person perspective. But from this perspective there is no self-explanatory "I". If reasoned from a first-person perspective then "I" is self-explanatory, as to myself I am inherently unique. But then nobody else is in the same reference class as I am. So the proposed probability distribution mixes reasoning from different perspectives and is therefore invalid.
To make the statement perspectively consistent, the individual in question must be specified not by a perspective center (such as "I" or "now") but by some objective feature that differentiates it from all other humans. E.g. it is valid to ask about "the probability distribution of the tallest person's birth rank among all humans". In this case it is theoretically correct to perform a Bayesian update once the birth rank is known. However, this information won't be available until all humans are born. So no supernatural predictive power of the kind suggested by the Doomsday Argument is present.
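For concreteness, here is a minimal sketch of the kind of birth-rank update described above, applied to a feature-specified individual. The two candidate population totals, the 50/50 prior and the observed rank are purely hypothetical illustration values, not figures from the original discussion.

```python
# Toy sketch of a Bayesian update on a known birth rank.
# All numbers below are hypothetical illustration values.

def posterior_over_totals(rank, totals, prior):
    """Return P(total | rank), assuming the rank is uniform on 1..total."""
    likelihoods = [1.0 / n if rank <= n else 0.0 for n in totals]
    unnorm = [lk * p for lk, p in zip(likelihoods, prior)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Two hypotheses about the total number of humans ever born:
# 200 billion ("doom soon") vs 200 trillion ("doom late"), equal priors,
# and a known birth rank of 100 billion for the feature-specified person.
print(posterior_over_totals(rank=100e9, totals=[200e9, 200e12], prior=[0.5, 0.5]))
# -> approximately [0.999, 0.001]: the smaller total is strongly favoured.
```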
comment by avturchin · 2019-03-10T09:31:24.791Z
But the Copernican mediocrity principle still holds, and I can guess that you were not born on the 1st of January?
comment by dadadarren · 2019-03-10T15:29:18.175Z
If the mediocrity principle holds, how can we specify one particular individual among all humans simply by uttering the word "I", while specifying anyone else requires using their unique features to differentiate them from the other humans? If "I" is understood to be this special person, isn't it self-contradictory to say "I" am also mediocre?
From the first-person perspective the concept of "I" is self-explanatory. Using the word is enough to specify me, since to me I am inherently distinct from everything else. If we take an outsider's view and employ a third-person perspective, then all human beings are in the same reference class and the mediocrity principle applies. But then an individual has to be specified by features that differentiate it from the rest. Pick one, not both.
comment by avturchin · 2019-03-10T16:22:01.598Z
I think that most of my properties, like birth date or name, are still random relative to the "actual me", as if they were externally observable random variables. I don't see anything non-random in my birth date, age, name or any other "important" properties.
Also, is your theory similar to full non-indexical conditioning (FNC)?
comment by dadadarren · 2019-03-10T17:45:49.055Z
I'm not sure what "actual me" stands for in this context. If we take a third-person view and randomly specify one individual among all humans, then that person's birthday is random. If we take the first-person perspective and think about the birthday of "me", as defined by the perspective center, then there is no reason to treat it as random. Of course there is some small randomness in the actual time of labour. But there is no reason to think I could have been born in the 2nd century and lived as someone else in history, or that the fact I'm living in the current era is a random outcome.
Name is a different story. I can treat my name as a random variable if the naming process can be seen as a random event. This random nature of naming can be fully understood from both the first-person and the third-person perspective. For comparison, consider this: during the two days of the Sleeping Beauty Problem, one day is randomly chosen and the room is painted red; on the other day the room is painted blue. When I wake up in the experiment, "the probability that today is Monday" is invalid. However, "the probability that the room is painted red today" is a perfectly valid question. Both questions use the first-person center to define "today". But the former has to switch to the third person to put the two awakenings into the same reference class, while the latter only concerns the outcome of a random process, which is comprehensible from the first-person perspective. The former is analogous to my birthday; the latter is analogous to my name.
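A small simulation may make the red-room question concrete. It only assumes, as in the example above, that the experimenter picks the red day by a fair coin; the point is that whichever day "today" happens to be, the chance that this day was painted red is 1/2, so the question has a well-defined answer without any probability distribution over "today" itself.

```python
# Illustrative simulation of the red/blue-room example: one of the two days
# is painted red by a fair coin. The red-room probability is 1/2 on either day.
import random

def red_room_frequencies(trials=100_000):
    red_on_monday = sum(random.choice(["Mon", "Tue"]) == "Mon" for _ in range(trials))
    print("fraction of runs where Monday's room is red :", red_on_monday / trials)
    print("fraction of runs where Tuesday's room is red:", 1 - red_on_monday / trials)

red_room_frequencies()  # both fractions come out near 0.5
```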
Lastly, no, my theory is in almost every way opposed to FNC. They give different answers to the Sleeping Beauty Problem. In fact I'm arguing SSA, SIA and FNC are all false because they mix reasoning from two perspectives.
comment by avturchin · 2019-03-10T18:48:18.180Z
"Actually me" - it is me now, which is the only person about whom I can think in first perspective. The date of birth here is the month and day of my birth, excluding year. Month and day are random, so they could take any value between 1 Jan and 31 Dec. Despite my knowledge of the day and month of birth, they still look like they are randomly selected form this interval. So full knowledge doesn't prevent randomness of observables.
The randomness of the year of my birth is a more difficult question. It can't be before around 1800, as nobody before that date discussed ideas of this type (the first was Laplace with his Rule of Succession).
comment by dadadarren · 2019-03-11T02:46:35.805Z
So by "actual me" we are talking about me as from first-person perspective. Then I stand by my previous comment. There is no reason to think the time of my birth is random. Apart from the small randomness in the actual time of labour. I do not see why our discussion should exclude year and only consider month and date. Especially considering the Doomsday argument is talking about the entire history of humans. Even if only the month and dates are considered I fail to see why my birthday can be any value between 1Jan to 31 Dec. My birthday is mid August. There is no way for me to be born on 1 Jan unless we are talking about someone else entirely.
Unless, of course, you are treating the starting point of our calendar year as random. That has its merit, since that point is rather arbitrary and bears no logical significance. In that sense, yes, technically the calendar date of my birth is random. However, this only means that how we denote my time of birth is random, not that my actual birth time itself is random.
I am confused by your last paragraph, that my birth year cannot be before 1800. Instead of treating all human beings as being in the same reference class, as the ordinary Doomsday Argument suggests, are you suggesting that only people who have ever considered ideas like the Doomsday Argument are in the reference class? That is quite an extraordinary statement. I know such ideas exist but I have never seen a serious proponent. In comparison, I think what I'm proposing is much more straightforward: if I employ the first-person perspective and use introspection to identify me, or the "actual me", which is the only person there is to identify, then within this logical framework I am inherently unique. The only person in my reference class is me alone.
comment by avturchin · 2019-03-11T09:54:31.616Z
All the discussion about days and months was to illustrate that despite my uniqueness, my properties (those which are not relevant to knowing about the DA) still look randomly chosen from the set of all available properties. In other words, my month of birth and the fact that I know about the DA are "mutually random" variables. It is mostly the same for the year of birth, but there it is more complicated, as not all years are fit for random picking, only those after approximately 1800.
For example, if I know that your birthday is in the 8th month of the year, but I don't know the total number of months in a year, I can estimate that with 50 per cent probability the year has no more than 16 months, which is close to the real value of 12. My birthday is in the 9th month, so my estimate would be 18 months.
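A rough calibration check of this kind of estimate, purely for illustration: if a birth month r were uniformly drawn from 1..N, the guess "N is at most 2r" would be right roughly half the time. The true month count of 12 below is just the example's assumption.

```python
# Check how often the Copernican-style guess "N <= 2r" holds when the
# observed rank r is uniform on 1..N.
import random

def coverage(true_n=12, trials=100_000):
    hits = sum(true_n <= 2 * random.randint(1, true_n) for _ in range(trials))
    return hits / trials

print(coverage())  # about 0.58 for N = 12; approaches 0.5 as N grows large
```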
Yes, I think that the most correct reference class is the class of those who know about the DA. This implies the "end" will come soon, within decades. I have a long text which overviews this and my other ideas about the DA here.
comment by dadadarren · 2019-03-12T00:37:02.634Z
Guessing the number of months from my birthday is essentially treating me as a sample. It is a valid method because if we took the birthdays of a large number of people, the estimates would converge on the true value. However, in this logical framework no individual is inherently unique; everyone belongs to the same reference class. So here we shouldn't use the concept of the first-person me, or the "actual me", which automatically stands out from all others. Instead each individual, including myself, should be treated equally and be specified from a third-person perspective by features. And as I have argued previously, the Doomsday Argument does not work if the individual is specified by features. It requires using the first-person "me" and treating it as an ordinary member of a false reference class.