Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes
post by dadadarren · 2020-11-09T23:17:21.624Z · LW · GW · 11 comments
In a recent post [LW · GW], it was said that the Sleeping Beauty Problem is resoundingly considered resolved in academia. As someone who has been following the literature extensively for the past few years, I can positively say that is not true. The post also used a frequentist argument (in the form of a computer simulation) as support for thirdism. I want to point out the shortcomings of this argument. Most importantly, I want to show what I consider to be the correct frequentist model, and why perspectives should be regarded as fundamental to solving anthropic paradoxes, as I have argued in a previous post [LW · GW].
Per Toss or Per Awakening
If the SB experiment is performed 100 times, then roughly speaking there will be 150 awakenings. Out of these 150 awakenings, about 50 will be after Heads. Neither camp contests this. A computer simulation reiterating it cannot settle the debate.
The problem is that Halfers, in general, suggest the relative frequency should be taken over the total number of tosses: 50/100. Thirders, on the other hand, want to take it over the total number of awakenings: 50/150. No consensus has been reached so far.
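To make the disagreement concrete, here is a minimal simulation sketch (hypothetical code, not from the original post) that produces both camps' numbers from the same run:

```python
import random

def simulate(num_tosses=100_000):
    """Repeat the SB experiment and report the Heads frequency
    under both counting conventions."""
    heads_tosses = 0
    total_awakenings = 0
    for _ in range(num_tosses):
        if random.random() < 0.5:    # Heads: one awakening
            heads_tosses += 1
            total_awakenings += 1
        else:                        # Tails: two awakenings
            total_awakenings += 2
    print("Per toss (Halfer):      ", heads_tosses / num_tosses)        # ~0.5
    print("Per awakening (Thirder):", heads_tosses / total_awakenings)  # ~0.33

simulate()
```

Both numbers come from the very same simulated data; the dispute is only about which denominator is the right one.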
I argue the above interpretations are both wrong. I propose an alternative model in which the relative frequency comes out the same whether counted per toss or per awakening. It requires treating perspectives as fundamental.
Cloning Beauty
For ease of expression, I will first use the following thought experiment, which I call Cloning Beauty. It presents the same anthropic uncertainty as SB. If you doubt that SB and CB are equivalent, do not worry: I will return to SB and present its frequentist model momentarily.
CB is set up the same way as the original Sleeping Beauty. Instead of memory wipes, the subjectively similar instances are obtained by cloning. Omega tosses a coin after Beauty falls asleep. If it lands Heads, nothing happens. If Tails, Omega creates a highly accurate clone of Beauty and puts her in an identical empty room. The cloning is so accurate that even memories are preserved, so a Beauty cannot tell whether she is physically new. When Beauty wakes up the next day, what is the probability of Heads?
For clarity, let's call the pre-existing agent the original, the newly created agent the clone, and any agent a copy.
The same Halfer-vs-Thirder debate exists in CB. After waking up, Beauty could say there is no new information, so the probability is 1/2. Or she could say "I exist, so Tails is more likely," making it 1/3. SSA and SIA give the same answers here as in SB.
Let's repeat this experiment so we can get some frequencies. Here the effect of perspective is prominent. Imagine yourself in Beauty's shoes: I fall asleep, then wake up the next day not knowing what happened. From here on I can enter another iteration of the same experiment, and repeat it again and again. Granted, through these repetitions I may not refer to the same physical copy (just as in the first experiment I might not be the same physical copy after waking up). Yet it is always clear to me which copy the indexical I refers to: it means the first person, which is primitively clear from my perspective. Through these repetitions I would experience Heads and Tails in roughly equal numbers.
If we imagine repeating the experiment from an outsider's perspective (as a third person, from a god's-eye view, with "a view from nowhere," or however you like to call it), the picture is completely different. Indexicals such as I can no longer be used to pinpoint a copy. Discussing a particular Beauty requires selecting her out first, and different selection methods give different probabilities/relative frequencies. E.g., if a copy is selected from all actually existing copies, the probability and relative frequency of Heads would be 1/2. Alternatively, if a copy is selected from all potentially existing copies, given that the selected copy actually exists, the probability and relative frequency would be 1/3.
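Here is a minimal sketch of the two selection methods (hypothetical code; the 1/2 and 1/3 figures are the ones stated above):

```python
import random

def selection_frequencies(num_experiments=100_000):
    """Contrast the two outsider selection methods for Cloning Beauty."""
    actual_heads = 0       # Method 1: select among actually existing copies
    potential_heads = 0    # Method 2: select among potential copies, keep if it exists
    potential_kept = 0
    for _ in range(num_experiments):
        heads = random.random() < 0.5
        # Method 1: pick a random copy among those that actually exist.
        # Heads -> 1 copy, Tails -> 2 copies; either way the picked copy
        # experienced that run's toss, so the Heads frequency tracks the coin.
        if heads:
            actual_heads += 1
        # Method 2: pick among the two potential copies (original, clone),
        # then condition on the picked copy actually existing.
        picked_clone = random.random() < 0.5
        exists = (not picked_clone) or (not heads)  # the clone exists only on Tails
        if exists:
            potential_kept += 1
            if heads:
                potential_heads += 1
    print("Actual-copy selection:   ", actual_heads / num_experiments)    # ~1/2
    print("Potential-copy selection:", potential_heads / potential_kept)  # ~1/3

selection_frequencies()
```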
But none of these selection methods is relevant for Beauty getting the relative frequency from her own viewpoint. In anthropic problems, there are two sets of seemingly impeccable logic. On one hand, from an observer's first-person perspective, the analysis naturally focuses on indexicals such as I or now, whose references are primitively clear to the first person. On the other hand, from an outsider's perspective, it is correct to say no one is innately special: every observer/moment should be treated with indifference. But the two sets of logic stem from different perspectives, so they should be kept parallel to each other. Anthropic paradoxes are caused by trying to forcibly mix them, which leads to treating the primitively identified indexicals, like I or now, as the result of some imaginary sampling process, such as SSA or SIA.
Back to SB's Frequentist Model
SB should have the same frequentist model as CB. The only wrinkle is that the memory wipe of a previous iteration could interrupt a later iteration, but that can easily be avoided. It should be obvious that the exact duration of the SB experiment is inconsequential: the experiment can take two days or two hours, it doesn't matter. As long as a memory wipe happens after Tails, making the two awakenings subjectively similar, it is the same problem.
For simplicity, let's assume the time awake in each iteration is negligible, and let the first awakening of each iteration happen immediately after the experiment starts. After waking up from the first experiment, Beauty can take part in another iteration whose overall duration is only one day instead of the initial two days. After waking up from the second experiment she can take part in another iteration with a duration of half a day, and so on. Subsequent iterations are progressively shorter. This way the later experiments will not be interrupted by the memory resets of previous iterations, so from Beauty's perspective she can have a continuous first-person experience of the repetitions. From an outsider's perspective, this structure is a bifurcating supertask. Theoretically, the repetition could go on to infinity, yet it would all happen within two days.
I argue all frequentist arguments, including arguments using bets and monetary rewards, should employ this model. And from Beauty's perspective, the relative frequency approaches 1/2 as the repetitions go on.
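Here is a minimal sketch of this nested-repetition model (hypothetical code, my illustration of the claim rather than a proof). Beauty's first-person thread records one awakening per iteration, while an outside bookkeeper counts every awakening produced:

```python
import random

def sb_frequencies(num_iterations=100_000):
    """First-person Heads frequency vs. the per-awakening count."""
    my_heads = 0          # tosses experienced along the first-person thread
    heads_awakenings = 0  # awakening bookkeeping, from the outside
    all_awakenings = 0
    for _ in range(num_iterations):
        heads = random.random() < 0.5
        if heads:
            my_heads += 1
            heads_awakenings += 1
            all_awakenings += 1
        else:
            all_awakenings += 2  # a second, subjectively similar awakening exists
    print("First-person frequency: ", my_heads / num_iterations)          # ~1/2
    print("Per-awakening frequency:", heads_awakenings / all_awakenings)  # ~1/3

sb_frequencies()
```

The per-awakening count still comes out near 1/3, but on this model that number belongs to an outsider's way of counting, not to Beauty's own experience of the repetitions.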
11 comments
comment by Maxwell Peterson (maxwell-peterson) · 2020-11-11T20:07:24.207Z · LW(p) · GW(p)
I agree that the specification of which state of knowledge we're under is critical to the solution. Another way to put it is what I'm seeing over and over again as I go through Jaynes' Probability Theory book: different prior information leading to different probability estimates is not paradoxical. That's why specifying prior information with clarity is so important. Under one set of prior information, the probability is 1/3; under another, it's 1/2.
comment by ike · 2020-11-10T03:02:49.675Z · LW(p) · GW(p)
>Through these repetitions I would experience Heads and Tails in roughly equal numbers.
This seems wrong. Over a large number of repetitions, most Beautys end up experiencing twice as many heads as tails.
↑ comment by dadadarren · 2020-11-10T16:34:33.628Z · LW(p) · GW(p)
The first-person experience would give equal numbers of Heads vs Tails. When the toss results in Tails, there exists another copy besides myself. So there are twice as many copies experiencing Tails.
↑ comment by ike · 2020-11-10T18:39:43.111Z · LW(p) · GW(p)
Sorry, yes, I missed that you clone on tails and not heads.
Twice as many copies for tails means in the long run any given copy is likely to be near 2/3rds tails. Where are you getting the opposite result?
↑ comment by dadadarren · 2020-11-11T17:23:00.613Z · LW(p) · GW(p)
>Twice as many copies for tails means in the long run any given copy is likely to be near 2/3rds tails.
The keyword is "any given copy". The debate between the common camps (SSA vs SIA) is how the given copy is selected.
If at the end of the day a random copy is selected, then her expected relative frequency of Heads would be 1/3, because every time Tails occurs, the number of copies doubles. In this model, a principle of indifference applies to all resulting copies.
Alternatively, a copy can be selected at the end of each iteration, so when the repetition finishes we end up with a single chosen one. Here the expected relative frequency of Heads for the selected copy would be 1/2: even though the number of Beauties doubles after Tails, each then has only half the chance of being chosen. In this model, the principle of indifference applies only to copies that experienced the same tosses.
There is no disagreement so far. The debate is this: Thirders would say the first selection model reflects Beauty's probability, while Halfers may say the second selection model does. I am arguing they are both wrong.
Beauty does not perform any selection. The single copy is given by something else: her first-person perspective. The perspective is fundamental; it cannot be logically reduced further. I am this copy, end of story, no further explanation. So the relative frequency should be based on repeating the experiment from her first-person perspective. I.e., imagine being in Beauty's shoes: I fall asleep, I wake up; I fall asleep, I wake up… again and again. Here I would experience about equal numbers of Heads and Tails. Granted, when it is Tails, there would exist another copy (consistent with the fact that according to the first selection model the result would be 1/3), but I don't even have to consider that, because from the first-person perspective it is primitively clear the other copy is not me.
In this first-person model, there is no principle of indifference among copies at all. Indexicals such as I are inherently the focus, while the indifference mentioned earlier is based on selections from an outsider's perspective (or a god's-eye view, or a view from nowhere, if one prefers to call it that). I argue that because they are based on different perspectives, this uniqueness and indifference should not mix. Choose one perspective and stick with it through the entire analysis. Mixing them leads to treating the indexical as the result of some selection process (SSA and SIA) and causes anthropic paradoxes.
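A minimal sketch of the selection models just described (hypothetical code; my illustration of this comment, not part of the original thread):

```python
import random

def random_final_copy(iterations=10, runs=20_000):
    """First selection model: materialize every copy, each tossing her own
    coin and duplicating on Tails, then pick one copy at random at the end."""
    freqs = []
    for _ in range(runs):
        copies = [[]]  # each copy = the list of tosses she experienced
        for _ in range(iterations):
            nxt = []
            for record in copies:
                toss = 'H' if random.random() < 0.5 else 'T'
                nxt.append(record + [toss])
                if toss == 'T':           # Tails: a clone with the same record
                    nxt.append(record + [toss])
            copies = nxt
        pick = random.choice(copies)      # indifference over all resulting copies
        freqs.append(pick.count('H') / iterations)
    return sum(freqs) / len(freqs)        # approaches 1/3

def single_thread(iterations=100_000):
    """The second selection model and the first-person model give the same
    numbers: one copy per iteration, experiencing one fair toss each time.
    The difference is only in how that single copy is singled out."""
    heads = sum(random.random() < 0.5 for _ in range(iterations))
    return heads / iterations             # approaches 1/2

print(random_final_copy())
print(single_thread())
```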
↑ comment by ike · 2020-11-11T19:30:21.270Z · LW(p) · GW(p)
>I am arguing they are both wrong.
You keep saying that you're arguing, but as far as I can tell you just say that everyone's wrong and don't really argue for it. I've pointed out issues with all of your posts and you haven't been responding substantively.
Here, you're assuming that Beauty knows she's not the clone. In that scenario, even thirders would agree the probability of heads is 1/2. This assumption is core to your claims - if not, we don't get "there is no principle of indifference among copies", among other statements above.
↑ comment by dadadarren · 2020-11-12T18:44:04.179Z · LW(p) · GW(p)
Ike, please don’t take this the wrong way, but with every exchange between us ending with your verdict, I find “attempt to explain, not persuade” very difficult to do.
In the above post, I explained the problem with SSA and SIA: they assume a specific imaginary selection process and then base their answers on that, whereas the first-person perspective is primitively given. You judged that as "not substantive".
Previously, you said SIA is the "natural setup" and that with it you don't see any paradoxes. I mentioned that SIA also leads to counterintuitive conclusions such as the SIA Doomsday (filter ahead), the Presumptuous Philosopher, and naive confirmation of MWI and the multiverse. You laid out how SIA could avoid the paradoxical conclusion in each case. I said not all supporters of SIA would agree with you. You said you don't care. I said that each problem requiring a different explanation seems ad hoc, suggesting some underlying problem with SIA. You deemed them not ad hoc.
In another post, I argued that the MWI requires the basic assumption of a perspective-independent objective reality. Your entire response was: "I deny that MWI requires that. In fact, all three of your postulates are incoherent, and I believe in a form of MWI that doesn't require any of them." No explanation.
Here you first didn't notice that Beauty is cloned on Tails, and now you say I claim Beauty knows she's not the clone. I specifically said that through the repetitions the first-person I might not refer to the same physical copy (just as in a single experiment). I only claim that after waking up, it is clear from the first-person perspective which copy I am, which allows further first-person repetitions.
I can’t help but feel many of your comments are purposely argumentative and low effort. I personally don’t find them constructive.
↑ comment by ike · 2020-11-12T21:29:46.472Z · LW(p) · GW(p)
I'm trying to understand your critiques, but I haven't seen any that present an issue for my model of SIA, MWI, or anything else. Either you're critiquing something other than what I mean by SIA etc., or you're explaining them badly, or I'm not understanding the critiques correctly. I don't think it should take ten posts to explain your issues with them, but even so, I've read through your posts and couldn't figure it out.
It might help if you explained what you take SIA and MWI to mean. When you gave a list of assumptions you believed to be entailed by MWI, I said I didn't agree with that. Something similar may be going on with SIA. A fully worked out example showing what SIA and what your proposed alternative say for various scenarios would also help. What statements does PBR say are meaningful? When is a probability meaningful?
↑ comment by ike · 2020-11-12T21:24:45.132Z · LW(p) · GW(p)
From my point of view, you keep making new posts building on your theory/critique of standard anthropic thinking without really responding to the issues. I've tried to get clarifications and failed.
>In the above post, I explained the problem with SSA and SIA: they assume a specific imaginary selection process and then base their answers on that, whereas the first-person perspective is primitively given.
I have no idea what this means.
Re paradoxes, you appear to not understand how SIA would apply to those cases using the framework I laid out. I asked you why those paradoxes apply and you didn't answer. If there are particular SIA advocates that believe the paradoxes apply, you haven't pointed at any of them.
>In another post, I argued that the MWI requires the basic assumption of a perspective-independent objective reality. Your entire response was: "I deny that MWI requires that. In fact, all three of your postulates are incoherent, and I believe in a form of MWI that doesn't require any of them." No explanation.
You gave no explanation for why MWI would imply those statements; why am I expected to spend more time proving a negative than you spent arguing for the positive? You asserted MWI implies those postulates; I asserted otherwise. I've written two posts here arguing for a form of verificationism in which those postulates end up incoherent.
Instead of adding more and more posts to your theory, I think you should zero in on one or two points of disagreement and defend them. Your scenarios and your perspective-based theory are poorly defined, and I can't tell what the theory says in any given case.
↑ comment by dadadarren · 2020-11-13T06:09:38.595Z · LW(p) · GW(p)
My position has been the same. It starts from one assumption: treat perspectives as fundamental axioms. Then reasoning from different perspectives is not mixed; indexicals are perspective-based and thus primitively identified. So we would not treat I or now as the result of some imaginary random sampling, which is what has led to the anthropic debate between SSA and SIA. This was laid out in the first post you replied to.
You say I do not respond to the issues you raised. It appears I simply cannot do so, because every time I provide a detailed explanation I get comments such as "but you are not arguing, you are just asserting," or "that is not a substantive argument," or "I deny what you say." You just give judgments without getting into where you think the reasoning goes wrong. And when you do try to get into the reasoning, you don't engage with what I actually wrote, like saying I assumed Beauty knows she's not the clone while I clearly stated the opposite.
If you don't know what my theory would predict, then give me some scenarios or thought experiments and have me answer them. If you do not understand something I said, ask me to clarify. I would like to answer, because it helps to explain my position and outline our disagreement. (Btw, "the first-person perspective is primitively given" simply means you instinctively know which person you are, because you are experiencing everything from that viewpoint.) But by dismissing my efforts like the above, it just seems you are not interested in that. You just want to argue that your position is the better one.
Regarding the MWI post: if MWI does not require a perspective-independent reality, then what is the universal wave function describing? Sure, we generally accept this objective reality. And if an interpretation suggests otherwise, it is usually deemed its burden to provide such a metaphysics (e.g., the participatory realism of QBism). Sean Carroll regards this as a reason to favor or default to the MWI. I argued that if Thomas Nagel's three steps of how we get the idea of objectivity are correct, then perspective-independent objectivity itself is an assumption. The response I got is that you deny my argument. That's it. What can I possibly say after that? You want me to understand your model of MWI, but when I followed up on your statement that some CI can be considered a special version of MWI and explained why I think that is not possible, I got no feedback from you...
You say I am not pointing to SIA supporters who hold different opinions from yours. That's because you said you don't care. And I find it hard to believe when you say you do not know of any SIA supporters who disagree with your position. For starters, Katja, who brought up the SIA doomsday, actually argued SIA is preferable to SSA due to perspective disagreements. Michael Titelbaum, a thirder who gives many strong arguments against halfers, listed naive confirmation of MWI as a problem. Your position that SIA is the "natural choice" and paradox-free is a very strong claim. (If you are that confident, maybe make a post about it?) Regarding your framework for solving the paradoxes… what is the framework? You gave every single problem its own specific explanation. The framework I see is that your version of SIA is problem-free, and counterintuitive conclusions are always due to something else.
Granted, for open problems like SB or QM it is nearly impossible to convince each other. The productive thing to do would be to try to understand the counterparty's logic and find the root of the disagreement. That is why I ended our earlier discussion by making a list of our different positions: so while we may not agree with each other, at least we understand the different assumptions that lead to the disagreement. But looking back at your comments, I just realize that is not what you are after. You are here to win. Well, I can't keep up with this. So… you win. And as always, you will have the final word.
↑ comment by ike · 2020-11-13T14:37:58.340Z · LW(p) · GW(p)
I've been trying to understand, but your model appears underspecified and I haven't been able to get clarification. I'll try again.
>treat perspectives as fundamental axioms
Have you laid out the axioms anywhere? None of the posts I've seen go into enough detail for me to be able to independently apply your model.
>like saying I assumed Beauty knows she's not the clone while I clearly stated the opposite
This is not clear at all. In this comment you wrote
>"the first-person perspective is primitively given" simply means you instinctively know which person you are, because you are experiencing everything from that viewpoint.
In the earlier comment:
>from the first-person perspective it is primitively clear the other copy is not me.
I don't know how these should be interpreted other than implying that you know you're not a clone (if you're not). If there's another interpretation, please clarify. It also seems obviously false, because "I don't know which person I am among several subjectively indistinguishable persons" is basically tautological.
>If MWI does not require a perspective-independent reality, then what is the universal wave function describing?
It's a model that's useful for prediction. As I said in that post, this is my formulation of MWI; I prefer formulations that don't postulate reality, because I find the concept incoherent.
>But when I followed up on your statement that some CI can be considered a special version of MWI and explained why I think that is not possible, I got no feedback from you...
That was a separate thread, where I was responding to someone who apparently had a broader conception of CI. They never explained what assumptions go into that version, I was merely responding to their point that CI doesn't say much. If you disagree with their conception of CI then my comment doesn't apply.
>Your position that SIA is the "natural choice" and paradox-free is a very strong claim.
It seems natural to me, and none of the paradoxes I've seen are convincing.
>what is the framework
Start with a standard universal prior, plus the assumption that if an entity "exists" in both worlds A and B, and world A "exists" with probability P(A) and world B with probability P(B), then the relative probability of me "being" that entity inside world A, compared to world B, is P(A)/P(B). I can then condition on all the facts I know about me, which collapses this to only the entities that I "can" be given this knowledge.
Per my metaphysics, the words in quotes are not ontological claims but just a description of how the universal prior works - in the end, it spits out probabilities and that's what gets used.
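As a toy numeric sketch of one reading of this framework (hypothetical code; the world names, weights, and entities are invented for illustration, not part of the comment):

```python
# Two candidate worlds with prior weights, each containing the entities
# "I" might be (a Cloning-Beauty-like setup: the Tails world has a clone).
worlds = {
    "A": {"prior": 0.5, "entities": ["original"]},            # Heads-like world
    "B": {"prior": 0.5, "entities": ["original", "clone"]},   # Tails-like world
}

# Unnormalized weight of "being" each entity: the prior of its world.
weights = {(w, e): info["prior"]
           for w, info in worlds.items()
           for e in info["entities"]}

# Condition on everything I know about me (here nothing distinguishes the
# copies, so every entity remains a candidate), then normalize.
total = sum(weights.values())
posterior = {k: v / total for k, v in weights.items()}
p_world_A = sum(p for (w, _), p in posterior.items() if w == "A")
print(p_world_A)  # 0.333... — the thirder-style answer this framework yields here
```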
>If you don't know what my theory would predict, then give me some scenarios or thought experiments and have me answer them.
I would like to understand in what scenarios your theory refuses to assign probabilities. My framework will assign a probability to any observation, but you've acknowledged that there are some questions your theory will refuse to answer, even though there's a simple observation that can be done to answer the question. This is highly counter-intuitive to me.