Posts

The Perspective-based Explanation to the Reflective Inconsistency Paradox 2024-01-26T19:00:38.920Z
dadadarren's Shortform 2023-10-22T11:21:09.548Z
Which Anaesthetic To Choose? 2023-10-14T14:55:42.572Z
Perspective Based Reasoning Could Absolve CDT 2023-10-08T11:22:49.458Z
Which Questions Are Anthropic Questions? 2023-08-31T15:15:39.964Z
Why am I Me? 2023-06-25T12:07:03.244Z
Cognitive Instability, Physicalism, and Free Will 2022-07-16T13:13:01.915Z
An Alternative Interpretation of Physics 2022-05-09T00:52:14.352Z
Primitive Perspectives and Sleeping Beauty 2022-03-26T01:55:39.460Z
The First-Person Perspective Is Not A Random Sample 2022-03-07T16:31:59.088Z
Full Non-Indexical Conditioning Also Assumes A Self Sampling Process. 2022-02-12T01:46:42.005Z
The First Person And The Physical Person 2022-01-10T19:47:51.325Z
You Don't Need Anthropics To Do Science 2021-11-07T15:07:03.266Z
Better and Worse Ways of Stating SIA 2021-10-28T16:04:22.333Z
Don't Use the "God's-Eye View" in Anthropic Problems. 2021-10-26T13:47:53.386Z
Consciousness, Free Will and Scientific Objectivity in Perspective-Based Reasoning 2021-10-14T18:00:10.681Z
The Validity of Self-Locating Probabilities (Pt. 2) 2021-08-25T01:53:17.616Z
The Validity of Self-Locating Probabilities 2021-08-21T02:53:13.579Z
Absent-Minded Driver and Self-Locating Probabilities 2021-08-14T00:09:20.347Z
Should VS Would and Newcomb's Paradox 2021-07-03T23:45:29.655Z
Anthropics and Embedded Agency 2021-06-26T01:45:06.880Z
Anthropic Paradoxes and Self Reference 2021-06-06T02:52:12.132Z
"Who I am" is an axiom. 2021-04-25T21:59:10.566Z
A Simplified Version of Perspective Solution to the Sleeping Beauty Problem 2020-12-31T18:27:14.349Z
Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes 2020-11-09T23:17:21.624Z
Why I Prefer the Copenhagen Interpretation(s) 2020-10-31T21:06:02.500Z
Leslie's Firing Squad Can't Save The Fine-Tuning Argument 2020-09-09T15:21:19.084Z
Hello ordinary folks, I'm the Chosen One 2020-09-04T19:59:10.799Z
Anthropic Reasoning and Perspective-Based Arguments 2020-09-01T12:36:41.444Z
Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? 2019-08-01T20:10:46.445Z
Sleeping Beauty Problem, Anthropic Principle and Perspective Reasoning 2019-03-09T15:21:02.258Z
Perspective Reasoning and the Sleeping Beauty Problem 2018-11-22T11:55:22.114Z
The Sleeping Beauty Problem and The Doomsday Argument Can Be Explained by Perspective Inconsistency 2018-08-05T13:45:27.185Z

Comments

Comment by dadadarren on Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? · 2024-02-04T16:34:40.228Z · LW · GW

This post highlights my problem with your approach: I just don't see a clear logic dictating which interpretation to use in a given problem—whether it's the specific first-person instance or any instance in some reference class. 

When Alice meets Bob, you are saying she should construe it as "I meet Bob in the experiment (on any day)" instead of "I meet Bob today" because "both awakenings are happening to her, not another person". This personhood continuity, in your opinion, is based on what? Given that you have distinguished the memory-erasure problem from the fission problem, I would venture to guess you identify personhood by the physical body. If that's the case, would it be correct to say you regard anthropic problems using memory erasure as fundamentally different from problems with fission or cloning? Entertain this: what if the exact procedure is not disclosed to you? E.g. there is a chance that the "memory erasure" is actually achieved by creating a clone of Alice, waking that clone on Monday, then destroying it, and waking the original on Tuesday. What would Alice's probability calculation be then? Would anything change if fission were used instead of cloning? What would Alice's probability of Tails be when she sees Bob, given she is unsure of the exact procedure?

Furthermore, you are holding that if she saw Bob, Alice should interpret it as "I have met Bob (on some day) in the experiment". But if she didn't see Bob, she shall interpret it as "I haven't met Bob specifically today". In other words, whether to use "specifically today" or "some day" depends on whether or not she sees Bob. Does this not seem problematic at all to you?

I'm not sure what you mean in your example. Beauty is awakened on Monday with 50% chance; if she is awakened, then what happens? Nothing? The experiment just ends, perhaps with an inconsequential fair coin toss anyway? If she is not awakened, then if the coin toss is Tails she wakes on Tuesday? Is that the setup? I fail to see any anthropic element in this question at all. Of course I would update the probability to favour Tails in this case upon awakening, because that is new information for me: I wasn't sure I would find myself awake during the experiment at all.

Comment by dadadarren on The Perspective-based Explanation to the Reflective Inconsistency Paradox · 2024-02-01T19:33:35.608Z · LW · GW

I guess my main problem with your approach is that I don't see a clear rationale for which probability to use, or when to interpret the evidence as "I see green" and when as "anyone sees green", when both statements are based on the fact that I drew a green ball.

For example, my argument is that after seeing the green ball, my probability is 0.9, and I shall make all my decisions based on that. Why not update the pre-game plan based on that probability? Because the pre-game plan is not my decision. It is an agreement reached by all participants, a coordination. That coordination is reached by everyone reasoning objectively, which does not accommodate any first-person self-identification like "I". In short, when reasoning from my personal perspective, use "I see green"; when reasoning from an objective perspective, use "someone sees green". My solutions (PBR) to anthropic and related questions are all based on this same supposition of the axiomatic status of the first-person perspective. It gives the same explanation across problems, and one can predict what the theory says about a given problem. Some results are greatly disliked by many, like the nonexistence of self-locating probability and perspective disagreement, but those are clearly the conclusions of PBR, and I am advocating them.

You are arguing that the two interpretations, "I see green" and "anyone sees green", are both valid, and which one to use depends on the specific question. But, to me, what exact logic dictates this assignment is unclear. You argue that since the bets are structured so as not to depend on which exact person gets green, "my decision" shall be based on "anyone sees green". That seems to me a way of simply selecting whichever interpretation does not yield a problematic result: a practice of fitting the theory to the results.

To the example I brought up in the last reply: when you drew a green ball and were told that all participants said yes, you used the probability of 0.9, the rationale being that you are the only decider in that case. It puzzles me in exactly what sense "I am the only decider". Didn't other people also decide to say "yes"? Didn't their "yes" contribute to whether the bet would be taken, the same way as your "yes"? If you are saying I am the only decider because whatever I say determines whether the bet will be taken, how is that different from deriving others' responses by using the assumption that "everyone in my position would make the same decision as I do"? Yet you used the probability of 0.5 ("someone sees green") in that situation. If you are referring to being the only decider in a causal or counterfactual sense, then you are still in the same position as all other green-ball holders. What justifies the change regarding which interpretation, and which probability (0.5 or 0.9), to use?

There is also the case of perspective disagreement in the other post, where you and cousin_it were having a discussion. I, by PBR, concluded there should be a perspective disagreement. You held that there won't be a probability disagreement, because the correct way for Alice to interpret the meeting is "Bob has met Alice in the experiment overall" rather than "Bob has met Alice today". I am not sure of your rationale for picking one interpretation over the other. It seems the correct interpretation is always the one that does not give the problematic outcome. And that, to me, is a practice of avoiding the paradoxes, not a theory that resolves them.

Comment by dadadarren on Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? · 2024-02-01T14:23:55.569Z · LW · GW

I maintain that the memory-erasure and fission problems are similar because I regard the first-person identification as applying equally to both questions. The inherent identifications of both "NOW" and "I" are based on the primitive perspective. I.e., to Alice, today's awakening is not the other day's awakening; she can naturally tell them apart because she is experiencing the one today.

I don't think our difference comes from the non-fissioned person always staying in Room 1 while the fissioned persons are randomly assigned either Room 1 or Room 2. Even if the experiment were changed, so that the non-fissioned person is randomly assigned to one of the two rooms while the fissioned person with the original left body always stays in Room 1 and the one with the original right body always stays in Room 2, my answer wouldn't change.

Our difference still lies in the primitivity of perspective. In the current problem by cousin_it, I would say Alice should not update the probability after meeting Bob, because from her first-person perspective the only thing she can observe is "I see Bob (today)" vs "I don't see Bob (today)", and her probability shall be calculated accordingly. She is not in a vantage point to observe "I see Bob on one of the two days" vs "I don't see Bob on either of the two days", so she should not update that way.

Comment by dadadarren on Primitive Perspectives and Sleeping Beauty · 2024-01-31T14:18:57.769Z · LW · GW

If you use this logic not for the latitude you were born at but for your birth rank among human beings, then you get the Doomsday Argument.

To me the latitude argument is even more problematic, as it involves issues such as linearity. But in any case I am not convinced by this line of reasoning.

P.S. 59°N is really, really high. In any case, if you use that information to make predictions about where humans are generally born latitude-wise, it will be way off.

Comment by dadadarren on Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? · 2024-01-30T20:14:22.197Z · LW · GW

I think this highlights our difference, at least in the numerical sense, in this example. I would say Alex and Bob would disagree (provided Alex is a halfer, which is the correct answer in my opinion). The disagreement is again based on perspective-based self-identification. From Alex's perspective, there is an inherent difference between "today's awakening" and "the other day's awakening" (provided there are actually two awakenings). But to Bob, either of those is "today's awakening"; Alex cannot communicate the inherent difference from her perspective to Bob.

In other words, after waking up during the experiment, the two alternatives are "I see Bob today" or "I do not see Bob today", both with 0.5 probability regardless of the coin toss result.

Comment by dadadarren on The Perspective-based Explanation to the Reflective Inconsistency Paradox · 2024-01-30T19:57:54.758Z · LW · GW

We both argue that the two probabilities, 0.5 and 0.9, are valid. The difference is in how we justify them. I have held that "the probability of the mostly-green urn" denotes different concepts from different perspectives: from a participant's first-person perspective, the probability is 0.9; from an objective outsider's perspective, even after I drew a green ball, it is 0.5. The difference comes from the fact that the inherent self-identification "I" is meaningful only to the first person. This is the same reason behind my argument for perspective disagreement in previous posts.

I purport that the two probabilities should be used for questions regarding their respective perspectives: for my decisions maximizing my payoff, use 0.9; for a coordination strategy prescribing the actions of all participants with the goal of maximizing the overall payoff, use 0.5. In fact, the paradox started with the coordination strategy from an objective viewpoint when talking about the pre-game plan, but it later switched to the personal strategy using 0.9.

I understand you do not endorse this perspective-based reasoning. So what is the logical foundation of this duality of probabilities then? If you say they are based on two mathematical models that are both valid, then after you drew a green ball, if someone asks about your probability of the mostly-green urn, what is your answer? 0.5 AND 0.9? It depends?

Furthermore, using whatever probability best matches the betting scheme is, to me, a convenient way of avoiding undesirable answers without committing to a firm methodology. It is akin to endorsing SSA or SIA situationally to get the least paradoxical answer for each individual question. But I also understand that from your viewpoint you are following a solid methodology.

If my understanding is correct, you are holding that there is only one goal for the current question (maximizing the overall payoff and maximizing my personal payoff are the same goal) and only one strategy (my personal strategy and the coordination strategy are the same strategy), but that because of the betting setup, the correct probability to use is 0.5, not 0.9. If so, after drawing the green ball and being told all other participants have said yes to the bet, what is the proper answer to maximize your own gain? Which probability would you use then?
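For what it's worth, the first-person figure of 0.9 is plain conditionalization and can be checked by simulation. The sketch below assumes the standard setup as I understand it (a fair coin selects an urn of 18 green and 2 red balls or one of 2 green and 18 red, and each of the 20 participants receives one ball; "I" am an arbitrary participant):

```python
import random

def trial(rng):
    # A fair coin picks the urn: "mostly green" holds 18 green and 2 red
    # balls; "mostly red" holds 2 green and 18 red.
    mostly_green = rng.random() < 0.5
    balls = ["G"] * 18 + ["R"] * 2 if mostly_green else ["G"] * 2 + ["R"] * 18
    rng.shuffle(balls)          # distribute the balls among 20 participants
    return mostly_green, balls[0]  # balls[0] is "my" ball

rng = random.Random(0)
n_green = n_green_and_mostly_green = 0
for _ in range(100_000):
    mostly_green, my_ball = trial(rng)
    if my_ball == "G":
        n_green += 1
        n_green_and_mostly_green += mostly_green

# Relative frequency of the mostly-green urn among trials where I drew green.
print(n_green_and_mostly_green / n_green)  # close to 0.9
```

The disputed question, of course, is not this number but which decisions it should govern.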

Comment by dadadarren on The Perspective-based Explanation to the Reflective Inconsistency Paradox · 2024-01-30T16:00:04.887Z · LW · GW

I am trying to point out the difference between the following two: 

(a) A strategy that prescribes all participants' actions, with the goal of maximizing the overall combined payoff. In the current post I called it the coordination strategy. In contrast to:

(b) A strategy that applies to a single participant's action (mine), with the goal of maximizing my personal payoff. In the current post I called it the personal strategy.

I argue that they are not the same thing: the former should be derived from an impartial observer's perspective, while the latter is based on my first-person perspective. The probabilities are different, 0.5 and 0.9 respectively, because self-specification (indexicals such as "I") is not objectively meaningful. Consequently the corresponding strategies are not the same. The paradox equates the two: for the pre-game plan it used (a), while for the during-the-game decision it used (b) but attempted to confound it with (a) by using an acausal analysis to let my decision prescribe everyone's actions, also capitalizing on the ostensibly convincing intuition that "the best strategy for me is also the best strategy for the whole group, since my payoff is 1/20 of the overall payoff."

Admittedly there is no actual external observer forcing the participants to make the move; however, by committing to coordination the participants are effectively committed to that move. This would be quite obvious if we modified the question a bit: instead of dividing the payoff equally among the 20 participants, say the overall payoff is divided only among the red-ball holders. (We can incentivize coordination by letting the same group of participants play the game repeatedly for a large number of games.) What would the pre-game plan be? It would be the same as in the original setup: everyone says no to the bet. (In fact, if played repeatedly, this setup would pay the same as the original setup for any participant.) After drawing a green ball, however, it would be pretty obvious that my decision does not affect my payoff at all, so saying yes or no doesn't matter. But if I am committed to coordination, I ought to keep saying no. In this setup it is also quite obvious that the pre-game strategy is not derived by letting green-ball holders maximize their personal payoff. So the distinction between (a) and (b) is more intuitive.

If we recognize the difference between the two, then the fact that (b) does not exactly coincide with (a) is not really a disappointment or a problem requiring any explanation. Non-coordinated optimal strategies for each individual don't have to be optimal in terms of the overall payoff (as the coordination strategy would be).

Also, I can see that the question "Has somebody with blonde hair, six-foot-two-inches tall, with a mole on the left cheek, barefoot, wearing a red shirt and blue jeans, with a ring on their left hand, and a bruise on their right thumb received a green ball?" comes from your long-held position of FNC. I am obliged to be forthcoming and say that I don't agree with it. But of course, I am not naive enough to believe either of us would change our minds in this regard.

Comment by dadadarren on Primitive Perspectives and Sleeping Beauty · 2024-01-30T14:56:52.142Z · LW · GW

If one person is created in each room, then there is no probability of "which room I am in", because that is asking "which person I am". To arrive at any probability you need to employ some sort of anthropic assumption.

If 10 persons are randomly assigned (or assigned according to some unknown process), the probability of "which room I am in" exists. No anthropic assumption is needed to answer it.

You can also find the difference using a frequentist model by repeating the experiments. The latter question has a strategy that could maximize "my" personal interest. The former doesn't; it only has a strategy that, if followed by everyone, could maximize the group interest (a coordination strategy).
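The second case can be sketched as an ordinary sampling problem. This minimal simulation assumes ten pre-existing persons uniformly assigned to ten rooms (the uniform assignment is an illustrative assumption; the point survives under any known assignment process); "I" am a fixed person, and the frequency of "I am in Room 1" across repetitions is simply 1/10, no anthropic assumption required:

```python
import random

rng = random.Random(0)
trials = 100_000
in_room_1 = 0
for _ in range(trials):
    # assignment[p] is the room given to person p; rooms 0..9, where
    # room 0 stands in for "Room 1".
    assignment = list(range(10))
    rng.shuffle(assignment)
    if assignment[0] == 0:  # person 0 ("I") landed in Room 1
        in_room_1 += 1

print(in_room_1 / trials)  # close to 1/10
```

In the one-person-per-room creation case there is no analogous fixed "person 0" to track across repetitions, which is exactly the difference being pointed at.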

Comment by dadadarren on The Perspective-based Explanation to the Reflective Inconsistency Paradox · 2024-01-29T14:53:54.913Z · LW · GW

The probability of 0.9 is the correct one to use to derive "my" strategies maximizing "my" personal interest. E.g., if all other participants decide to say yes to the bet, what is your best strategy? Based on the probability of 0.9 you should also say yes; based on the probability of 0.5 you would say no. However, the former will yield you more money. This would be obvious if the experiment were repeated a large number of times.

You astutely pinpointed that the problem with saying yes is that you are paying for the decisions of the idiot versions of you who drew red. This analysis is based on the assumption that your personal decision prescribes the actions of all participants in similar situations (the assumption that Radford Neal first argued against, and with which I agree). But then such a decision is no longer a personal decision; it is a decision for all, and is evaluated by the overall payoff. That is a coordination strategy, which is based on an objective perspective and should use the probability of 0.5.

The problem is set up in a way that makes people confound the two. Say the payoff is not divided among all 20 participants but instead among the people holding red balls. The resulting coordination strategy would still be the same (the motivation for coordination can be that the same group of 20 participants will keep playing for a large number of games). But the distinction between the personal strategy maximizing personal payoff and the coordination strategy maximizing the overall payoff would be obvious: the personal strategy after drawing a green ball is to do whatever you want, because it does not affect you (which is well known when coming up with the pre-game coordination plan), but the coordination strategy would remain the same: saying no to the bet. People would be less likely to mix the two strategies and pose it as an inconsistency paradox in such a setup.

Comment by dadadarren on Primitive Perspectives and Sleeping Beauty · 2024-01-29T14:28:24.904Z · LW · GW

Yep, under PBR, perspective (which agent is the "I") is primitive. I can take it as given, but there is no way to analyze it. In other words, a self-locating probability like "what is the probability that I am L" is undefined.

Comment by dadadarren on The Perspective-based Explanation to the Reflective Inconsistency Paradox · 2024-01-27T14:42:29.807Z · LW · GW

The Sleeping Beauty problem and this paradox are highly similar; I would say they are caused by the same thing, the switching of perspectives. However, there is one important distinction.

For the current paradox, there is an actual sampling process for the balls. Therefore there is no need to assume a reference class for "I". Take who I am, i.e. which person's perspective I am experiencing the world from, as a given; the ball-assigning process treats "I" and the other participants as equals. So there is no need to interpret "I" as a random sample from all 20 participants. You can perform the probability calculations as in a regular probability problem. This means there is no need to make an SIA-like assumption. The question does not depend on how you construe "the evidence ... that I'm in position I".

I think it is pretty uncontroversial that if we take all the betting and money away from the question, we can all agree the probability becomes 0.9 once I receive a green ball. So if I understand correctly, by disagreeing with this probability, you are in the same position as Ape in the Coat: the correct probability depends on the betting scheme. This is consistent with your later statement that "whether to use SSA or SIA ... [is] dependent on the question setup."

My position has always been to never use any anthropic assumption, SSA or SIA or FNC or anything else; they all lead to paradoxes. Instead, take the perspective, or in your words "I am in position I", as primitively given, and reason within that perspective. In the current paradox, that means either reasoning from the perspective of a participant and using the probability of 0.9 to make decisions in your own interest, or alternatively thinking in terms of a coordination strategy by reasoning from an impartial perspective with the probability remaining at 0.5, but never mixing the two.

Comment by dadadarren on The Perspective-based Explanation to the Reflective Inconsistency Paradox · 2024-01-27T14:09:14.533Z · LW · GW

Numerically it is trivial to say the better thing to do (for each bet, for the benefit of all participants) is not to update. The question, of course, is how we justify this. After all, it is pretty uncontroversial that the probability of the mostly-green urn is 0.9 once I receive the randomly assigned ball and it turns out to be green. You can enlist a new type of decision theory such as UDT, or a new type of probability theory which allows two probabilities to both be valid depending on the betting scheme (as Ape in the Coat did). What I am suggesting is to stick with traditional CDT and probability theory, but recognize the difference between the coordination and personal strategies, because they are from different perspectives.

For the BB example you have posted, my long-held position is that there is no way to reason about the probability of "I am a BB", even with the added assumption that for each real me there are 10 BBs appearing in the universe. However, if you really are a BB, then your decision doesn't matter to your personal interest, as you will disappear momentarily. So you can make your personal decision entirely based on the assumption that you are not a BB. Alternatively, and I would say not very realistically, you could assume that the real you and the BB yous care about each other and want to come up with a coordination strategy that will benefit the entire group, with each then faithfully following that strategy without specifically thinking about their own personal strategy. In this example both approaches recommend the same decision of going to the gym.

Comment by dadadarren on Reflective consistency, randomized decisions, and the dangers of unrealistic thought experiments · 2023-12-30T19:28:19.007Z · LW · GW

Late to the party, but I want to say this post is quite on point with its analysis. I just want to add my own reading of the problem, as a supporter of CDT, which has a different focus.

I agree that the assumption that every person would make the same decision as I do is deeply problematic. It may seem intuitive if the others are "copies of me", which is perhaps why this problem was first brought up in an anthropic context. CDT inherently treats the decision maker as an agent apart from his surrounding world, outside the scope of the causal analysis. Assuming "other copies of me" give the same decision as I do puts the entire analysis into a self-referential paradox.

In contrast, the analysis of this post is the way to go. I shall just regard the others as part of the environment; their "decisions" are nothing special, just parameters I should consider as input to the only decision-making in question, my own. This cuts off the feedback loop.

While CDT's treating the decision maker as a separate-from-the-world agent outside the analysis scope is often regarded as its Achilles' heel, I think that is precisely why it is correct. For decision is inherently a first-person concept, where free will resides. If we cannot imagine reasoning from a certain thing's perspective, then whatever that thing outputs is a mere mechanical product; the concept of decision never applies.

I diverge from this post in that instead of ascribing this reflective-inconsistency paradox to the above-mentioned assumption, I think its cause is something deeper. In particular, the F(p) graph shows that IF all the others stick to the plan of p=0.9355, then there is no difference for me between taking and rejecting the bet (so there is no incentive to deviate from the pre-game plan). However, how solid is it to assume that they will stick to the plan? It cannot be said that that is the only rational scenario. And according to the graph, if I think they have deviated from the plan then so must I. Notice that in this analysis there is no forcing of others' decisions to be the same as mine; our strategies could well be different. So the notion that the reflective inconsistency resolves itself by rejecting the assumption that everyone makes the same decision only works in the limited case where we take the alternative assumption that the others all stick to the pre-game plan (or all reject the bet). Even in that limited case, my pre-game strategy was to take the bet with p=0.9355 while the in-game strategy was anything goes (as the choices make no difference to the reward). Sure, there is no incentive to deviate from the pre-game plan, but I am hesitant to call it a perfect resolution of the problem.

Comment by dadadarren on Anthropical Paradoxes are Paradoxes of Probability Theory · 2023-12-29T16:42:39.153Z · LW · GW

Betting and reward arguments like this are deeply problematic in two senses:

  1. The measure of the objective is the combined total reward to everyone in a purported reference class, like the 20 "you"s in the example. Usually the question tries to boost this intuition by saying all of them are copies of "you". However, even if the created persons are vastly different (they don't even all have to be persons; AIs or aliens would work just fine), it does not affect the analysis at all. Since the question is directed at you, and the evidence is your observation, that you awake in a green room, shouldn't the bet and reward be constructed to concern your own personal interest, in order to reflect the correct probability? Why use the alternative objective and measure the combined reward of the entire group? This takes away the first-person element of the question; just as you have said in the post, the bet has nothing to do with "'I' see green."
  2. Since such betting arguments use the combined reward of a supposed reference class instead of self-interest, the detour is completed with an additional assertion in the spirit of "what's best for the group must be best for me." That is typically achieved by some anthropic assumption in the form of seeing "I" as a random sample from the group. Such intuitions run so deep that people use the assumptions without acknowledging them. In trying to explain why "I" am "a person in a green room" yet the two can have different probabilities, you said "The same way a person who visited a randomly sampled room can have different probability estimate than a person who visited a predetermined room." This subtly specifies who "I" am: the person who was created in a green room is treated the same way as someone randomly sampling the rooms and seeing green. However intuitive that might be, it's an unsubstantiated assumption.

These two points combined effectively change an anthropic problem regarding the first-person "I" into a perspective-less, run-of-the-mill probability problem. Yet this conversion is unsubstantiated and, to me, the root of the paradoxes.

Comment by dadadarren on dadadarren's Shortform · 2023-10-22T11:21:09.736Z · LW · GW

The more I think about it, the more certain I am that many unsolved problems, not just anthropics, are due to the deep-rooted habit of view-from-nowhere reasoning. Recognizing perspective as a fundamental part of logic would be the way out.

Problems such as anthropics, the interpretive challenges of quantum mechanics, CDT's problem of non-self-analysis, how agency and free will coexist with physics, Russell's paradox, Gödel's incompleteness theorems, etc.

Maybe I am the man with a hammer looking for nails. Yet deep down I have to be honest with myself and say I don't think that's the case.

Comment by dadadarren on Which Anaesthetic To Choose? · 2023-10-15T00:21:18.717Z · LW · GW

Well, I didn't expect this to be the majority opinion. I guess I was too much in my own head.

But to explain my rationale: the effects of the two drugs differ only during the operation; their end results are identical. So after the operation, barring external records like bank account information, there is no way even to tell which drug I took. Taking external records into consideration, the extra dollar in the bank would certainly be more welcome.

The memory-inhibiting part was supposed to preclude the consideration of the journey. From a post-operation perspective, there is no experience of a "journey" to speak of. Now it's clear to me that people would evaluate it regardless of the memory part.

Comment by dadadarren on Perspective Based Reasoning Could Absolve CDT · 2023-10-09T11:40:49.574Z · LW · GW

Understandable. As much as I firmly believe in my theory, I have to admit I have a hard time making it look convincing. 

Comment by dadadarren on Perspective Based Reasoning Could Absolve CDT · 2023-10-09T11:37:25.768Z · LW · GW

The conflict arises when the self at the perspective center is making the decision but is also being analyzed. With CDT it leads to a self-referential paradox: I'm making the decision (which according to CDT is based on agency and is unpredictable), yet there really is no decision, merely the generating of an output.

Precommitments sidestep this by saying there is no decision at the point being analyzed. They essentially move the decision to a different observer-moment, thus allowing the analyzed point to be taken into account in the decision analysis. In Newcomb, this is like asking, instead of what I should do when facing the two boxes, what kind of machine/brain I should design so that it would perform well in Newcomb experiments.

Comment by dadadarren on Primitive Perspectives and Sleeping Beauty · 2023-09-16T19:23:17.595Z · LW · GW

I didn't "choose" to generalize my position beyond conscious beings; it is an integral part of it. If perspectives are valid only for things that are conscious (however that is defined), then perspective has a prerequisite and is no longer fundamental. It would also give rise to the age-old reference class problem and no longer be a solution to anthropic paradoxes. E.g. are computer simulations conscious? The answer to that would directly determine anthropic problems such as Nick Bostrom's simulation argument. 

Phenomenal consciousness is integral to perspective also in the sense that you know your perspective, i.e. which one is the self, precisely because the subjective experience is most immediate to it. So when a subject wakes up in the fission experiment, they know which person "I" refers to even though they cannot point that person out on a map. 

My argument is in direct conflict with physicalism. And it places phenomenal consciousness and subjective experience outside the field of physics. 

Comment by dadadarren on Primitive Perspectives and Sleeping Beauty · 2023-09-11T14:27:34.662Z · LW · GW

Consciousness has many contending definitions. E.g. if you take the view that consciousness is identified by physical complexity and the ability to process data, then it doesn't have anything to do with perspective. I'm endorsing phenomenal consciousness, as in the hard problem of consciousness: we can describe brain functions purely physically, yet that does not resolve why they are accompanied by subjective feelings. And this "feeling" is entirely first-person; I don't know your feelings, because otherwise I would be you instead of me. "What it means to be a bat is to know what it is like to be a bat."

In short, by suggesting that perspectives are irreducible and primitive, my position is incompatible with the physicalist worldview, both in its definition of consciousness and in the nature of perspective. Since this might be regarded as a weakness by many, I feel obliged to point it out. 

Comment by dadadarren on Conservation of Expected Evidence and Random Sampling in Anthropics · 2023-09-05T20:19:11.469Z · LW · GW

I do think SIA and SSA are making extraordinary claims and the burden of proof is on them. I have been proposing for several years that treating the self as a random sample is wrong. That is not the problem I have with this argument. What I disagree with is that your argument depends on phrases and concepts such as "'your' existence" and "who 'you' are" without even attempting to define which one this 'you' refers to. My position is that it refers to the self, based on the first-person perspective, which is fundamental, a primitive concept. So it doesn't require any definition as long as one reasons from the perspective of an experiment subject. But your argument holds that perspective is not fundamental, so treating 'you', which is the first-person 'I' to the reader, as primitive is not possible. Then how do you define this critical concept? And why is your definition better than SIA's or SSA's? You also have a burden of proof, because without a clear definition, your argument's conclusion can jump back and forth in limbo. This is illustrated by the example of the modified BJE above. You said:

If it was just two people created the first always with a blue jacket and the second always without and, once again, you were not necessary meant to be the first, then this counts as random sampling and BJE analysis stands.

Isn't this treating you as a random sample when there is no actual sampling process, i.e. the position you are arguing against?  

And how is this experiment different from your FBJE? In other words, which process enables FBJE to guarantee that 'you' will be the person in the blue jacket regardless of the coin toss? Why is there no way that you could be the person whose jacket depends on the toss? Some fundamental stipulation about what 'you' would be is being used here. 

BTW, the FBJE is not comparable to the sleeping beauty problem. In FBJE, by stipulation, you can outright say your blue jacket is not due to the coin landing Tails. But Beauty can't outright say it is Monday. 

Comment by dadadarren on Conservation of Expected Evidence and Random Sampling in Anthropics · 2023-09-05T15:45:09.834Z · LW · GW

Something's not adding up. You said that the anthropic paradox is not about first-person perspective or consciousness. But later:

But in ISB there are no iterations in which you do not exist. The number of outcomes in which you are created equals the total number of iterations.

The most immediate question is the definition of "you" in this logic. Why can't thirders define "you" as a potentially existing person? In that case the statement would be false. If you define it as an actually existing person, then which one? It seems to me you are using the word "you" to let readers imagine themselves being a subject created in ISB, so it would point to the intuitively understood self. But that definition uses the first-person perspective as a fundamental concept. And then later:

But Heads outcome in Incubator Sleeping Beauty is not. You are not randomly selected among two immaterial souls to be instantiated. You are a sample of one. And as there is no random choice happening, you are not twice as likely to exist when the coin is Tails and there is no new information you get when you are created.

So who you are (who the first person is) is fundamental. As is its existence. 

From past experience, I know this is not the easiest topic to discuss. So let's use a concrete example:

In BJE, suppose that for Heads, instead of creating 2 people and then randomly sampling one of them to get a blue jacket, a person is created in the blue jacket, and then sometime later another person is created without one, so no random sampling takes place. Is your analysis going to change? Or answer these questions: 1. Before looking down to check, what is the probability that you are wearing a blue jacket? 2. After seeing the blue jacket, what is the probability that the coin landed Heads?

Comment by dadadarren on Which Questions Are Anthropic Questions? · 2023-09-05T12:34:16.245Z · LW · GW

I don't feel there is enough common ground for effective discussion. This is the first time I have seen the position that the sleeping beauty paradox disappears when the Heads awakening is sampled between Monday and Tuesday. 

Comment by dadadarren on Which Questions Are Anthropic Questions? · 2023-09-02T18:12:27.970Z · LW · GW

Can you point out why Tails&Monday and Tails&Tuesday are causally connected while the 100 people created by the incubator are not, being independent outcomes instead?

Nothing is stopping us to perceive the situations as different possible worlds, not different places in the same world.

All this post is trying to argue is that statements like this require some justification. Even if the justification is a mere stipulation, it should at least be recognized as an additional assumption. Given that anthropic problems often lead to controversial paradoxes, it is prudent to examine every assumption we make in solving them. 

Comment by dadadarren on Which Questions Are Anthropic Questions? · 2023-09-02T17:52:24.349Z · LW · GW

If we modify the original sleeping beauty problem so that, if Heads, you will be awakened on one randomly sampled day (either Monday or Tuesday), would you change your answer to 1/3?

Comment by dadadarren on Which Questions Are Anthropic Questions? · 2023-09-01T12:36:55.739Z · LW · GW

Anthropic paradoxes happen only when we use events representing different self-locations in the same possible world. If the paradoxes were just problems of probability theory, why this limited scope? 

I do consider anthropic problems, in one sense or another, to be metaphysical. And I know there are people who disagree with this. But wouldn't stipulating anthropic paradoxes are solely probability problems also require arguments to justify? Apart from "a rule of thumb"?

Comment by dadadarren on Which Questions Are Anthropic Questions? · 2023-09-01T12:13:28.124Z · LW · GW

Like Question 1 and traditional probability problems, Question 3's events reflect different possible worlds, different outcomes of the room-assigning experiment.  Question 2's supposed events reflect different locations of the self in the same possible world, i.e. different centred worlds. 

Controversial anthropic probability problems occur only when the latter type is used. So there is good reason to think this distinction is significant. 

Comment by dadadarren on Learning as you play: anthropic shadow in deadly games · 2023-08-16T19:39:45.110Z · LW · GW

It seems the earlier posts and your post have defined anthropic shadow differently, in subtle but important ways. The earlier posts by Christopher and Jessica argued AS is invalid: there should be updates given that I survived. Your post argues AS is valid: there are games where no new information gained while playing can change your strategy (no useful updates). The former focuses on updates; the latter focuses on strategy. These two positions are not mutually exclusive. 

Personally, the concept of a "useful update" seems situational. For example, say someone has a prior that leads him to conclude the optimal strategy is not to play Chinese Roulette. However, he is forced to play several rounds regardless of what he thinks. After surviving those rounds (say EEEEE), it might very well be that he updates his probability enough to change his strategy from no-play to play. That would be a useful update. And this "forced-to-play" kind of situation is quite relevant to existential risks, which anthropic discussions tend to focus on. 

Comment by dadadarren on Learning as you play: anthropic shadow in deadly games · 2023-08-16T12:41:21.291Z · LW · GW

To my understanding, anthropic shadow refers to the absurd logic in Leslie's Firing Squad: "Of course I have survived the firing squad; that is the only way I can make this observation. Nothing surprising here." Or reasoning such as "I have played Russian roulette 1000 times, but I cannot increase my belief that there is actually no bullet in the gun, because surviving is the only observation I can make."  

In the Chinese Roulette example, it is correct that the optimal strategy for the first round is also optimal for any following round. It is also correct that if you decide to play the first round, then you will keep playing until kicked out, i.e. there is no way to adjust your strategy. But that doesn't justify saying there is no probability update: the credences behind each subsequent decision, even though all of them agree on continuing to play, can be different. (And they should be different.) It seems absurd to say I would not be more confident to keep going after 100 empty shots. 

In short, changing strategy implies there is an update, not changing strategy doesn't imply there is no update. 
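To illustrate with a hedged sketch (the prior and the firing chance below are hypothetical, not taken from the original post): surviving empty shots does update the probability that the gun is empty, even if the keep-playing strategy never changes.

```python
# Hypothetical numbers: prior 50% that the gun is empty, and a loaded
# gun fires with chance 1/6 on each pull.
def p_empty_given_survivals(n, prior_empty=0.5, p_fire=1/6):
    """P(no bullet | survived n pulls), by Bayes' rule."""
    p_e = prior_empty                          # an empty gun never fires
    p_l = (1 - prior_empty) * (1 - p_fire) ** n
    return p_e / (p_e + p_l)

print(round(p_empty_given_survivals(5), 3))    # 0.713 after 5 empty shots
print(p_empty_given_survivals(100) > 0.999)    # True after 100 empty shots
```

The posterior keeps climbing with every empty shot, which is the sense in which "there is an update" even while the decision ("keep playing") is unchanged.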

Comment by dadadarren on Anthropical Motte and Bailey in two versions of Sleeping Beauty · 2023-08-04T13:29:35.777Z · LW · GW

That's not it. In your simulation you give equal chances to Heads and Tails, and then subdivide Tails into two equiprobable outcomes T1 and T2 while keeping all the probability of Heads as H1. It's essentially a simulation based on SSA. Thirders would say that is the wrong model because it only considers cases where the room is occupied: H2 never appears in your model. Thirders suggest there is new info upon waking up in the experiment because waking up rejects H2. So the simulation should divide both Heads and Tails into the equiprobable outcomes H1, H2, T1, T2. Waking up rejects H2, which pushes P(T) to 2/3. And then learning it is room 1 pushes it back down to 1/2. 
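A minimal Monte Carlo sketch of the thirder model described here (assuming the centred outcomes H1, H2, T1, T2 are equiprobable, with room 2 left empty on Heads):

```python
import random

random.seed(0)

# Centred outcomes H1, H2, T1, T2 are equiprobable: a coin side plus a
# room that "you" might occupy.  On Heads only room 1 is occupied.
n_wake = n_tails_wake = n_room1 = n_tails_room1 = 0
for _ in range(100_000):
    coin = random.choice("HT")
    room = random.choice([1, 2])
    occupied = (coin == "T") or (room == 1)   # H2 is the empty room
    if occupied:                              # you wake up: H2 rejected
        n_wake += 1
        if coin == "T":
            n_tails_wake += 1
        if room == 1:                         # you then learn it's room 1
            n_room1 += 1
            if coin == "T":
                n_tails_room1 += 1

print(n_tails_wake / n_wake)      # P(T | awake)  ≈ 2/3
print(n_tails_room1 / n_room1)    # P(T | room 1) ≈ 1/2
```

This just makes the two conditioning steps explicit: discarding H2 lifts Tails to 2/3, and then conditioning on room 1 brings it back to 1/2.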

Comment by dadadarren on Anthropical Motte and Bailey in two versions of Sleeping Beauty · 2023-08-03T14:04:33.752Z · LW · GW

To thirders, your simulation is incomplete. It should first include randomly choosing a room and finding it occupied. That will push the probability of Tails to 2/3. Knowing it is room 1 will push it back to 1/2. 

Comment by dadadarren on Anthropical Motte and Bailey in two versions of Sleeping Beauty · 2023-08-02T22:12:16.049Z · LW · GW

One thing that should be noted: while Adam's argument is influential, especially since it was (to my knowledge) the first to point out that halfers must either reject Bayesian updating upon learning it is Monday or accept that a fair coin yet to be tossed has a probability other than 1/2, thirders in general disagree with it in some crucial ways. Most notably, Adam argued that there is no new information when waking up in the experiment. In contrast, most thirders, endorsing some version of SIA, would say waking up in the experiment is evidence favouring Tails, which has more awakenings. Therefore targeting Adam's argument specifically is not very effective. 

In your incubator experiment, thirders, in general, would find no problem: waking up, evidence favouring tails P(T)=2/3. Finding it is room 1: evidence favouring Heads, P(T) decreased to 1/2.

Here is a model that might interest halfers. You participate in this experiment: the experimenter tosses a fair coin. If Heads, nothing happens and you sleep through the night uneventfully. If Tails, they split you down the middle into two halves, completing each half by cloning the missing part onto it. The procedure is accurate enough that memory is preserved in both copies. Imagine yourself waking up the next morning: you can't tell whether anything happened to you, whether either of your halves is the same physical piece as yesterday, or whether there is another physical copy in another room. But regardless, you can participate in the same experiment again. The same thing happens when you wake up the next day, and so on. As this continues, you will count about an equal number of Heads and Tails in the experiments you have subjective experiences of.

Counting subjective experience does not necessarily lead to Thirderism. 
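A hedged simulation of the splitting model above (the run counts are illustrative): following any single subjective thread gives ~50/50 Heads and Tails, while pooling the records of every copy over-weights Tails, which is the thirder-style count.

```python
import random

random.seed(0)

# Part 1: follow one subjective thread.  Each night the coin is tossed;
# on Tails "you" are split, but whichever copy wakes up remembers exactly
# one new fair toss per night, so the thread's record stays ~50/50.
thread = [random.choice("HT") for _ in range(100_000)]
print(thread.count("T") / len(thread))            # ≈ 0.5

# Part 2: pool the records of every copy instead.  Tails-heavy histories
# leave more copies behind, so copy-counting over-weights Tails.
copies = []
for _ in range(1000):                 # 1000 independent runs of the game
    histories = [[]]
    for _ in range(10):               # 10 nights per run
        new = []
        for h in histories:
            toss = random.choice("HT")
            h.append(toss)
            new.append(h)
            if toss == "T":
                new.append(list(h))   # split: one extra identical copy
        histories = new
    copies.extend(histories)

tails = sum(h.count("T") for h in copies)
total = sum(len(h) for h in copies)
print(tails / total)                  # ≈ 2/3: the copy-weighted count
```

The contrast between the two print-outs is the point of the comment: whether counting subjective experience supports thirding depends entirely on whether you count threads or copy-moments.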

Comment by dadadarren on SSA rejects anthropic shadow, too · 2023-07-30T12:40:51.268Z · LW · GW

I would also point out that FNC is not strictly a view-from-nowhere theory. The probability updates it proposes are still based on an implicit assumption of self-sampling. 

Comment by dadadarren on SSA rejects anthropic shadow, too · 2023-07-30T12:30:04.689Z · LW · GW

I really don't like the pragmatic argument against the simulation hypothesis. It demonstrates a common theme in anthropics which, IMO, is misleading the majority of discussions. By saying that pre-simulation ancestors have an impact on how the singularity plays out, and that we therefore ought to make decisions as if we are real pre-simulation people, it subtly shifts the objective of our decisions. Instead of the default objective of maximizing reward to ourselves, doing what's best for us in our world, it changes the objective to achieving a certain state of the universe concerning all the worlds, real and simulated. 

These two objectives do not necessarily coincide. They may even demand conflicting decisions. Yet it is very common for people to argue that self-locating uncertainty ought to be treated a certain way because it would result in rational decisions with the latter objective. 

Comment by dadadarren on SSA rejects anthropic shadow, too · 2023-07-29T14:54:49.116Z · LW · GW

Exactly this. The problem with the current anthropic schools of thought is using this view-from-nowhere while simultaneously using the concept of "self" as a meaningful way of specifying a particular observer. It effectively jumps back and forth between the god's eye and first-person views with arbitrary assumptions to facilitate such transitions (e.g. treating the self as the random sample of a certain process carried out from the god's eye view). Treating the self as a given starting point and then reasoning about the world would be the way to dispel anthropic controversies. 

Comment by dadadarren on Why am I Me? · 2023-07-24T16:24:07.864Z · LW · GW

Let's take the AI driving problem in your paper as an example. The better strategy is regarded as the one that gives the better overall reward across all drivers. Whether the rewards of the two instances of a bad driver should count cumulatively or just once is what divides halfers and thirders. Once that is determined, the optimal decision can be calculated from the relative fractions of good/bad drivers/instances. It doesn't involve taking the AI's perspective in a particular instance and deciding the best action for that particular instance, which would require self-locating probability. The "right decision" is justified by averaging over all drivers/instances, which does not depend on the particularity of self and now. 

Self-locating probability would be useful for decision-making if the decision were evaluated by its effect on the self, rather than the collective effect on a reference class. But no rational strategy exists for that goal.

Comment by dadadarren on Why am I Me? · 2023-06-29T20:52:52.006Z · LW · GW
  1. If you were born a month earlier as a preemie instead of full-term, it can quite convincingly be said you are still the same person. But if you were born a year earlier, would you still be the same person you are now? There would obviously be substantial physical differences: a different sperm and egg, maybe a different gender. If you were one of the first few human beings born, there would be few similarities between the physical person that is you in that case and the physical person you are now. So the birth-rank discussion is not about whether the physical person you regard as yourself is born slightly earlier or later, but about which one, among all the people in the entire human history, is you, i.e. from which one of those persons' perspectives do you experience the world? 
  2. The anthropic problem is not about possible worlds but about centered worlds. Different events in anthropic problems can correspond to the exact same possible world while differing in which perspective you experience it from. This circles back to point 1, and to the decoupling between the first-person "I" and the particular physical person.
Comment by dadadarren on Why am I Me? · 2023-06-29T20:37:24.257Z · LW · GW

When you say the time of your birth is not special, you are already trying to judge it objectively. For you personally, the moment of your birth is special. And more relevantly to the DA, from a first-person perspective, the moment "now" is special. 

  1. From an objective viewpoint, discussing a specific observer or a specific moment requires some explanation, some process pointing to it, e.g. a sampling process. Otherwise, it fails to be objective by inherently focusing on someone/sometime.
  2. From a first-person perspective, discussions based on "I" and "now" don't require such an explanation. They are inherently understandable. The future is just the moments after "now". Its prediction ought to be based on my knowledge of the present and past. 

What the doomsday argument is saying is that the fact "I am this person" (living now) shall be treated the same way as if someone from the objective viewpoint in 1 performed a random sampling and found me (now). The two cases are supposed to be logically equivalent, so the two viewpoints can say the same thing. I'm saying let's not make that assumption. And in this case, the objective viewpoint cannot say the same thing as the first-person perspective. So we can't switch perspectives here. 

Comment by dadadarren on Why am I Me? · 2023-06-28T22:30:37.672Z · LW · GW

I didn't explicitly claim so. But it involves reasoning from a perspective that is impartial to any moment. This independence is manifested in its core assumption: that one should regard oneself as randomly selected from all observers in one's reference class, from past, present and future.

Comment by dadadarren on Why am I Me? · 2023-06-28T22:24:20.634Z · LW · GW

if you get a reward for guessing if your number is >5 correctly, then you should guess that your number is >5 every time.

I am a little unsure about your meaning here. Say you get a reward for guessing if your number is <5 correctly, then would you also guess your number is <5 each time? 

I'm guessing that is not what you mean, but instead, you are thinking as the experiment is repeated more and more the relative frequency of you finding your own number >5 would approach 95%. What I am saying is this belief requires an assumption about treating the "I" as a random sample. Whereas for the non-anthropic problem, it doesn't. 

Comment by dadadarren on Why am I Me? · 2023-06-28T16:27:09.673Z · LW · GW

For the non-anthropic problem, why take the detour of asking a different person each toss? You can personally take it 100 times, and since it's a fair die, around 95 of those tosses would land >5. Obviously, guessing yes is the best strategy for maximizing your personal interest. There is no assuming the "I" is a random sample, and no forced transcoding. 
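A quick sketch of that frequency claim (assuming, as the 95-out-of-100 figure suggests, a fair 100-sided die where "your number" is simply the face shown on each personal toss):

```python
import random

random.seed(0)

# Take the die 100 times yourself: a fair 100-sided die lands >5
# whenever it shows 6..100, i.e. with probability 0.95.
rolls = [random.randint(1, 100) for _ in range(100)]
print(sum(r > 5 for r in rolls))   # around 95 of the 100 tosses

# The long-run relative frequency needs no self-sampling assumption:
freq = sum(random.randint(1, 100) > 5 for _ in range(100_000)) / 100_000
print(freq)                        # ≈ 0.95
```

The justification here is ordinary frequency over repeated personal tosses, which is exactly what is missing in the cloning version below.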

Let me construct a repeatable anthropic problem. Suppose tonight during your sleep you will be accurately cloned, with memory preserved. Waking up the next morning, you may find yourself to be the original or one of the newly created clones. Let's label the original No.1 and the 99 new clones No.2 to No.100, in the chronological order of their creation. Whether you are old or new, you can repeat this experiment. Say you take the experiment repeatedly: wake up, fall asleep, and let the cloning happen each time. Every day you wake up, you find your own number. If you do this 100 times, would you say you ought to find your number >5 about 95 times?

My argument says there is no way to say that. Doing so would require assumptions to the effect of your soul having an equal chance of embodying each physical copy, i.e. "I" am a random sample among the group. 

For the non-anthropic problem, you can use the 100-people version as a justification. Because among those people the die tosser choosing you to answer a question is an actual sampling process. It is reasonable to think in this process you are treated the same way as everyone. E.g. the experiment didn't specifically sample you only for a certain number. But there is no sampling process determining which person you are in the anthropic version. Let alone assume the process is treating you indifferently among all souls or treating each physical body indifferently in your embodiment process. 

Also, that people who believe the Doomsday Argument objectively perform better as a group in your thought experiment is not a particularly strong point. Thirders have likewise constructed many thought experiments where supporters of the Doomsday Argument (halfers) objectively perform worse as a group. But that is not my argument. I'm saying the collective performance of a group one belongs to is not a direct substitute for self-interest. 

Comment by dadadarren on The First-Person Perspective Is Not A Random Sample · 2023-06-28T14:22:42.068Z · LW · GW

Thank you for the kind words. I understand the stance on self-locating probability; that's the part where I get the most disagreement. 

To me the difference is that for the unfair coin, you can treat the reference class as all tosses of unfair coins with unknown biases. Then the symmetry between Heads and Tails holds, and you can say that among such tosses the relative frequency would be 50%. But for the self-locating probabilities in the fission problem, there really is nothing pointing to any number. That is, unless we take the average of all agents and discard the "self", which requires taking the immaterial viewpoint and transcoding "I" by some assumption.

And remember, if you validate self-locating probability in anthropics, then the paradoxical conclusions are only a Bayesian update away. 

Comment by dadadarren on Why am I Me? · 2023-06-28T13:59:12.822Z · LW · GW

In anthropic questions, probability predictions about ourselves (self-locating probabilities) lead to paradoxes. At the same time, they have no operational value, e.g. for decision-making. In a practical sense, we really shouldn't make such probabilistic predictions. In this post I'm trying to explain the theoretical reason against them. 

Comment by dadadarren on Why am I Me? · 2023-06-28T13:35:43.486Z · LW · GW

Consciousness is a property of the first person: e.g. to me, I am conscious, but I inherently can't know that you are. Whether or not something is conscious depends on whether you think from that thing's perspective. So there is no typical or atypical conscious being: from my perspective I am "the" conscious being; if I reason from something else's perspective, then that thing is "the" conscious being instead. 

Our usual notion of being a typical conscious being arises because we are more used to thinking from the perspectives of things similar to us. E.g. we are more apt to think from the perspective of another person than of a cat, and from the perspective of a cat than of a chair. In other words, we tend to ascribe the property of consciousness to things more like ourselves, rather than the other way around: that we are typical in some sense. 

The part where I know I'm conscious while not knowing you are is an assertion. It is not based on reasoning or logic but simply on how it feels. The rest are arguments that depend on said assertion. 


I thought the reply was addressed to me. But nonetheless, it's a good opportunity to delineate and inspect my own argument, so I'm leaving the comment here.

Comment by dadadarren on Why am I Me? · 2023-06-26T17:30:25.469Z · LW · GW

This rewrite is still perspective-dependent, as it involves the concept of "now" to define who has "previously come into existence", i.e. it is different for the current generation vs. people in the Axial Age. Whereas the Doomsday Argument uses a detached viewpoint that is time-indifferent. So the problem remains. 

Comment by dadadarren on Why am I Me? · 2023-06-26T13:58:34.265Z · LW · GW

I have actually written about this before. In short, there is no rational answer to Omega's question. To answer Omega, I can only look at the past and present situation and try to predict the future as best I can. There is no rational way to incorporate my birth rank into the answer. 

The question is about "me" specifically, and my goal is to maximize my chance of getting a good afterlife. In contrast, the argument you mentioned judges an answer's merit by evaluating the collective outcome of all humans: "If everyone guesses this way then 95% of them would be correct...". But if everyone is making the same decision, and the objective is the collective outcome of the whole group, then the individual "I" plays no part in it. To assert that the answer based on the collective outcome is also the best answer for "me" requires additional assumptions, e.g. considering myself a random sample from all humans. That is why you are right in saying "If you accept that it's better to say yes here, then you've basically accepted the doomsday argument."

In this post I have used a repeatable experiment to demonstrate this. And the top comment by benjamincosman and my subsequent replies might be relevant. 

Comment by dadadarren on An Intro to Anthropic Reasoning using the 'Boy or Girl Paradox' as a toy example · 2023-06-22T13:25:23.257Z · LW · GW

Late to the party as usual. But I appreciate considering anthropic reasoning with the Boy or Girl paradox in mind. In fact, I have used it in the past, mostly as an argument against Full Non-indexical Conditioning. The Boy or Girl paradox highlights the importance of the sampling process: a factually correct statement alone does not justify a particular way of updating probability; at least in some cases, the process by which that statement is obtained is also essential. And interpreting the perspective-determined "I" as the outcome of some kind of sampling process is the crux of anthropic paradoxes. 

I see that Gunnar_Zarncke has linked my position on this problem, much appreciated. 

Comment by dadadarren on We need a theory of anthropic measure binding · 2022-11-28T15:08:52.463Z · LW · GW

The more I think about anthropics the more I realize there is no rational theory for anthropic binding. For the question "what is the probability that I am the heavy brain?" there really isn't a rational answer. 

Comment by dadadarren on Quantum Suicide and Aumann's Agreement Theorem · 2022-11-03T15:56:11.790Z · LW · GW

This experimental outcome will not produce a disagreement between Alice and Bob. As long as they are following the same anthropic logic. 

When saying Bob's chance of survival is 100% according to MWI,  the statement is made from a god's eye view discussing all post-experiment worlds: Bob will for sure survive: in one/some of the branches. 

By the same logic, from the same god's eye view, we can say, Alice will meet Bob for sure: in one/some of the branches, if the MWI is correct. 

By saying Alice shall see Bob with a 0.1% chance no matter if MWI is correct, you are talking about the specific Alice's first-person perspective, which is a self-locating probability according to MWI. As in "what is the probability I am the Alice who's in the branch where Bob survives?". 

By taking the specific subject's perspective, Bob's chance of survival is also 0.1% according to MWI. As in "what is the probability that I am actually in the branches where Bob survives?" 

As long as their reasonings are held at the same level, their answers would be the same. 

The real kicker is whether or not they should actually increase their confidence in MWI after the experiment ends (especially in the case where Bob survives). The popular anthropic camps such as SIA seem to say yes. But that would mean any quantum event, no matter the outcome, would be evidence favouring MWI. So an armchair philosopher could say with categorical confidence that MWI is correct. (This is essentially the same problem as Nick Bostrom's Presumptuous Philosopher, but in the quantum context.) So SIA supporters and thirders have been trying to argue that their positions do not necessarily lead to such an update (which they call the naive confirmation of the MWI). Whether or not that defence is successful is up for debate. For more information, I recommend the papers by Darren Bradley and Alastair Wilson. 

On the other hand, if you think finding oneself existing is a logical truth, and thus has 100% probability, then it is possible to produce disagreement against Aumann's Agreement Theorem. And the disagreement is valid and can be logically explained; I have discussed it here. I think this is the correct anthropic reasoning. However, this idea does not recognize self-locating probability and is thus fundamentally incompatible with the MWI. Therefore, if Alice and Bob both favour this type of anthropic reasoning, they would still have the same confidence in the validity of MWI: 0%. 

Comment by dadadarren on How does anthropic reasoning and illusionism/eliminitivism interact? · 2022-10-07T20:26:56.196Z · LW · GW

Try this for practice: reasoning purely objectively and physically, can you recreate anthropic paradoxes such as the Sleeping Beauty Problem?

That means without resorting to any particular first-person perspective, without using words such as "I", "now", or "here", and without putting them in a unique logical position.