Comments
I think this discussion is focusing on how others would behave toward me, and deriving what ought to be regarded as my future self from there. That is certainly a valid discussion to be had. However, my post is talking about a different (though related) topic.
For example, suppose that I, for whatever crazy reason, think that the me of tomorrow, the one with (largely) the same physical body and no tricks played on memory whatsoever, is not my future self. Then I might do a bunch of irresponsible things that would lead to others' dislike of or hostility toward me, which could eventually lead to my demise. But so what? If I regard him as a different person, then to hell with him. The current me wouldn't even care. So being detrimental to that future person would not compel the current me to regard him as my future self.
Luckily we do not behave that way. Everyone, the rational ones at least, considers the person with the same physical body and the memories of the current self to be themself in the future. That is the survival instinct; that is the consensus.
But that consensus is about an idiosyncratic situation: one where memory (experience) and the physical body are bound together. Take that away, and we no longer have a clear, unequivocal basis from which to start a logical discussion. Someone whose basis is the survival of the same physical body would not step into the teletransporter, even if it is greatly convenient and would benefit the one who steps out of it. Someone else could start from a different basis: they may believe that only the pattern matters, so mind uploading into a silicon machine to make easy copies, at the cost of adversely affecting the carbon body, would be welcomed. Neither of these positions can be rebutted by a cost/benefit analysis of some future minds, because they may or may not care about those minds, at different levels, in the first place.
Sure, logic is not entirely irrelevant. It comes into play after you pick the basis of your decision. But values, rather than logic, largely determine the answer to the question.
If one regards physics as a detached description of the world, a non-interacting yet apt depiction of objective reality (assuming that exists and is attainable), then yes, there is no distinct "me". Any subjective experience ought to be explained by physical processes, such that everyone's "MEness" must ultimately be reduced to the physical body.
However, my entire position stems from a different logical starting point. It starts with "me". It is an undeniable and fundamental fact that I am this particular thing, which I later refer to as a human being called Dadadarren. (Again, I assume the same goes for everyone.) Everything I know about the world comes through that thing's interaction with its environment, which leads to accessible subjective experience. Even physics is learned in such a way, as is the conception that other things could have perspectives different from my own. That I am not interacting with the world as someone or something else, from those things' perspectives, is just a simple realization after that.
This way, physics is not taken as the detached fundamental description of objective reality. The description has to originate from a given thing's perspective, working from its interaction with the environment. That given perspective could be mine, could be Elon's, could be a thermometer's or an electron's. We strive for concepts and formulas that work from a wide range of perspectives. That's what physical objectivity should mean.
So it follows that physics cannot explain why I am Dadadarren and not Elon, because perspective is prior to it. This makes far more sense to me personally: the physical knowledge about the two human beings doesn't even touch on why I am Dadadarren and not Elon (and that was the purpose of the thought experiment). At least it is better than the alternatives: that there is no ME, or that I am Elon just as I am me, in some convoluted sense such as open individualism.
So from where I stand, it is physicalism that requires justification.
Please do not take this as an insult. Though I do not intend to continue this discussion further, I feel obliged to say that I strongly disagree that we hold the same position in substance and only disagree in semantics. Our positions are different on a fundamental level.
The description of "I" you just had is what I earlier referred to as the physical person, which is one of the two possible meanings. For the Doomsday argument, it also used the second meaning: the nonphysical reference to the first-person perspective. I.E. the uniform prior distribution DA proposed, which is integral to the controversial Bayesian update, is not suggesting that a particular physical person can be born earlier than all human beings or later than all of them due to variations in its gestation period. In its convoluted way thanks to the equivocation, it is effectively saying among all the people in the human beings' entire history, "I" could be anyone of them. i.e. "The fact that I am Ape in the Coat living in the earlier 21 century (with a birth rank around 100 billion), rather than someone living in the far future with a birth rank around 500 billion, is evidence to believe that maybe there aren't that many human beings in total". Notice the "I" here is not equivalent to a particular physical person anymore but a reference to the first-person perspective. This claim is what gives the DA a soul-incarnation flavour.
Therefore, if we take the combination of the first-person perspective and the physical person as a given and deny theorizing about alternatives, there won't be a Doomsday argument. "I am this particular physical person, period (be it Ape in the Coat in your case or Dadadarren in mine). There's no rational way of reasoning otherwise." is what ends the DA. There really is no further need to inquire into the particular physical person's birth-rank variations due to pregnancy complications. And as you said, this won't fall into the trap of soul incarnation.
And that is also my long-held position: there is no rational way of theorizing about "which person I could be". Regard the first-person perspective as primitively given, "I am this physical human being", and be done with it; avoid the temptation of theorizing about what the first person could be, as SSA and SIA do. Then there won't be any paradoxes.
The point of defining "me" vigorously is not about how much upstream or physically specific we ought to be, but rather when conducting discussions in the anthropic field, we ought to recognize words such as "me" or "now" are used equivocally in two different senses: 1, the specific physical person, i.e. the particular human being born to the specific parents etc. and 2, just a reference to the first person of any given perspective. Without distinguishing which meaning in particular is used in an argument, there is room for confounding the discussion, I feel most of our discourse here unproductive due to this confusion.
As long as we are not talking about me being born to different parents but simply having a different birth rank (I'm born a bit earlier, while someone else is born a bit later, for example), then no souls are required.
This is interesting. I suppose being born to the same parents is just an example you used. E.g., are you a sample of all your siblings? And are the other potential children your parents could have had, if a different sperm had fertilized a different egg, still you? In those cases there would still be the same problem of soul incarnation.
So my understanding of your position is that there is no problem as long as you consider yourself to be the same physical person, i.e. the same parents, the same sperm and egg. The variation in birth rank is due to events during pregnancy that make the particular physical person be born slightly earlier or later. And this is the correct way to think about your birth rank.
If that's your position, then wouldn't your argument against regarding oneself as a random sample among all human beings (past, present, and future) ultimately be: "Because I am this particular physical human being"? And that there is no sense in discussing alternatives like "I am a different physical human being"?
While I agree with the notion that we cannot regard ourselves as random samples from all human beings past, present, and future, I find the discussion wanting in rigorously defining the reference of "us", or "me", or by extension "my parents". Without doing that there is always logical wiggle room for arriving at an ad hoc conclusion that does not give paradoxical results. E.g., while discussing the Sleeping Beauty problem, you suggested that "today" could mean any day, then attempted to derive the probability of "today is Monday" from there. That just doesn't sit comfortably with me.
Similarly, in this post, while discussing the possibility of "my birth rank" you focused on the history of my family tree, arriving at the conclusion that I cannot be born much earlier or later than I really was (realistically speaking, a few months earlier at most; otherwise I cannot physically be alive or exist). But that is not the DA's claim. The uniform distribution does not imply that your physical body could be born so prematurely as to predate the first-ever human being. It simply says that, as an unknown prior, you being the child of your current parents and you being a different human being (e.g. someone born in the 2500s) are, ceteris paribus, equally likely. And the fact that you are not someone born in the 2500s is the evidence of doom-soon that drives the Bayesian update. A satisfactory rebuttal to the DA should undermine that.
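For concreteness, here is a minimal sketch of the update the DA proposes, with made-up numbers (two hypothetical totals, and a uniform prior over birth ranks, which is the very assumption under dispute):

```python
# Sketch of the Doomsday-style update with made-up numbers: two hypotheses
# about the total number of humans ever, each given prior 0.5. "I" am
# treated as a uniform draw over all birth ranks (the contested assumption).
doom_soon_total = 200e9    # hypothetical: 200 billion humans in total
doom_late_total = 200e12   # hypothetical: 200 trillion humans in total
prior = 0.5
# Likelihood of my observed birth rank (about 100 billion) under each:
like_soon = 1 / doom_soon_total
like_late = 1 / doom_late_total
posterior_soon = prior * like_soon / (prior * like_soon + prior * like_late)
print(posterior_soon)  # ~0.999: the rank "evidence" strongly favours doom-soon
```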
Let's say I concede to your argument: since I cannot be born much earlier or much later than I really was, I therefore cannot regard myself as a random sample from all time. Then what? Is it reasonable to regard myself as a random sample of all human beings born within that short timeframe of my possible birth? Are you comfortable with that? Doesn't it also carry the underlying assumption of preexisting souls being incarnated into bodies?
This post highlights my problem with your approach: I just don't see a clear logic dictating which interpretation to use in a given problem—whether it's the specific first-person instance or any instance in some reference class.
When Alice meets Bob, you are saying she should construe it as "I meet Bob in the experiment (on any day)" instead of "I meet Bob today" because "both awakenings are happening to her, not another person". This personhood continuity, in your opinion, is based on what? Given that you have distinguished the memory-erasure problem from the fission problem, I would venture to guess you identify personhood by the physical body. If that's the case, would it be correct to say you regard anthropic problems using memory erasure as fundamentally different from problems with fission or clones? Entertain this: what if the exact procedure is not disclosed to you, then what? E.g., there is a chance that the "memory erasure" is actually achieved by creating a clone of Alice, waking that clone on Monday, then destroying it, and waking the original on Tuesday. What would Alice's probability calculation be then? Does anything change if fission is used instead of cloning? What would Alice's probability of Tails be when she sees Bob, if she is unsure of the exact procedure?
Furthermore, you hold that if she saw Bob, Alice should interpret it as "I have met Bob (on some day) in the experiment". But if she didn't see Bob, she shall interpret it as "I haven't met Bob specifically today". In other words, whether to use "specifically today" or "someday" depends on whether or not she sees Bob. Does this not seem problematic to you at all?
I'm not sure what you mean in your example. Beauty is awakened on Monday with 50% chance; if she is awakened, then what happens? Nothing? The experiment just ends, perhaps with an inconsequential fair coin toss anyway? And if she is not awakened, then if the coin lands Tails she wakes on Tuesday? Is that the setup? I fail to see any anthropic element in this question at all. Of course I would update the probability to favour Tails upon awakening in this case, because that is new information for me: I wasn't sure I would find myself awake during the experiment at all.
I guess my main problem with your approach is that I don't see a clear rationale for which probability to use, or when to interpret the evidence as "I see green" and when as "anyone sees green", when both statements are based on the fact that I drew a green ball.
For example, my argument is that after seeing the green ball, my probability is 0.9, and I shall make all my decisions based on that. Why not update the pre-game plan based on that probability? Because the pre-game plan is not my decision. It is an agreement reached by all participants, a coordination. That coordination is reached by everyone reasoning objectively, which does not accommodate any first-person self-identification like "I". In short, when reasoning from my personal perspective, use "I see green"; when reasoning from an objective perspective, use "someone sees green". My entire solution (PBR) for anthropic and related questions is based on exactly this supposition of the axiomatic status of the first-person perspective. It gives the same explanation throughout, and one can predict what the theory says about any given problem. Some results are greatly disliked by many, like the nonexistence of self-locating probability and the existence of perspective disagreement, but those are clearly the conclusions of PBR, and I am advocating it.
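To be explicit about where the 0.9 comes from, here is a minimal sketch of the arithmetic, assuming the usual composition of 18 green/2 red vs 2 green/18 red (the exact split isn't restated in this thread, but any 90/10 mix gives the same figure):

```python
# First-person Bayesian update: a fair coin picks between a mostly-green urn
# (assumed 18 green, 2 red) and a mostly-red urn (assumed 2 green, 18 red).
prior = 0.5                        # prior for the mostly-green urn
p_green_if_mostly_green = 18 / 20  # chance my ball is green, given mostly green
p_green_if_mostly_red = 2 / 20     # chance my ball is green, given mostly red

posterior = (prior * p_green_if_mostly_green) / (
    prior * p_green_if_mostly_green + prior * p_green_if_mostly_red
)
print(posterior)  # 0.9
```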
You are arguing that the two interpretations, "I see green" and "anyone sees green", are both valid, and which one to use depends on the specific question. But to me, what exact logic dictates this assignment is unclear. You argue that if the bet is structured so as not to depend on which exact person gets green, then "my decision" shall be based on "anyone sees green". That seems to me a way of simply selecting whichever interpretation does not yield a problematic result: a practice of fitting theory to results.
In the example I brought up in the last reply (what would you do if you drew a green ball and were told that all participants said yes), you used the probability of 0.9, the rationale being that you are the only decider in this case. It puzzles me in exactly what sense "I am the only decider". Didn't the other people also decide to say "yes"? Didn't their "yes" contribute to whether the bet would be taken, the same way yours did? If you are saying I am the only decider because whatever I say determines whether the bet is taken, how is that different from deriving others' responses by using the assumption that "everyone in my position would make the same decision as I do"? But you used the probability of 0.5 ("someone sees green") in that situation. If you mean being the only decider in a causal, counterfactual sense, then you are still in the same position as all the other green-ball holders. What justifies the change in which interpretation, and which probability (0.5 or 0.9), to use?
There is also our exchange about perspective disagreement in the other post, where you and cousin-it were having a discussion. I, by PBR, concluded there should be a perspective disagreement. You held that there won't be a probability disagreement, because the correct way for Alice to interpret the meeting is "Bob has met Alice in the experiment overall" rather than "Bob has met Alice today". I am not sure of your rationale for picking one interpretation over the other. It seems the correct interpretation is always the one that does not give the problematic outcome. And that, to me, is a practice of avoiding the paradoxes, not a theory that resolves them.
I maintain that the memory-erasure and fission problems are similar because I regard the first-person identification as applying equally to both questions. The inherent identifications of both "NOW" and "I" are based on the primitive perspective. I.e., to Alice, today's awakening is not the other day's awakening; she can naturally tell them apart because she is experiencing the one today.
I don't think our difference comes from the non-fissioned person always staying in Room 1 while the fissioned persons are randomly assigned to either Room 1 or Room 2. Even if the experiment were changed so that the non-fissioned person is randomly assigned to one of the two rooms, while the fissioned person with the original left body always stays in Room 1 and the one with the original right body always in Room 2, my answer wouldn't change.
Our difference still lies in the primitivity of perspective. In this current problem by cousin-it, I would say Alice should not update the probability after meeting Bob, because from her first-person perspective the only thing she can observe is "I see Bob (today)" vs "I don't see Bob (today)", and her probability shall be calculated accordingly. She is not at the vantage point to observe "I see Bob on one of the two days" vs "I don't see Bob on either of the two days", so she should not update that way.
If you use this logic not for the latitude you are born at but for your birth rank among human beings, then you get the Doomsday argument.
To me the latitude argument is even more problematic, as it involves issues such as linearity. But in any case I am not convinced by this line of reasoning.
P.S. 59N is really, really high. Anyway, if you use that information to make predictions about where humans are generally born latitude-wise, it will be way, way off.
I think this highlights our difference, at least in the numerical sense, in this example. I would say Alex and Bob would disagree (provided Alex is a halfer, which is the correct answer in my opinion). The disagreement is again based on perspective-based self-identification. From Alex's perspective, there is an inherent difference between "today's awakening" and "the other day's awakening" (provided there are actually two awakenings). But to Bob, either of those is "today's awakening"; Alex cannot communicate the inherent difference from her perspective to Bob.
In other words, after waking up during the experiment, the two alternatives are "I see Bob today" and "I do not see Bob today", both at 0.5 chance regardless of the coin toss result.
We both argue that the two probabilities, 0.5 and 0.9, are valid. The difference is how we justify them. I have held that "the probability of the mostly-green urn" names different concepts from different perspectives: from a participant's first-person perspective, the probability is 0.9; from an objective outsider's perspective, even after I drew a green ball, it is 0.5. The difference comes from the fact that the inherent self-identification "I" is meaningful only to the first person. This is the same reason behind my argument for perspective disagreement in previous posts.
I propose that the two probabilities be used for questions regarding their respective perspectives: for my decisions maximizing my payoffs, use 0.9; for a coordination strategy prescribing the actions of all participants with the goal of maximizing the overall payoff, use 0.5. In fact, the paradox starts with the coordination strategy from an objective viewpoint when discussing the pre-game plan, but later switches to the personal strategy using 0.9.
I understand you do not endorse this perspective-based reasoning. So what is the logical foundation of this duality of probabilities, then? If you say they are based on two mathematical models that are both valid, then after you draw a green ball, if someone asks your probability of the mostly-green urn, what is your answer? 0.5 AND 0.9? It depends?
Furthermore, using whatever probability best matches the betting scheme is, to me, a convenient way of avoiding undesirable answers without committing to a hard methodology. It is akin to endorsing SSA or SIA situationally to get the least paradoxical answer for each individual question. But I also understand that from your viewpoint you are following a solid methodology.
If my understanding is correct, you hold that there is only one goal for the current question (maximizing the overall payoff and maximizing my personal payoff are the same goal) and only one strategy (my personal strategy and the coordination strategy are the same strategy), but that because of the betting setup the correct probability to use is 0.5, not 0.9. If so, after drawing a green ball and being told all other participants have said yes to the bet, what is the proper answer to maximize your own gain? Which probability would you use then?
I am trying to point out the difference between the following two:
(a) A strategy that prescribes all participants' actions, with the goal of maximizing the overall combined payoff; in the current post I called it the coordination strategy. In contrast to:
(b) A strategy that applies to a single participant's actions (mine), with the goal of maximizing my personal payoff; in the current post I called it the personal strategy.
I argue that they are not the same thing: the former should be derived from an impartial observer's perspective, while the latter is based on my first-person perspective. The probabilities are different because self-specification (indexicals such as "I") is not objectively meaningful, giving 0.5 and 0.9 respectively. Consequently the corresponding strategies are not the same. The paradox equates the two: for the pre-game plan it uses (a), while for the during-the-game decision it uses (b), but it attempts to confound the latter with (a) by using an acausal analysis to let my decision prescribe everyone's actions, also capitalizing on the ostensibly convincing intuition that "the best strategy for me is also the best strategy for the whole group, since my payoff is 1/20 of the overall payoff."
Admittedly there is no actual external observer forcing the participants to make the move; however, by committing to coordination the participants are effectively committed to that move. This would be quite obvious if we modified the question a bit: instead of dividing the payoff equally among the 20 participants, say the overall payoff is divided only among the red-ball holders. (We can incentivize coordination by letting the same group of participants play the game repeatedly for a large number of games.) What would the pre-game plan be? The same as in the original setup: everyone says no to the bet. (In fact, if played repeatedly, this setup would pay each participant the same as the original setup.) After drawing a green ball, however, it would be pretty obvious that my decision does not affect my payoff at all, so saying yes or no doesn't matter. But if I am committed to coordination, I ought to keep saying no. In this setup it is also quite obvious that the pre-game strategy is not derived by letting green-ball holders maximize their personal payoffs. So the distinction between (a) and (b) is more intuitive.
If we recognize the difference between the two, then the fact that (b) does not exactly coincide with (a) is not really a disappointment or a problem requiring any explanation. Non-coordinated strategies that are optimal for each individual don't have to be optimal in terms of the overall payoff (as a coordination strategy would be).
Also, I can see that the question "Has somebody with blonde hair, six-foot-two, with a mole on the left cheek, barefoot, wearing a red shirt and blue jeans, with a ring on their left hand and a bruise on their right thumb received a green ball?" comes from your long-held position of FNC. I am obliged to be forthcoming and say that I don't agree with it. But of course, I am not naive enough to believe either of us will change our minds in this regard.
If one person is created in each room, then there is no probability of "which room I am in", because that is asking "which person I am". To arrive at any probability you need to employ some sort of anthropic assumption.
If 10 persons are randomly assigned (or assigned according to some unknown process), the probability of "which room I am in" exists. No anthropic assumption is needed to answer it.
You can also see the difference using a frequentist model, by repeating the experiments. The latter question has a strategy that can maximize "my" personal interest. The former doesn't; it only has a strategy that, if abided by everyone, maximizes the group interest (a coordination strategy).
The probability of 0.9 is the correct one to use to derive "my" strategies maximizing "my" personal interest. E.g., if all other participants decide to say yes to the bet, what is your best strategy? Based on the probability of 0.9 you should also say yes; based on the probability of 0.5 you would say no. However, the former will yield you more money, which would be obvious if the experiment were repeated a large number of times.
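To see the repetition claim numerically, here is a Monte Carlo sketch under the same assumed 18/2 vs 2/18 composition, with an illustrative personal side bet (win $1 if the urn is mostly green, lose $3 if not; the stakes are hypothetical, chosen only so that pricing at 0.9 says yes while pricing at 0.5 says no):

```python
import random

# Monte Carlo sketch: a fair coin picks the urn; I am one of 20 participants
# and am dealt one ball. Hypothetical stakes: +$1 if the urn is mostly green,
# -$3 otherwise, and I always take the bet whenever I hold a green ball.
trials = 10**6
greens = 0
mostly_green_given_green = 0
payoff = 0.0
for _ in range(trials):
    mostly_green = random.random() < 0.5
    balls = ["G"] * 18 + ["R"] * 2 if mostly_green else ["G"] * 2 + ["R"] * 18
    my_ball = random.choice(balls)  # my draw is symmetric with everyone else's
    if my_ball == "G":
        greens += 1
        mostly_green_given_green += mostly_green
        payoff += 1 if mostly_green else -3

print(mostly_green_given_green / greens)  # ~0.9, not 0.5
print(payoff / greens)                    # ~+0.6 per green draw
```

Pricing the same bet at 0.5 would give an expected value of 0.5(1) + 0.5(-3) = -1 and a decision to decline; yet the realized frequency among my green draws is 0.9, and always saying yes nets about +0.6 per green draw.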
You astutely pinpointed that saying yes is not beneficial because you would be paying for the decisions of the idiot versions of you who drew red. This analysis is based on the assumption that your personal decision prescribes the actions of all participants in similar situations (the assumption that Radford Neal first argued against, with which I agree). But then such a decision is no longer a personal decision; it is a decision for all, and it is evaluated by the overall payoff. That is a coordination strategy, which is based on an objective perspective and should use the probability of 0.5.
The problem is set up in a way that makes people confound the two. Say the payoff is not divided among all 20 participants but instead among the red-ball holders. The resulting coordination strategy would still be the same (the motivation for coordination can be that the same group of 20 participants will keep playing for a large number of games). But the distinction between the personal strategy maximizing personal payoff and the coordination strategy maximizing the overall payoff would be obvious: the personal strategy after drawing a green ball is to do whatever you want, because your choice does not affect you (which is well known when coming up with the pre-game coordination plan), while the coordination strategy would remain the same: say no to the bet. People would be less likely to mix the two strategies and pose it as an inconsistency paradox in such a setup.
Yep, under PBR, perspective, i.e. which agent is the "I", is primitive. I can take it as given, but there is no way to analyze it. In other words, a self-locating probability like "what is the probability that I am L" is undefined.
The Sleeping Beauty problem and this paradox are highly similar; I would say they are caused by the same thing: the switching of perspectives. However, there is one important distinction.
For the current paradox, there is an actual sampling process for the balls. Therefore there is no need to assume a reference class for "I". Take who I am (which person's perspective I experience the world from) as a given, and note that the ball-assigning process treats "I" and the other participants as equals. So there is no need to interpret "I" as a random sample from all 20 participants; you can perform the probability calculations as in a regular probability problem. This means there is no need to make an SIA-like assumption. The question does not depend on how you construe "the evidence ... that I'm in position I".
I think it is pretty uncontroversial that if we take all the betting and money away from the question, we can all agree the probability becomes 0.9 once I receive a green ball. So if I understand correctly, by disagreeing with this probability you are in the same position as Ape in the Coat: the correct probability depends on the betting scheme. That is consistent with your later statement that "whether to use SSA or SIA...dependent on the question setup."
My position has always been never to use any anthropic assumption, be it SSA or SIA or FNC or anything else: they all lead to paradoxes. Instead, take the perspective (or, in your words, "I am in position I") as primitively given, and reason within this perspective. In the current paradox, that means either reasoning from the perspective of a participant and using the probability of 0.9 to make decisions in your own interest, or alternatively thinking in terms of a coordination strategy by reasoning from an impartial perspective with the probability remaining at 0.5; but never mix the two.
Numerically it is trivial to say the better thing to do (for each bet, for the benefit of all participants) is not to update. The question, of course, is how we justify this. After all, it is pretty uncontroversial that the probability of the urn with mostly green balls is 0.9 when the randomly assigned ball I receive turns out to be green. You can enlist a new type of decision theory such as UDT, or a new type of probability theory that allows two probabilities to both be valid depending on the betting scheme (as Ape in the Coat's does). What I am suggesting is to stick with traditional CDT and probability theory, but recognize the difference between the coordination and personal strategies, because they are from different perspectives.
For the BB example you posted, my long-held position is that there is no way to reason about the probability of "I am a BB", even with the added assumption that for each real me there are 10 BBs appearing in the universe. However, if you really are a BB, then your decision doesn't matter to your personal interest, as you will disappear momentarily. So you can make your personal decision entirely on the assumption that you are not a BB. Alternatively (and I would say not very realistically), you could assume that the real you and the BB yous care about each other and want to come up with a coordination strategy that benefits the entire group, with each then faithfully following that strategy without thinking about their own personal strategies. In this example both approaches recommend the same decision: going to the gym.
Late to the party, but I want to say this post is quite on point with its analysis. I just want to add my own reading of the problem, as a supporter of CDT, which has a different focus.
I agree that the assumption that every person makes the same decision as I do is deeply problematic. It may seem intuitive if the others are "copies of me", which is perhaps why this problem was first brought up in an anthropic context. CDT inherently treats the decision maker as an agent apart from his surrounding world, outside the scope of the causal analysis. Assuming the "other copies of me" give the same decision as I do puts the entire analysis into a self-referential paradox.
In contrast, the analysis of this post is the way to go. I shall just regard the others as part of the environment; their "decisions" are nothing special, merely parameters I shall consider as inputs to the only decision-making in question, my own. That cuts off the feedback loop.
While CDT's treating of the decision maker as a separate-from-the-world agent outside the analysis scope is often regarded as its Achilles' heel, I think that is precisely why it is correct. For decision is inherently a first-person concept, where free will resides. If we cannot imagine reasoning from a certain thing's perspective, then whatever that thing outputs is a mere mechanical product; the concept of decision never applies.
I diverge from this post in that, instead of ascribing this reflective-inconsistency paradox to the above-mentioned assumption, I think its cause is something deeper. In particular, the F(p) graph shows that IF all others stick to the plan of p=0.9355, then it makes no difference whether I take or reject the bet (so there is no incentive to deviate from the pre-game plan). However, how solid is the assumption that they will stick to the plan? It cannot be said to be the only rational scenario. And according to the graph, if I think they have deviated from the plan, then so must I. Notice that in this analysis nothing forces others' decisions to be the same as mine; our strategies could well differ. So the notion that the reflective inconsistency resolves itself by rejecting the assumption that everyone makes the same decision only works in the limited case where we adopt the alternative assumption that all others stick to the pre-game plan (or that all others reject the bet). Even in that limited case, my pre-game strategy was to take the bet with p=0.9355, while the in-game strategy was anything goes (as the choices make no difference to the reward). Sure, there is no incentive to deviate from the pre-game plan, but I am hesitant to call that a perfect resolution of the problem.
Betting and reward arguments like this are deeply problematic in two senses:
- The measure of the objective is the combined total reward to everyone in a purported reference class, like the 20 "you"s in the example. Usually the question tries to boost this intuition by saying all of them are copies of "you". However, even if the created persons were vastly different (they don't even have to be persons; AIs or aliens would work just fine), it would not affect the analysis at all. Since the question is directed at you, and the evidence is your observation that you awake in a green room, shouldn't the bet and reward be constructed to concern your own personal interest, in order to reflect the correct probability? Why use the alternative objective and measure the combined reward of the entire group? This takes away the first-person element of the question; just as you said in the post, the bet has nothing to do with "'I' see green."
- Since such betting arguments use the combined reward of a supposed reference class instead of self-interest, the detour is completed with an additional assertion in the spirit of "what's best for the group must be best for me." That is typically achieved by some anthropic assumption in the form of seeing "I" as a random sample from the group. Such intuitions run so deep that people use the assumption without acknowledging it. In trying to explain how I am "a person in a green room" yet the two can have different probabilities, you said "The same way a person who visited a randomly sampled room can have different probability estimate than a person who visited a predetermined room." This subtly assumes who "I" am: that the person created in a green room is in the same position as someone who randomly samples the rooms and sees green. However intuitive that might be, it is an unsubstantiated assumption.
These two points combined effectively change an anthropic problem regarding the first-person "I" into a perspective-less, run-of-the-mill probability problem. Yet this conversion is unsubstantiated and, to me, the root of the paradoxes.
The more I think about it, the more certain I am that many unsolved problems, not just anthropics, are due to the deep-rooted habit of view-from-nowhere reasoning. Recognizing perspective as a fundamental part of logic would be the way out.
Problems such as anthropics, the interpretive challenges of quantum mechanics, CDT's inability to analyze the self, how agency and free will coexist with physics, Russell's paradox, Gödel's incompleteness theorems, etc.
Maybe I am the man with a hammer looking for nails. Yet deep down, being honest with myself, I don't think that's the case.
Well, I didn't expect this to be the majority opinion. I guess I was too in my head.
But to explain my rationale: the effects of the two drugs differ only during the operation; their end results are identical. So after the operation, barring external records like bank account information, there is no way even to tell which drug I took; the results would be the same. Taking external records into consideration, the extra dollar in the bank would certainly be more welcome.
The memory-inhibiting part was supposed to preclude consideration of the journey. From a post-operation perspective, there is no experience of a "journey" to speak of. Now it's clear to me that people evaluate it regardless of the memory part.
Understandable. As much as I firmly believe in my theory, I have to admit I have a hard time making it look convincing.
The conflict arises when the self at the perspective center is making the decision but is also being analyzed. With CDT this leads to a self-referential paradox: I am making the decision (which, according to CDT, is based on agency and is unpredictable), yet there really is no decision, merely the generation of an output.
Precommitments sidestep this by saying there is no decision at the point being analyzed. They essentially move the decision to a different observer-moment, thus allowing that point to be included in the decision analysis. In Newcomb's problem, this is like asking not "what should I do when facing the two boxes" but "what kind of machine/brain should I design so that it performs well in Newcomb experiments".
I didn't "choose" to generalize my position beyond conscious beings. It is an integral part of it. If perspectives are valid only for things that are conscious (however that is defined), then perspective has some prerequisite and is no longer fundamental. It would also give rise to the age-old reference class problem and no longer be a solution to anthropic paradoxes. E.g. are computer simulations conscious? answers to that would directly determine anthropic problems such as Nick Bostrom's simulation argument.
Phenomenal consciousness is integral to perspective also in the sense that you know your perspective, i.e. which one is the self, precisely because the subjective experience is most immediate to it. So when a subject wakes up in the fission experiment, they know which person "I" refers to even though they cannot point that person out on a map.
My argument is in direct conflict with physicalism. And it places phenomenal consciousness and subjective experience outside the field of physics.
Consciousness has many contending definitions. E.g., if you take the view that consciousness is identified by physical complexity and the ability to process data, then it doesn't have anything to do with perspective. I'm endorsing phenomenal consciousness, as in the hard problem of consciousness: we can describe brain functions purely physically, yet that does not resolve why they are accompanied by subjective feelings. And this "feeling" is entirely first-person; I don't know your feelings, because otherwise I would be you instead of me. "What it means to be a bat is to know what it is like to be a bat."
In short, by suggesting they are irreducible and primitive, my position is incompatible with the physicalist worldview, both in the definition of consciousness and in the nature of perspective. Knowing this might be regarded as a weakness by many, I feel obliged to point it out.
I do think SIA and SSA are making extraordinary claims and the burden of proof is on them. I have been arguing for several years that assuming the self to be a random sample is wrong. That is not the problem I have with this argument. What I disagree with is that your argument depends on phrases and concepts such as "'your' existence" and "who 'you' are" without even attempting to define which one this "you" refers to. My position is that it refers to the self, based on the first-person perspective, which is fundamental, a primitive concept. So it requires no definition, as long as one reasons from the perspective of an experiment subject. But your argument holds that perspective is not fundamental, so treating "you" (which is the first-person "I" to the reader) as primitive is not possible. Then how do you define this critical concept? And why is your definition better than SIA's or SSA's? You also have a burden of proof, because without a clear definition your argument's conclusion can jump back and forth in limbo. This is illustrated by the example of the modified BJE above. You said:
If it was just two people created, the first always with a blue jacket and the second always without, and, once again, you were not necessarily meant to be the first, then this counts as random sampling and the BJE analysis stands.
Isn't this treating "you" as a random sample when there is no actual sampling process, i.e. the very position you are arguing against?
And how is this experiment different from your FBJE? In other words, what process enables the FBJE to guarantee that "you" will be the person in the blue jacket regardless of the coin toss? How come there is no way you could be the person whose jacket depends on the toss? Some fundamental stipulation about what "you" would be is used here.
BTW, the FBJE is not comparable to the Sleeping Beauty problem. In the FBJE, by stipulation, you can outright say your blue jacket is not due to the coin landing Tails. But Beauty can't outright say it is Monday.
Something's not adding up. You said that the anthropic paradox is not about the first-person perspective or consciousness. But later:
But in ISB there are no iterations in which you do not exist. The number of outcomes in which you are created equals the total number of iterations.
The most immediate question is the definition of "you" in this logic. Why can't thirders define "you" as a potentially existing person? In that case the statement would be false. If you define it as an actually existing person, then which one? It seems to me you are using the word "you" to let readers imagine themselves as a subject created in ISB, so that it points to the intuitively understood self. But that definition uses the first-person perspective as a fundamental concept. And later:
But Heads outcome in Incubator Sleeping Beauty is not. You are not randomly selected among two immaterial souls to be instantiated. You are a sample of one. And as there is no random choice happening, you are not twice as likely to exist when the coin is Tails and there is no new information you get when you are created.
So who you are (who the first person is) is fundamental. And so is its existence.
From past experience, I know this is not the easiest topic to discuss. So let's use a concrete example:
In the BJE, suppose that for Heads, instead of creating two people and then randomly sampling one of them to wear a blue jacket, a person is created wearing the blue jacket, and some time later another person is created without one, so no random sampling takes place. Does your analysis change? Or answer these questions: 1. Before looking down to check, what is the probability that you are wearing a blue jacket? 2. After seeing the blue jacket, what is the probability that the coin landed Heads?
I don't feel there is enough common ground for effective discussion. This is the first time I have seen the position that the Sleeping Beauty paradox disappears when the Heads awakening is sampled between Monday and Tuesday.
Can you point out why Tails-and-Monday and Tails-and-Tuesday are causally connected, while the 100 people created by the incubator are not, but are independent outcomes instead?
Nothing is stopping us from perceiving the situations as different possible worlds, not different places in the same world.
All this post is trying to argue is that statements like this require some justification. Even if the justification is mere stipulation, it should at least be recognized as an additional assumption. Given that anthropic problems often lead to controversial paradoxes, it is prudent to examine every assumption we make in solving them.
If we modified the original Sleeping Beauty problem such that, on Heads, you would be awakened on one randomly sampled day (either Monday or Tuesday), would you change your answer to 1/3?
Anthropic paradoxes happen only when we use events representing different self-locations in the same possible world. If the paradoxes were just problems of probability theory, why this limited scope?
I do consider anthropic problems, in one sense or another, to be metaphysical, and I know there are people who disagree with this. But wouldn't stipulating that anthropic paradoxes are solely probability problems also require justification, apart from "a rule of thumb"?
Like Question 1 and traditional probability problems, Question 3's events reflect different possible worlds: different outcomes of the room-assigning experiment. Question 2's supposed events reflect different locations of the self in the same possible world, i.e. different centred worlds.
Controversial anthropic probability problems occur only when the latter type is used. So there is good reason to think this distinction is significant.
It seems the earlier posts and your post have defined the anthropic shadow differently, in subtle but important ways. The earlier posts by Christopher and Jessica argued AS is invalid: there should be updates given that I survived. Your post argues AS is valid: there are games where no new information gained while playing can change your strategy (no useful updates). The former focuses on updates, the latter on strategy. These two positions are not mutually exclusive.
Personally, the concept of a "useful update" seems situational. For example, say someone has a prior that leads him to conclude the optimal strategy is not to play Chinese Roulette. However, he is forced to play several rounds regardless of what he thinks. After surviving those rounds (say EEEEE), it might very well be that he updates his probability enough to change his strategy from no-play to play. That would be a useful update. And this "forced-to-play" kind of situation is quite relevant to existential risks, on which anthropic discussions tend to focus.
To my understanding, the anthropic shadow refers to the absurd logic in Leslie's Firing Squad: "Of course I have survived the firing squad; that is the only way I can make this observation. Nothing surprising here." Or reasoning such as "I have played Russian roulette 1000 times, but I cannot increase my belief that there is actually no bullet in the gun, because surviving is the only observation I can make."
In the Chinese Roulette example, it is correct that the optimal strategy for the first round is also optimal for any following round. It is also correct that if you decide to play the first round, then you will keep playing until you are kicked out, i.e. there is no way to adjust the strategy. But that doesn't justify saying there is no probability update: each subsequent decision, while still being "keep playing", can be based on a different credence. (And it should be.) It seems absurd to say I would not be more confident to keep going after 100 empty shots.
In short, changing strategy implies there was an update; not changing strategy doesn't imply there was no update.
That's not it. In your simulation you give equal chances to Heads and Tails, and then subdivide Tails into the two equiprobable outcomes T1 and T2 while keeping all the probability of Heads as H1. It's essentially a simulation based on SSA. Thirders would say that is the wrong model because it only considers cases where the room is occupied: H2 never appears in your model. Thirders suggest there is new information upon waking up in the experiment precisely because it rejects H2. So the simulation should divide both Heads and Tails into the equiprobable outcomes H1, H2, T1, T2. Waking up rejects H2, which pushes P(Tails) to 2/3; then learning that it is Room 1 pushes it back down to 1/2.
To thirders, your simulation is incomplete. It should first include randomly choosing a room and finding it occupied; that pushes the probability of Tails to 2/3. Knowing it is Room 1 then pushes it back to 1/2.
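Concretely, a sketch of what I take the thirder-complete simulation to be (assuming, as in your model, that under Heads the single occupant is always in Room 1):

```python
import random

# Thirder-style simulation: equiprobable H1, H2, T1, T2, i.e. flip the coin
# AND sample a room. A trial counts as an awakening only if the sampled room
# is occupied (Heads: only Room 1 occupied; Tails: both rooms occupied).
awake = tails_awake = 0
room1 = tails_room1 = 0
for _ in range(10**6):
    tails = random.random() < 0.5
    room = random.choice([1, 2])
    if tails or room == 1:   # "waking up" rejects the empty-room case H2
        awake += 1
        tails_awake += tails
        if room == 1:        # then learning "this is Room 1"
            room1 += 1
            tails_room1 += tails

print(tails_awake / awake)   # ~2/3 after waking up
print(tails_room1 / room1)   # ~1/2 after also learning it is Room 1
```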
One thing that should be noted: while Adam's argument is influential (especially since it was the first, to my knowledge, to point out that halfers must either reject Bayesian updating upon learning it is Monday or accept that a fair coin yet to be tossed has a probability other than 1/2), thirders in general disagree with it in some crucial ways. Most notably, Adam argued that there is no new information upon waking up in the experiment. In contrast, most thirders, endorsing some version of SIA, would say waking up in the experiment is evidence favouring Tails, which has more awakenings. Therefore targeting Adam's argument specifically is not very effective.
In your incubator experiment, thirders in general would find no problem: waking up is evidence favouring Tails, so P(T)=2/3; finding that it is Room 1 is evidence favouring Heads, decreasing P(T) to 1/2.
Here is a model that might interest halfers. You participate in this experiment: the experimenter tosses a fair coin; if Heads, nothing happens and you sleep through the night uneventfully. If Tails, they split you down the middle into two halves, completing each half by cloning the missing part onto it. The procedure is accurate enough that memory is preserved in both copies. Imagine waking up the next morning: you can't tell whether anything happened to you, whether either of your halves is the same physical piece as yesterday, or whether there is another physical copy in another room. But regardless, you can participate in the same experiment again. The same thing happens when you find yourself waking up the next day, and so on. As this continues, you will count about an equal number of Heads and Tails among the experiments you have subjective experiences of.
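A sketch of that counting, if it helps: whichever copy we follow, each run contributes exactly one awakening and one fair toss to that thread's tally, so nothing beyond the fairness of the coin is needed:

```python
import random

# Follow one thread of subjective experience through repeated runs. Each run
# tosses a fair coin; whether or not a split happens, the thread we follow
# wakes up exactly once and records exactly one result.
runs = 10**6
heads = sum(random.random() < 0.5 for _ in range(runs))
print(heads / runs)  # ~0.5: Heads and Tails are counted about equally
```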
Counting subjective experience does not necessarily lead to Thirderism.
I would also point out that FNC is not strictly a view-from-nowhere theory. The probability updates it proposes are still based on an implicit assumption of self-sampling.
I really don't like the pragmatic argument against the simulation hypothesis. It demonstrates a common theme in anthropics which, IMO, is misleading the majority of discussions. By saying that pre-simulation ancestors have an impact on how the singularity plays out, and that we therefore ought to make decisions as if we are real pre-simulation people, it subtly shifts the objective of our decisions. Instead of the default objective of maximizing the reward to ourselves (doing what's best for us in our world), it changes the objective to achieving a certain state of the universe concerning all the worlds, real and simulated.
These two objectives do not necessarily coincide; they may even demand conflicting decisions. Yet it is very common for people to argue that self-locating uncertainty ought to be treated a certain way because doing so results in rational decisions under the latter objective.
Exactly this. The problem with the current anthropic schools of thought is using this view from nowhere while simultaneously using the concept of "self" as a meaningful way of specifying a particular observer. It effectively jumps back and forth between the god's-eye and first-person views, with arbitrary assumptions to facilitate the transitions (e.g., treating the self as the random sample of a certain process carried out from the god's-eye view). Treating the self as a given starting point and then reasoning about the world would be the way to dispel the anthropic controversies.
Let's take the AI driving problem in your paper as an example. The better strategy is regarded as the one that gives the better overall reward across all drivers. Whether the rewards of the two instances of a bad driver should count cumulatively or only once is what divides halfers and thirders. Once that is determined, the optimal decision can be calculated from the relative fractions of good/bad drivers/instances. It doesn't involve taking the AI's perspective in a particular instance and deciding the best action for that particular instance, which would require self-locating probability. The "right decision" is justified by averaging over all drivers/instances, which does not depend on the particularity of self and now.
Self-locating probability would be useful for decision-making if the decision were evaluated by its effect on the self, rather than by the collective effect on a reference class. But no rational strategy exists for that goal.
- If you were born a month earlier as a preemie instead of full-term, it can quite convincingly be said you are still the same person. But if you were born a year earlier, would you still be the person you are now? There would obviously be substantial physical differences: a different sperm and egg, maybe a different gender. If you were among the first few human beings born, there would be few similarities between the physical person that is you in that case and the physical person you are now. So the birth-rank discussion is not about whether the physical person you regard as yourself is born slightly earlier or later, but about which one, among all the people in the entire human history, is you, i.e. from which one of those persons' perspectives do you experience the world?
- The anthropic problem is not about possible worlds but about centered worlds. Different events in anthropic problems can correspond to the exact same possible world while differing in which perspective you experience it from. This circles back to point 1, and the decoupling of the first-person "I" from the particular physical person.
When you say the time of your birth is not special, you are already trying to judge it objectively. To you personally, the moment of your birth is special. And, more relevantly to the DA, from the first-person perspective the moment "now" is special.
- From an objective viewpoint, discussing a specific observer or a specific moment requires some explanation, some process pointing to it, e.g. a sampling process. Otherwise the discussion fails to be objective, by inherently focusing on someone/sometime.
- From the first-person perspective, discussions based on "I" and "now" don't require such an explanation; they are inherently understandable. The future is just the moments after "now", and its prediction ought to be based on my knowledge of the present and past.
What the Doomsday argument is saying is that the fact "I am this person (living now)" shall be treated the same way as if someone at the objective viewpoint of point 1 performed a random sampling and found me (now). The two cases are supposed to be logically equivalent, so that the two viewpoints can say the same thing. I'm saying let's not make that assumption. And in this case, the objective viewpoint cannot say the same thing as the first-person perspective, so we can't switch perspectives here.
I didn't explicitly claim so. But it involves reasoning from a perspective that is impartial to any moment. This independence is manifested in its core assumption: that one should regard oneself as randomly selected from all observers in one's reference class, from past, present, and future.
if you get a reward for guessing if your number is >5 correctly, then you should guess that your number is >5 every time.
I am a little unsure of your meaning here. Say you get a reward for correctly guessing whether your number is <5; would you then also guess your number is <5 each time?
I'm guessing that is not what you mean; instead, you are thinking that as the experiment is repeated more and more, the relative frequency of finding your own number >5 will approach 95%. What I am saying is that this belief requires an assumption treating the "I" as a random sample, whereas the non-anthropic problem requires no such thing.
For the non-anthropic problem, why take the detour of asking a different person each toss? You can personally take it 100 times, and since it's a fair die, about 95 of those times it will land >5. Obviously guessing yes is the best strategy for maximizing your personal interest. There is no assuming the "I" is a random sample, and no forced transcoding.
Let me construct a repeatable anthropic problem. Suppose tonight during your sleep you will be accurately cloned, memory preserved. Waking up the next morning, you may find yourself to be the original or one of the newly created clones. Label the original No. 1 and the 99 new clones No. 2 to No. 100, in the chronological order of their creation. Whether you are old or new, you can repeat this experiment. Say you take it repeatedly: wake up, fall asleep, and let the cloning happen each time. Every day you wake up, you find your own number. If you do this 100 times, would you say you ought to find your number >5 about 95 times?
My argument says there is no way to say that. Doing so would require assumptions to the effect that your soul has an equal chance of embodying each physical copy, i.e. that "I" am a random sample from the group.
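To make the hidden assumption visible, here is a sketch of the only way I can see to simulate the claimed ~95% frequency; note that the decisive line is precisely the soul-sampling assumption, not anything in the physical setup:

```python
import random

# Sketch of the repeated cloning experiment. The physical setup fixes what
# every copy sees: 100 copies exist each round, numbered 1..100. Nothing in
# that setup says which copy is "I"; the claimed frequency appears only once
# the contested assumption below is added.
count_gt5 = 0
for _ in range(100):
    # ASSUMPTION (soul-sampling): "I" am a uniformly random copy this round.
    my_number = random.randint(1, 100)
    count_gt5 += my_number > 5
print(count_gt5)  # about 95, but only because of the assumption above
```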
For the non-anthropic problem, you can use the 100-people version as justification, because among those people the die-tosser choosing you to answer a question is an actual sampling process. It is reasonable to think that in this process you are treated the same way as everyone else; e.g., the experimenter didn't sample you only on a certain number. But there is no sampling process determining which person you are in the anthropic version, let alone one we can assume treats all souls indifferently, or treats each physical body indifferently in your embodiment.
Also, the fact that people who believe the Doomsday argument objectively perform better as a group in your thought experiment is not a particularly strong case. Thirders have likewise constructed many thought experiments where supporters of the Doomsday argument (halfers) objectively perform worse as a group. But that is not my argument. I'm saying the collective performance of a group one belongs to is not a direct substitute for self-interest.
Thank you for the kind words. I understand the stance on self-locating probability; that's the part on which I get the most disagreement.
To me the difference is that for the unfair coin, you can treat the reference class as all tosses of unfair coins whose bias you don't know. Then the symmetry between Heads and Tails holds, and you can say that among such tosses the relative frequency would be 50%. But for the self-locating probabilities in the fission problem, there really is nothing pointing to any number. That is, unless we take the average over all agents and discard the "self", which requires taking the immaterial viewpoint and transcoding "I" by some assumption.
And remember, if you validate self-locating probability in anthropics, then the paradoxical conclusions are only a Bayesian update away.
In anthropic questions, probability predictions about ourselves (self-locating probabilities) lead to paradoxes. At the same time, they have no operational value, such as for decision-making. In a practical sense, we really shouldn't make such probabilistic predictions. In this post I am trying to explain the theoretical reason against them.
Consciousness is a property of the first person: e.g., to me, I am conscious, but I inherently cannot know that you are. Whether or not something is conscious is a matter of whether you think from that thing's perspective. So there is no typical or atypical conscious being: from my perspective I am "the" conscious being; if I reason from something else's perspective, then that thing is "the" conscious being instead.
Our usual notion of considering ourselves typical conscious beings arises because we are more used to thinking from the perspectives of things similar to us: e.g., we are more apt to think from the perspective of another person than of a cat, and from the perspective of a cat than of a chair. In other words, we tend to ascribe the property of consciousness to things more like ourselves, rather than the other way around (that we are typical in some sense).
The part where I know I am conscious while not knowing whether you are is an assertion. It is not based on reasoning or logic; it is simply because it feels so. The rest are arguments that depend on that assertion.
I thought the reply was addressed to me. Nonetheless, it's a good opportunity to delineate and inspect my own argument, so I'm leaving the comment here.