Posts

A Simplified Version of Perspective Solution to the Sleeping Beauty Problem 2020-12-31T18:27:14.349Z
Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes 2020-11-09T23:17:21.624Z
Why I Prefer the Copenhagen Interpretation(s) 2020-10-31T21:06:02.500Z
Leslie's Firing Squad Can't Save The Fine-Tuning Argument 2020-09-09T15:21:19.084Z
Hello ordinary folks, I'm the Chosen One 2020-09-04T19:59:10.799Z
Anthropic Reasoning and Perspective-Based Arguments 2020-09-01T12:36:41.444Z
Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? 2019-08-01T20:10:46.445Z
Sleeping Beauty Problem, Anthropic Principle and Perspective Reasoning 2019-03-09T15:21:02.258Z
Perspective Reasoning and the Sleeping Beauty Problem 2018-11-22T11:55:22.114Z
The Sleeping Beauty Problem and The Doomsday Argument Can Be Explained by Perspective Inconsistency 2018-08-05T13:45:27.185Z

Comments

Comment by dadadarren on A Simplified Version of Perspective Solution to the Sleeping Beauty Problem · 2021-01-12T21:36:22.739Z · LW · GW

I think your question is not naive at all. Stuart_Armstrong argued for a similar point, that anthropic questions should be about decision making rather than probability assignment, here. However, I do remember a post on LessWrong saying he has since modified his position, but I couldn't find that post right now.

That said, I think ignoring probability in anthropics is not the way to go. We generally regard probability as the logical basis of rational decision making, and there is no good reason why anthropic problems should be different. In my opinion, focusing on decision making sidesteps one problem: it forces us to state the decision objective as a premise, so Halfers and Thirders can each come up with objectives reflecting their own answers, and the question of which objective reflects the correct probability is avoided. By perspective-based reasoning, I think the correct objective should be a simple selfish goal, where the self is primitively identified by its immediacy to subjective experience (i.e. the perspective center). And for some paradoxes, such as the doomsday argument and the presumptuous philosopher, converting them into decision-making problems seems rather strange: I just want to know whether their conclusions are right or wrong, so why should the answer turn on whether I care about the welfare of all potentially existing humans?

Comment by dadadarren on Sunzi's《Methods of War》- Introduction · 2020-12-24T03:01:47.356Z · LW · GW

No, the translation omits too much. That roughly translates to:

吾以此知胜负矣: That is how I know victory from defeat (in advance).

将听吾计,用之必胜,留之;将不听吾计,用之必败,去之。: (If) the general follows my advice, victory is ensured, (make him) stay. (If) the general does not follow my advice, it will lead to defeat, (make him) leave.

计利以听,乃为之势,以佐其外: Given that (my) strategies are followed, create situations so as to have a synergizing environment (for the strategies).

势者,因利而制权也: Such situations are circumstance-dependent (basically saying: be flexible and resourceful).

Classical Chinese is a strange thing. Anybody with a middle-school education can understand it fairly well, yet a precise translation is difficult. The above is the best translation I can give; it should not be regarded as error-free. But the original one is definitely not complete.

Comment by dadadarren on On Arguments for God · 2020-11-14T15:23:10.788Z · LW · GW

I can't agree with arguments for God using Bayesian updates, e.g. the universe is fine-tuned for life. Why is the argument focused on life in the first place? Why not focus on something that doesn't exist and conclude the universe is not designed?

This argument only seems right because it looks like a self-analysis. We, humans, are life, so paying attention to it seems natural. But wouldn't the existence of oneself be a prerequisite of self-analysis? From our perspective, finding the universe compatible with our existence is guaranteed, just as I am guaranteed to find that I exist. The fact that any tiny fluctuation in history would have caused me not to be born does not mean my existence was chosen by a god.

Comment by dadadarren on Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes · 2020-11-13T06:09:38.595Z · LW · GW

My position has been the same. It starts from one assumption: treat perspectives as fundamental axioms. Then reasoning from different perspectives would not be mixed, and indexicals would be perspective-based, thus primitively identified. So we would not treat I or now as the result of some imaginary random sampling, which is what has led to the anthropic debate between SSA and SIA. It has been laid out in the first post you replied to.

You say I do not respond to issues you raised. It appears I simply cannot do so, because every time I choose to provide a detailed explanation I get comments such as “but you are not arguing, you are just asserting”, “that is not a substantive argument”, or “I deny that”. You just give judgments without getting into where you think the error is. And when you do try to get into the reasoning, you don’t even pay attention to what I wrote, such as saying I assumed Beauty knows she’s not the clone while I clearly stated the opposite.

If you don’t know what my theory would predict, then give me some scenarios or thought experiments and make me answer them. If you do not understand something I said, make me clarify. I would like to answer them because it helps to explain my position and outline our disagreement. (Btw, “the first-person perspective is primitively given” simply means you instinctively know which person you are, because you are experiencing everything from its viewpoint.) But by dismissing my efforts like the above, it just seems you are not interested in that. You just want to argue that your position is the better one.

Regarding the MWI post: if MWI does not require a perspective-independent reality, then what is the universal wave function describing? Sure, we generally accept this objective reality, and if an interpretation suggests otherwise it is usually deemed its burden to provide such a metaphysics (e.g. participatory realism in QBism). Sean Carroll regards this as a reason to favor or default to the MWI. I argued that if Thomas Nagel’s 3 Steps of how we get the idea of objectivity are correct, then perspective-independent objectivity itself is an assumption. The response I get is that you deny my argument. That’s it. What can I possibly say after that? You want me to understand your model of MWI. But when I followed up on your statement that some CI can be considered a special version of MWI and explained why I think that is not possible, I got no feedback from you...

You say I am not pointing to SIA supporters who hold different opinions from yours, because you said you don’t care. And I find it hard to believe when you say you do not know of any SIA supporters who disagree with your position. For starters, Katja, who brought up the SIA doomsday, actually argued SIA is preferable to SSA due to perspective disagreements. Michael Titelbaum, a thirder who gives many strong arguments against halfers, listed naive confirmation of MWI as a problem. Your position that SIA is the “natural choice” and paradox-free is a very strong claim. (If you are that confident, maybe make a post about it?) Regarding your framework for solving the paradoxes… what is the framework? You gave every single problem a specific explanation. The framework I see is that your version of SIA is problem-free, and counter-intuitive conclusions are always due to something else.

Granted, for open problems like SB or QM it is nearly impossible to convince each other. The productive thing to do would be to try to understand the counter-party’s logic and find out the root of the disagreement. That is why I ended our earlier discussion by making a list of our different positions. So while we may not agree with each other, at least we understand our different assumptions that lead to the disagreement. But looking back at your comments I just realize that is not what you are after. You are here to win. Well, I can’t keep up with this. So…you win. And as always, you will have the final word.

Comment by dadadarren on Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes · 2020-11-12T18:44:04.179Z · LW · GW

Ike, please don’t take this the wrong way, but with every exchange between us ending with your verdict, I find “attempt to explain, not persuade” very difficult to do.

In the above post, I explained the problem with SSA and SIA: they assume a specific imaginary selection process and then base their answers on that, whereas the first-person perspective is primitively given. You judged it as “not substantive”.

Previously, you said SIA is the “natural setup” and that with it you don’t see any paradoxes. I mentioned SIA also leads to counterintuitive conclusions such as the SIA Doomsday (filter ahead), the Presumptuous Philosopher, and naive confirmation of MWI and the Multiverse. You laid out how SIA could avoid the paradoxical conclusion in each case. I said not all supporters of SIA would agree with you. You said you don’t care. I said that each problem requiring a different explanation seems ad hoc, suggesting some underlying problem with SIA. You deemed them not ad hoc.

In another post, I argued that the MWI requires the basic assumption of a perspective-independent objective reality. Your entire response is “I deny that MWI requires that. In fact, all three of your postulates are incoherent, and I believe in a form of MWI that doesn't require any of them.” No explanations.

Here, you first didn’t notice Beauty is cloned on Tails, and now you say I claim Beauty knows she’s not the clone. I specifically said that through the repetitions the first-person I might not refer to the same physical copy (just as in a single experiment). I only claim that after waking up, from the first-person perspective it is clear which copy I am, which allows further first-person repetitions.

I can’t help but feel many of your comments are purposely argumentative and low effort. I personally don’t find them constructive.

Comment by dadadarren on Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes · 2020-11-11T17:23:00.613Z · LW · GW

Twice as many copies for tails means in the long run any given copy is likely to be near 2/3ds tails.

The keyword is “any given copy”. The debate between the common camps (SSA vs SIA) is over how the given copy is selected.

If at the end of the day a random copy is selected, then her expected relative frequency of Heads would be 1/3, because every time Tails occurs the number of copies doubles. In this model, a principle of indifference applies to all resulting copies.

Alternatively, a copy can be selected at the end of each iteration, so when the repetition finishes we end up with a single chosen one. Here the expected relative frequency of Heads for the selected copy would be 1/2: even though the number of Beauties is doubled after Tails, each of them has only half the chance of being chosen. In this model, the principle of indifference is applied only to copies that experienced the same tosses.

There is no disagreement so far. The debate is this: thirders would say the first selection model reflects Beauty’s probability while halfers may say the second selection model reflects Beauty’s probability. I am arguing they are both wrong.

Beauty does not perform any selection. The single copy is given by something else: her first-person perspective. The perspective is fundamental and cannot be logically reduced further. I am this copy, end of story, no further explanation. So the relative frequency should be based on repeating the experiment from her first-person perspective. I.e., imagine being in Beauty’s shoes: I fall asleep, I wake up, I fall asleep, I wake up… do this again and again. Here I would experience about equal numbers of Heads and Tails. Granted, when it is Tails, there would exist another copy (consistent with the fact that according to the first selection model the result would be 1/3), but I don’t even have to consider that, because from the first-person perspective it is primitively clear the other copy is not me.

In this first-person model, there is no principle of indifference among copies at all. Indexicals such as I are inherently the focus, while the indifferences mentioned earlier are based on selections from an outsider’s perspective (or a god’s eye view, or a view from nowhere, if one prefers to call it that). I argue that because they are based on different perspectives, this uniqueness and this indifference should not be mixed. Choose one perspective and stick with it through the entire analysis. Mixing them leads to treating the indexical as the result of some selection process (SSA and SIA) and causes anthropic paradoxes.
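For concreteness, here is a minimal simulation sketch of the two selection models above (it uses independent iterations rather than cumulative doubling, which does not change the limiting frequencies; the variable names are mine):

```python
import random

N = 100_000

# Model 1: pool every resulting copy across all iterations, then look at the
# fraction of copies whose toss was Heads ("a random copy from all copies").
heads_copies = tails_copies = 0
# Model 2: select one copy at the end of each iteration and keep only its record.
heads_chosen = 0

for _ in range(N):
    heads = random.random() < 0.5      # fair toss; Tails doubles the copies
    if heads:
        heads_copies += 1
        heads_chosen += 1              # the single copy is trivially the chosen one
    else:
        tails_copies += 2              # two copies exist, both experienced Tails
        # whichever copy is chosen, its recorded toss is still Tails

print("Model 1 (random copy from all copies):", heads_copies / (heads_copies + tails_copies))  # ~1/3
print("Model 2 (one chosen copy per iteration):", heads_chosen / N)                            # ~1/2
```

Repeating the experiment from the first-person perspective, as described above, just tracks one copy's own tosses, so its relative frequency of Heads also stays around 1/2.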

Comment by dadadarren on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-10T16:48:09.197Z · LW · GW

That's fair. I think it would be beyond my expertise to criticize other notions of correlational non-locality. I just wanted to point out that once perspectives are treated as fundamental, the non-locality cannot be formulated in QM. Somewhat similar to QBism's treatment of non-locality.

Comment by dadadarren on Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes · 2020-11-10T16:34:33.628Z · LW · GW

The first-person experience would give equal numbers of Heads vs Tails. When the toss results in Tails, there exists another copy besides myself, so there are twice as many copies experiencing Tails.

Comment by dadadarren on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-03T15:45:30.591Z · LW · GW

They can write their results on two pieces of paper and bring them together. But as long as we treat perspectives as fundamental, there still won’t be any non-locality, because the spins are perceived through actions upon a perspective center, and those actions cannot be spacelike separated. For example, if the two pieces of paper are sent to me, from my perspective all I can say is something like “since I got Alice’s result as spin up, it is safe to say I will get Bob’s result as spin down.” A non-local correlation like “Alice measured up, so Bob will measure down” is either a perspective switch between the two or a view-from-nowhere statement.

I agree that measurements are objective once made (in the sense of perspective-invariant not perspective-independent). But Alice and Bob are not analyzing the same set of measurements. Each of them from their respective perspective analyzes actions upon him/herself. Their deduction about the spins would have opposite causal arrows. But as long as we don’t switch perspectives, things would be local.

Comment by dadadarren on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-03T15:43:21.466Z · LW · GW

Fair enough.

Comment by dadadarren on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-02T19:52:48.806Z · LW · GW

If not employing a “view from nowhere”, then the non-locality has to be formulated with a perspective switch, e.g. Alice considers her own measurement and then imagines Bob’s measurement from his perspective (as opposed to finding out Bob’s measurement when it affects actions upon her). This non-locality does not apply to any single perspective.

Another way to look at it: QM is local. But the conceptual perspective switch is instantaneous/FTL. Thus the non-local statistics.

Comment by dadadarren on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-02T19:34:43.466Z · LW · GW

Because MWI needs perspective-independent objectivity, i.e. that it is meaningful to describe reality with “a view from nowhere”, as the universal wave function does. So it needs to accept the postulate in Step 3. CI could (and I argue should) do without it. No matter how commonly accepted it is, Step 3 is still an assumption.

Comment by dadadarren on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-02T19:23:38.974Z · LW · GW

Yes. The special role of observers does not mean Wigner-style consciousness-induces-collapse theories, which I am firmly against. Those could potentially lead to some bizarre conclusions, such as that a dog could collapse the wavefunction but a cat can’t. CI doesn’t say that (it doesn’t say much at all). For example, PBR suggests the perspective center (the observer) can be a person just as it can be a Geiger counter.

That being said, PBR can be regarded as a consciousness-related theory if one pushes it. Our natural first-person perspective can simply be treated as primitively given or fundamental. But if there has to be an explanation for this perspective, “why is it apparent that I am the perspective center?”, then the answer ought to be that the only subjectivity available is that of this particular human/physical system. By postulation, we can choose to reason from other perspectives (e.g. a Geiger counter’s). Now asking why the Geiger counter is the perspective center would potentially point to a subjectivity of its own. Granted, it does not suggest its subjective experience has anything in common with humans’. Yet that in essence would lead to panpsychism.

Of course, all this has nothing to do with actions upon the perspective center or anything physically detectable. So I don’t think it is relevant to the discussion of QM. Just some random thoughts.

Comment by dadadarren on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-02T18:28:39.753Z · LW · GW

I think you are right. Preferred basis is another problem.

Comment by dadadarren on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-02T17:44:46.158Z · LW · GW

I think that no matter how broad CI is, MWI cannot be regarded as a special case of it. The two are very distinct. The MWI is top-down: it describes the world with the universal wavefunction as objective reality, from a view from nowhere. So its problem is explaining our individual experiences and experimental results, e.g. the Born rule. The CI is bottom-up: it describes our experiments and measurements. So its problem is to provide a comprehensible explanation of reality. The unresolvable conflict between the two is the special role of the observer. CI recognizes it; MWI rejects it.

Comment by dadadarren on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-02T17:32:58.102Z · LW · GW

Maybe some variations of CI involve collapse, but strictly speaking, bare-bones CI does not require it. This is typically not discussed by supporters of other interpretations. Even Sean Carroll says that according to CI the world follows two distinct rules: one when you are not looking, and a different one when you are making a measurement (collapse). However, that is not necessarily CI. For example, as PBR shows, there can be only one rule in CI, which describes the behavior of actions. By rejecting “the view from nowhere”, there is no sense in examining how the world behaves when it is not affecting the perspective center. Here the wavefunction is epistemic rather than ontic. There is no physical collapse needed.

Comment by dadadarren on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-02T17:11:05.075Z · LW · GW

I definitely see the value of choosing the least complex theory, or Occam’s Razor. The problem is that it works really well in hindsight, but before the debate is settled it is hard to measure which theory is the simpler one.

The appeal of MWI is that it gives a very coherent explanation of quantum phenomena (maybe not the Born rule) without assuming anything extra. There is no additional collapse, and there are no extra hidden variables. The existence of parallel worlds, which many people find uncomfortable, is not its assumption but the result of rigorous logical deduction. I would be lying if I said I don’t see the simplicity and beauty of it. However, I just want to point out that MWI does need to assume perspective-independent objectivity, which CI could (and I think should) go without. If Thomas Nagel’s steps are valid, then it can be argued that CI is less complex. But again, complexity is hard to compare; I have no problem if others find MWI simpler.

I also think we do start with perspective-based reasoning. Even thinking about what happens within my brain requires “a view from nowhere” (assuming perspective-independent reality, step 3), or an outsider’s perspective (step 2).

Comment by dadadarren on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-01T23:32:12.674Z · LW · GW

MWI suggests the wavefunction describes the objective world, or that the wavefunction is the objective world. That in itself assumes objectivity is perspective-independent, i.e. that we can think about reality with a “view from nowhere”. I am arguing that this is an assumption, and that, epistemically speaking, perspectives and actions are more fundamental than reality as an absolute conception.

Comment by dadadarren on Why I Prefer the Copenhagen Interpretation(s) · 2020-11-01T22:14:13.731Z · LW · GW

Yes, EPR does not suggest non-locality as FTL signalling. I was saying the correlational non-locality requires first accepting absolute (view from nowhere) objectivity, i.e. that it is meaningful to think about Alice’s measurement and how it correlates with the spacelike-separated measurement of Bob. This reasoning directly examines the perspective-independent reality. If we reject Step 3, thereby making perspectives fundamental, then this correlational non-locality could not exist. For example, from Alice’s perspective, Bob’s measurement can only affect actions upon her when its light cone reaches her, which would definitely happen within the light cone of her own measurement, making any correlation local. I’m under the impression that QBism argues against quantum non-locality with similar logic.

Comment by dadadarren on Leslie's Firing Squad Can't Save The Fine-Tuning Argument · 2020-09-11T14:34:27.605Z · LW · GW

Well, in that case, our arguments actually have a lot in common. My position regarding anthropics is that perspectives are axiomatic in reasoning. So a valid argument/notion must be formulated from one single perspective. This postulate refutes the probability shift in fine-tuning. It also invalidates the notion of self-locating probabilities like in the case of the doomsday argument.

Comment by dadadarren on Leslie's Firing Squad Can't Save The Fine-Tuning Argument · 2020-09-10T20:23:55.006Z · LW · GW

Very interesting...

I think proponents of the fine-tuning argument for design are saying there doesn't need to be a reliable way to assign a prior. You can assign any prior you deem reasonable. Nonetheless, after considering our seemingly unlikely existence, the probability would greatly shift towards the teleological conclusion that the universe is designed for life.
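As a minimal sketch of the update they have in mind (the numbers here are purely hypothetical, chosen only to illustrate the structure of the claim):

```python
# Hypothetical numbers; the proponents' point is that almost any prior gets swamped.
p_design = 0.01                   # whatever prior you deem reasonable
p_finetuned_given_design = 1.0    # a designer would pick life-permitting values
p_finetuned_given_chance = 1e-6   # "seemingly unlikely" under chance

posterior = (p_finetuned_given_design * p_design) / (
    p_finetuned_given_design * p_design
    + p_finetuned_given_chance * (1 - p_design)
)
print(posterior)   # ~0.9999: the claimed shift toward "designed" dominates the prior
```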

So unless you are willing to commit not only to there being no reliable way to assign a prior, but also to the claim that assigning a probability in this situation is invalid in itself, it doesn't counter their argument per se. It would just be pointing out that even with the probability shift we should still be skeptical that the universe is designed, due to the unknowns (but less skeptical than before considering fine-tuning). Are you saying what I think you are saying?

Just to be clear, I am not against the claim that probability is invalid in this situation. In fact, I probably support it more than you would like. I just choose to counter the proposed probability update because that is the more direct approach.

Comment by dadadarren on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-08T16:26:35.289Z · LW · GW

At this point, I think it might be more productive to list our differences rather than try to convince each other.

1. I say a probability that cannot be formulated from a single perspective is invalid. You say it doesn't matter.

BTW, you said Questions 2 and 3 are no different if both labeling outcomes are actualized in two parallel universes. Yes, if that is the case Question 2 and Question 3 are the same: they are both self-locating probabilities and both invalid according to PBA. However, what you are describing is essentially the MWI. Given that I have already argued against the MWI's account of the origin of probability, that is not a counter-argument. It is just a restatement of what I have said.

2. I say SIA is based on an imagined sampling from a reference class. You say it is not.

Here I am a little upset about the selective quoting of the LessWrong wiki to fit your argument. Why not quote the definition of SIA? "All other things equal, an observer should reason as if they are randomly selected from the set of all possible observers." The set of all possible observers being selected from is the reference class. Also, you have misinterpreted the part you quoted: "SIA is not dependent on the choice of reference class, as long as the reference class is large enough to contain all subjectively indistinguishable observers." It is saying that the choice of reference class, under some conditions, does not change the numerical value of the probability, because the effect of the choice cancels out in problems such as the sleeping beauty problem and the doomsday argument. It is not saying there is no reference class to begin with. Also, just a suggestion: given the ongoing nature of the debate over anthropic principles and the numerous paradoxes, try not to take a single source, even the LessWrong wiki, as settled fact.

3. You think SIA solves all anthropic paradoxes. I think not.

To name a few problems that I can think of right away: Does every single observation we make confirm the MWI? Refer to the debate between Darren Bradley and Alastair Wilson if you are unfamiliar with it. Does my simple existence confirm the existence of the multiverse? Applying SIA to the simulation argument: wouldn't my existence alone confirm there are numerous ancestor simulations? In that case, wouldn't "I" be almost certainly simulated? Contrary to the simulation argument, applying SIA would suggest the great filter is most likely ahead of us, so we should be pessimistic about reaching a technologically mature state as described by the simulation argument. In Dr. Evil and Dub, what conclusion would SIA reach? Is the brain arms race argument correct? Does my existence already confirm the arms race has already happened? Can all the above questions be satisfactorily answered using one consistent choice of reference class?

Based on the opinions you previously gave on some of the paradoxes, it seems you think they have idiosyncratic explanations. That is not wrong per se, but it does seem ad hoc. And if the idiosyncratic reasons are so important, are the paradoxes really solved by SIA or by those individual explanations?

Comment by dadadarren on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-07T21:51:19.247Z · LW · GW

I don't think I can give further explanations other than the ones already said. But I will try.

As for the difference between Questions 2 and 3, the fundamental difference is that for Question 3 the probability cannot be formulated from a single perspective: it requires the assumption of an innate reference class for the indexical "I". Both points distinguish it from Question 2, which is about a random/unknown experiment. Again, the argument has nothing to do with whether Omega can or cannot give definitive and distinguishable answers to either of them.

Bayesianism is perfectly capable of assigning probabilities here. You haven't actually argued for this claim, you're just asserting it.

I only asserted that perspective is an important starting point of reasoning, like an axiom. Arguments that cannot be formulated from one consistent perspective are therefore invalid. That includes SIA, SSA, FNC, any notion of a reference class for indexicals, and of course self-locating probabilities. I have also shown why self-locating probabilities cannot be formulated with a frequentist model. The same assertion about the axiomatic importance of perspective also leads to other conclusions, such as rational perspective disagreement. Whether or not my position is convincing is open to debate. But I feel it is unfair to say I just asserted the invalidity of self-locating probabilities without arguing for it.

You can, of course, do this for any question. You can refuse to make any predictions at all. What's unclear is why you're ok with predictions in general but not when there exist multiple copies of you.

I am not refusing to make a prediction. I am arguing that in these cases there is no rational way to make a prediction. And keep in mind, the nature of probability in MWI is a major ongoing debate. That a probability can come out of a completely known experiment with a deterministic outcome is not easily justified. So I think the validity of self-locating probabilities should at least be debatable. Therefore I do not think your assertion that "Bayesianism is perfectly capable of assigning probabilities here" can be regarded as an obvious truth.

I don't see any paradoxes. SIA is the natural setup.

I think this is our fundamental disagreement. I do not think all anthropic paradoxes are settled by SIA. Nor do I think SIA is "natural", whatever that means. And I am pretty sure there will be supporters of SIA who are unhappy with your definition of the reference class (or lack thereof).

Comment by dadadarren on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-07T19:41:39.861Z · LW · GW

I am not sure about "expectations" in this context. If you mean the mathematical expectation, i.e. the arithmetic mean over a large number of independent repetitions, then as I have demonstrated with the frequentist argument, the relative frequency does not converge on any value, so the expectation does not exist for self-locating probabilities. If the "expectation" here just means that the two alternatives, being the original vs the clone, are meaningful facts, then I agree with this assessment. In fact, I argue the facts about the perspective center are so meaningful that they are axiomatic. So there is no rational way to assign probabilities to propositions regarding them. My argument against self-locating probability does not depend on the observer never being able to find out the answer. It does not hinge on the lack of a record or of differentiable observables. The answer (e.g. "original" vs "clone") could be on the back of a piece of paper lying right in front of me. There is still no way to make a probability out of what it says.

If you are familiar with the Many-Worlds Interpretation (MWI), its proponents often use self-locating probability to explain the empirical randomness of quantum mechanics. The argument typically starts with experiments with highly symmetrical outcomes (e.g. measuring the spin of an electron along a perpendicular axis). Then, after the experiment but before observing the outcome, it is argued that the probability of "me" being in a particular branch must be 1/2. However, one counter-argument is to question the validity of this claim. Why does there have to be a probability at all? Why can't I just say "I don't know"? Sean Carroll (one of the most famous supporters of MWI) calls it "the simple-minded argument, yet surprisingly hard to counter" (not verbatim). In the MWI construct, the experiment outcome is definitively observable. Yet that does not automatically justify assigning a probability to it.

PBA starts with the plausible assumption of the importance of perspectives; the invalidity of self-locating probability is one of its conclusions. I think it is much less ad hoc than simply making the judgment call of saying there should or shouldn't be a probability in those situations. If we say there should be a probability, then the question becomes how: SSA or SIA, what counts as an observer, which reference class to use in which case, etc. There's one judgment call after another. Paradoxes ensue.

Regarding the individual analysis of the paradoxes, I understand your position. If you do not agree with the invalidity of self-locating probabilities, you will not agree with the takedowns. That is the nature of PBA. There is no flexibility in the argument, such as the choice of reference class, as there is in other schools of thought. Yet I would consider that an advantage rather than a weakness.

Comment by dadadarren on Hello ordinary folks, I'm the Chosen One · 2020-09-06T02:38:39.298Z · LW · GW

Based on the reply, I am not very certain of your exact position. I suspect it is implying the multiverse response to fine-tuning. It suggests the reason for the observed fine-tuning is that there are many universes in total, and only the ones compatible with life can give rise to observers pondering the parameters. Therefore finding ourselves in a universe compatible with life is not a surprise, i.e. it is not statistically incredible because of the huge number of universes out there.

I have to say that answer is very problematic. It treats "I" or "us" as the outcome of a sampling process subject to survivorship bias, which interprets the WAP as an Observer Selection Effect (OSE). This conceptual selection has to be done from a god's eye perspective. It makes the same mistake as the fine-tuning argument by mixing first-person reasoning with objective reasoning.

In my opinion, this actually justifies the fine-tuning argument. Furthermore, it hijacks the anthropic rebuttal (which should be a simple tautology based on consistent perspective thinking). It also leaves the door open for rebuttals such as Leslie's firing squad and the fine-tuned multiverse.

In a sense, the fine-tuning argument is still an ongoing debate because anthropic reasoning is currently inadequate. It is filled with paradoxes and controversies. All popular assumptions (SSA, SIA) treat indexicals as the outcome of some sampling process, implying the OSE. My Perspective-Based Argument (PBA) is an attempt to change that.

Comment by dadadarren on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-04T14:54:55.386Z · LW · GW

"The drivers in the next lane are going faster" is true both from a driver's first-person view and from a god's eye view. However, none of those two are self-locating probabilities. This is explained by PBA's position on self-locating probabilities, by the link mentioned above.

The lane assignment can be regarded as an experiment. The lane with more vehicles assigned to it moves slower. Here, from a god's eye view, if a random car is selected then the probability that it is from the slow lane is higher. From a driver's first-person view, "I" and the other drivers are in symmetrical positions in this lane-assignment experiment, so the probability of me being in the slow lane is higher. According to PBA, both probabilities are valid. However, they are not the same concept, even though they have the same value. (This point has been illustrated by Question 1 and Question 2 in the link above.)
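A tiny numerical sketch of that point (the counts are made up, only to show the two readings of the same number):

```python
# Made-up counts for the lane-assignment experiment.
slow_lane_cars, fast_lane_cars = 70, 30

# God's-eye view: probability that a randomly selected car is in the slow lane.
p_random_car_in_slow_lane = slow_lane_cars / (slow_lane_cars + fast_lane_cars)

# Driver's first-person view: "I" am in a symmetrical position with every other
# driver in the lane assignment, so the same number is read as "the probability
# that I am in the slow lane".
p_i_am_in_slow_lane = p_random_car_in_slow_lane

print(p_random_car_in_slow_lane, p_i_am_in_slow_lane)   # 0.7 and 0.7
```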

However, neither of them is a self-locating probability in the anthropic context. Some anthropic reasoning suggests there is an innate reference class for indexicals like "I". E.g. SSA assumes "I" can be considered a randomly chosen human from all humans. This requires both the first-person view to identify "I" and a god's eye view to do the choosing. It does not depend on any experiment. Compare this with the driver's first-person view above: there the reference class is all drivers on the road, and it is defined by the lane-assignment experiment. It does not even matter whether the other drivers are humans or not. They could all be pigs and they would still be in symmetrical positions with me. PBA argues that the self-locating probabilities are invalid. (This point has been demonstrated by Question 3 in the link above.)

Since we are discussing Nick Bostrom's position: he made the explicit statement in "The Mysteries of Self-Locating Belief and Anthropic Reasoning" that an experiment is unnecessary in defining the reference class; we can always treat "I" as the result of an imaginary sampling process. This is in direct conflict with my PBA. According to PBA, anthropics is not about an observation selection effect, but about recognizing the perspective of reasoning.

Lastly, you are clearly interested in this topic. I just find that the questions you raised have already been covered by the argument presented on my website, so I can only recommend you give it a read, because this question-and-answer model of communication is disproportionately effort-heavy on my part. Cheers.

Comment by dadadarren on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-02T20:40:12.635Z · LW · GW

It may seem very natural to say "I" am a randomly chosen observer (from some proposed reference class). But keep in mind that is an assumption. PBA suggests that assumption is wrong. And if we reason from one consistent perspective, such assumptions are unnecessary.

Comment by dadadarren on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-02T18:12:51.356Z · LW · GW

The answer is simple yet unsatisfying. In those situations, assuming the objective is simple self-interest, there is no rational choice to be made.

If we assume the objective is the combined interest of a proposed reference class, and we further assume every single agent in the reference class follows the same decision theory, then there would be a rational choice. However, that does not correspond to the self-locating probability. It corresponds to a probability that can be consistently formulated from the god's eye view. E.g. the probability that a randomly chosen observer is simulated rather than the probability that "I" am simulated. Those two are distinctly different unless we mix the perspectives and accept some kind of anthropic assumption such as SSA or SIA.

Comment by dadadarren on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-02T15:26:09.112Z · LW · GW

First of all, I am pessimistic about finding evidence of the multiverse. That being said, if we take the multiverse as given, the WAP is still not the complete picture, because there are two separate questions here and the WAP answers only one of them. Let me show this with an example.

Say the subject is my parents' marriage. There are two ways to think about it. One way is that I take my first-person view and ask a perspective-dependent question: "why (do I find that) they married each other?" Here a WAP-type answer is all that's needed, because if they hadn't, I wouldn't exist. However, if the question is formulated impartially/objectively (e.g. from a god's eye view), "why did they marry each other?", then it calls for an impartial answer, maybe a causal model. The WAP doesn't apply here. The key is to keep reasoning from different perspectives separate. Back to the fundamental parameters: the WAP explains why we find the parameters compatible with our existence. Yet that is not the scientific (impartial) explanation for their values. (If the multiverse is confirmed, then the scientific answer could be that they're just random.) If we do not recognize the importance of perspective to reasoning, we mix the above two questions and treat them as one problem. By doing so, teleological conclusions can always be made: instead of a fine-tuned universe, they would just argue for a fine-tuned multiverse, which has already been done by intelligent-design proponents, IIRC.

Comment by dadadarren on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-02T13:18:30.351Z · LW · GW

I think whether SSA suggests life is more likely to arise on other planets depends on the reference class chosen. For example, if the reference class is all observers in the multiverse, then I am more likely to be in a populous universe, i.e. we should expect life to be more common than the observed evidence suggests.

According to PBA, analyzing the fundamental parameters of the universe based on their compatibility with life is an egocentric act. We pay attention to life because that is what we are. This reasoning is perspective-dependent. If you ask a perspective-based question such as "why is everything compatible with my existence?" then you must accept a perspective-based answer: "because you can only ever find yourself existing." This is essentially the weak anthropic principle (WAP).

On the other hand, if we want a scientific explanation of the fundamental parameters, we must reason objectively/impartially. That means giving up the self-attention rooted in our first-person perspective. We must accept that life is not inherently logically significant to the universe, and recognize that the WAP is not a scientific explanation of the fundamental parameters.

The fine-tuning argument is false because it asks the perspective-based question "why is everything compatible with my existence?" and then demands an impartial/objective answer, effectively assuming we are logically significant to the universe. That is why it always ends up with teleological conclusions (the universe is designed to support life, etc.).

My complete argument, including a rebuttal to Leslie's firing squad can be found here: https://www.sleepingbeautyproblem.com/about-fine-tuned-universe/

Comment by dadadarren on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-02T12:17:56.416Z · LW · GW

There is no restriction on which view to take. You can choose to reason from your natural first-person perspective. You can reason from other people's (or other things') perspectives. We can even imagine a perspective such as a god's eye view and reason from there. What PBA argues is that once you choose a perspective/view, you should stick with it for the entire analysis. It's like an axiomatic system: we can't derive that a triangle's internal angles sum to 180 degrees in Euclidean geometry and then use that result in elliptic geometry.

Self-locating probabilities are invalid because they need both the first-person view AND the god’s eye view to formulate.

Comment by dadadarren on Anthropic Reasoning and Perspective-Based Arguments · 2020-09-01T23:39:54.719Z · LW · GW

Sure. In the Simulation Argument, the probability of me being simulated is a self-locating probability. Self-locating probabilities are invalid concepts according to PBA as their formulations require reasoning from multiple perspectives. The complete argument (with a thought experiment) against self-locating probability can be found on this page. https://www.sleepingbeautyproblem.com/part-3-self-locating-probability/

Specifically, the simulation argument treats the fraction of simulated observers as the probability that "I" am simulated. It considers the indexical "I" to be a random sample from the implied reference class (the reference class includes all observers with human-like experience, simulated AND base-level organic). It needs a god's eye view to be indifferent to all observers while also needing my first-person view to identify and focus the analysis on "I". Such a perspective mix is invalid according to PBA.

Comment by dadadarren on Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? · 2019-08-29T22:17:06.298Z · LW · GW

Probability should not depend on the type of rewards. Of course, a complicated reward system could cause decision making to deviate from simple probability concerns, but the probability itself would not be affected. If it helps, consider a simple reward system in which each correct answer is awarded one util. As a participant, you take part in the same toss-and-clone experiment every day. So when you wake up the following day you do not know if you are the same physical person as the day before, and you guess again for the same reward. Let your utils be independent of possible clones. E.g. if for each correct guess you are rewarded with a coin, then the cloning would apply to the coins in your pocket too, such that my cumulative gain is only affected by my own past guesses.

Why would the extent of care for other clones matter? My answer and the other clones' utils are causally independent. The other clone's utility depends on his answer. If you are talking about the possible future fissions of me, it is still irrelevant, since my decision now would affect the two equally.

Surely, if "the probability distribution of me being the original or the clone" exists then it would be simple to devise a guessing strategy to maximize my gains? But somehow this strategy is elusive. Instead, the purposed self-locating probability could only help to give strategies to maximize the collective (or average) utilities of all clones even though some are clearly not me as the probability states. And that is assuming all clones make exactly the same decision as I do. If everyone must make the same decision (so there is only one decision making) and only the collective utility is considered then how is it still guided by a probability about the indexical me? That decision could be derived from the probability distribution of a randomly selected participant. Assuming I am a randomly selected participant is entirely unsubstantiated, and unnecessary to decision making as it brings nothing to the table.

Comment by dadadarren on Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? · 2019-08-29T15:31:39.033Z · LW · GW

Sorry for abandoning the discussion and replying so late. I think even if the sole purpose of probability is to guide decision making, the problem with these self-locating probabilities remains. In the cloning example, suppose we give participants a reward for every correct guess of whether they are the original or the clone. "The probability distribution of me being the original or the clone" doesn't help us make any decision. One may say these probabilities guide us to make decisions that maximize the overall benefit of all participants combined. However, such decisions are guided by "the probability distribution of a randomly selected participant being the original or the clone", without the use of indexicals. And this proposed use of self-locating probability is based on the assumption that I am a randomly selected observer from a certain reference class. In effect, an unsupported assumption is added, yet it doesn't allow us to make any new decisions. From a decision-making point of view, the entire purpose of this assumption seems to be finding a use for these self-locating probabilities.

"The probability distribution of me being the original or the clone" would be useful to decision making if it guides us on how to maximize the benefit of me specifically as stated in the probability distribution. But such a strategy do not exist. If one holds the view that other than decision making probability serves no purpose, then he should have no problem accepting self-locating probabilities do not exist since they do not have any purpose.

Comment by dadadarren on Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? · 2019-08-04T04:01:29.775Z · LW · GW

I can relate to that. In fact, that is the most common criticism I have faced. After all, it is quite counter-intuitive.

I want to point to the paradox regarding the probability of me being a Boltzmann Brain. The probability of "this awakening being the first" is of the same format: the probability of an apparent indexical being a member of some default reference class. There is no experiment deciding which brain is me just as there is no experiment determining which day is today. There is no reason to apply a principle of indifference among the members of the default reference class. Yet that is essential to come up with a probability.

Of course one can define the experience. But I am not arguing that "today is Monday" is a nonsensical statement, only that there is no probability distribution over it. Yes, we can even wager on it. But we do not need probability to wager; probability is, however, needed to come up with a betting strategy. Imagine you are a participant in the cloning-with-a-friend example who is repeating the experiment a large number of times. You enter wagers about whether you are the original or the clone after each wake-up. Now there exists a strategy to maximize the total gain of all participants, or a strategy to maximize the average gain of all participants (assuming all participants act the same way as I do). However, there is no strategy to simply maximize the gain of the self-apparent me. That is a huge red flag for me.

Of course one may argue there is no such strategy because this beneficiary "me" is undefined (it's just an indexical after all). But then would it be consistent to say the related probability exists and is well-defined?

Comment by dadadarren on Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? · 2019-08-04T03:28:59.481Z · LW · GW

Obviously I don't agree but I respect your judgment.

I agree with your first example. It is equivalent to the cloning-with-a-friend experiment. (I'm sorry, but I'm so used to the Heads: 1 awakening, Tails: 2 awakenings setup, as most of the literature sets it up that way. I know it is reversed in your example, but for the sake of consistency I will still discuss it this way. Please forgive my stubbornness.) In that setup Alice and Bob would come into disagreement as long as Alice is a halfer, no matter her reasons. I can understand if you treat this as evidence for halferism being wrong. At the end of the day, I have to admit this is very peculiar. Nonetheless, what I did was try to explain why this disagreement is valid. The reason I used a cloning example instead of the original memory-wipe example is that it makes the exposition much easier. But I would like to take this opportunity to apply the same argument to explain the disagreement in a memory-wipe setup.

Frequentist reason: repeating the experiment from a participant's perspective is different from repeating it from an observer's perspective. While this is much easier to show in the cloning example, it is messier for memory wipes. The SBP essentially, in the case of Tails, divides the total duration of the experiment (2 days) into 2 halves with a memory wipe, so there are 2 subjectively indistinguishable instances. For Alice, repetitions must have the same structure, yet prior iterations should not affect the later ones, so each subsequent experiment must be shorter in duration. If the first experiment takes 2 days, the second can only take 1 day, the third half a day, the fourth a quarter of a day, etc. This way Alice can repeat the experiment as many times as needed, and the relative frequency would approach 1/2. For Bob, repeating it always means randomly waking up at a potential awakening of Alice. The structure of the repetition is irrelevant for him. His relative frequency of Heads is 1/3, given that he wakes up with Alice.
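Here is a minimal simulation sketch of Bob's side of this, assuming he is awakened on one of the two days chosen uniformly at random (Alice is awake only on day 1 on Heads, and on both days on Tails):

```python
import random

N = 200_000
meetings = 0
heads_meetings = 0
for _ in range(N):
    heads = random.random() < 0.5
    alice_awake_days = {1} if heads else {1, 2}
    bob_day = random.choice([1, 2])    # Bob wakes on a random one of the 2 days
    if bob_day in alice_awake_days:    # they meet
        meetings += 1
        if heads:
            heads_meetings += 1

# Bob's relative frequency of Heads, conditional on meeting Alice:
print(heads_meetings / meetings)       # ~1/3
```

Alice's first-person repetitions, by contrast, are just one fair toss after another, so her relative frequency of Heads stays around 1/2.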

Bayesian reason: they interpret the meeting differently. To Bob, the meeting means one of Alice's awakening(s) is on the day Bob is awake. To Alice, the meeting means this specific awakening is on the day Bob is awake. Alice is able to single out this specific awakening from any possible others because it is her perspective center. It is inherently special to her.

Regarding the second experiment: I am aware of this type of argument. Jacob Ross calls them "hypothetical priors arguments". Variations of it have been proposed by Dorr 2002, Arntzenius 2003, and Horgan 2004, 2008. Basically, it adds back the missing identical awakening for Heads, and sometime after waking up that added awakening is ruled out by some information. Since the four possible awakenings are clearly symmetrical, each of them must have a probability of 1/4, and removing a possibility calls for a Bayesian update that causes the probability of Heads to drop to 1/3. This argument was not successful in convincing the opposition because it relies on its equivalence to the original Sleeping Beauty Problem. This equivalence, however, is largely intuition-based. So halfers would just say the two problems are different and not comparable, and thirders would disagree. There would be some back and forth between the two camps, but not many valuable discussions to be had. That explains why this argument is typically seen in earlier papers. Nonetheless, I want to present my reasons why they are not equivalent. The first-person identification of today or this awakening is based on the perspective center, which is based on its perception and subjective experience. If there is no waking up, then there is no first-person perspective to begin with. That is vastly different from waking up first and then rejecting this awakening as a possibility. Also, as discussed in the main post, there is no probability distribution over an indexical being a member of a default reference class. So I'm against assigning 1/4 to the four events and the subsequent conditional update.

I am grateful for your reply. I'm not naive enough to think I can change your mind. Yet I appreciate the opportunity you gave me to present some ideas that don't fit into the flow of the main post, especially the messy explanation of the disagreement in memory-wipe experiments.

Comment by dadadarren on Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? · 2019-08-02T16:24:54.109Z · LW · GW

Not trying to put it in any negative way, but I honestly find the reply vague and hard to respond to. I get a general impression of what you are trying to say but feel I'm guessing. Do you disagree with me interpreting probability as relative frequencies in the disagreement example? Or do you think there has to be a defined cost/reward setup, making it a decision-making problem, in order to talk about probabilities in anthropics? Or maybe something else?

Regarding different answers to different questions about the various instances of me: again, I'm not very sure what the argument is or how it is related to anthropics. Are you trying to say the disagreement on probability is due to different interpretations of the question? Also, I want to point out that not all anthropic problems are related to different instances of an observer. Take the Doomsday Argument or the cloning experiment, for example: the paradox arises at the agent level; no special consideration of time/instances is needed.

Comment by dadadarren on Perspective Reasoning and the Sleeping Beauty Problem · 2019-05-20T19:45:49.338Z · LW · GW

Hello Marc,

Appreciate the reply. Allow me to explain why the perceived ambiguity is, in my opinion, caused by a mixing of perspectives. When we try to analyze an anthropic-related problem, such as sleeping beauty or an equivalent cloning experiment, there are actually two schools of thought. One is to reason from the participant's first-person perspective. The other is to reason purely objectively, as an impartial observer (third-person perspective).

Reasoning from the participant's first-person perspective makes concepts such as I, now or here primitively understood. These indexical concepts inherently stand out to the first person as they are defined by their subjective immediacy to the perspective center. (An everyday example: an identical twin can inherently distinguish himself from his brother. He can do this without knowing any difference between them, which is impossible for an impartial outsider.) At the same time, reasoning from the first-person perspective affirms self-existence/consciousness. It is necessarily true that from the first-person perspective "I am, here, now" (cogito ergo sum).

If we reason objectively, then no agent, time or place is inherently unique. The reasoning is uncentered. So technically, from the third-person perspective there is no I, now or here. E.g. to objectively specify an agent, instead of using the indexical I, some feature such as the proper name dadadarren has to be used. (In everyday language we tend to use indexical terms even when trying to reason objectively. In such instances words such as I should not be regarded as indexicals pointing to a perspective center but as conventional shorthand representing various objective features defining an agent.) From this objective third-person perspective, no agent's existence/consciousness is guaranteed. It is logical to ask about any agent's probability of existence. Furthermore, from this perspective all agents/times are logical equals, so a principle of indifference can be applied and they can belong to the same reference class.

Now if we ask for the probability distribution of today being Monday or Tuesday, or of I being the original or the clone, problems arise. On one hand, to use the indexicals I and now to specify an agent or date requires that we take the participant's first-person perspective. They cannot be used to specify a particular agent or time if we are reasoning objectively as an impartial observer, since these indexicals have no objective significance. This is why using these terms in anthropic problems seems ambiguous. On the other hand, if we reason from the first-person perspective of a participant, yes, the meanings of I and now are clear. Yet a sample space containing all possible agents (the original and the clone) or all times (Monday and Tuesday) cannot be constructed, because from a participant's first-person perspective the perspective centers are not logical equals. A principle of indifference between the two days was thrown out of the window the minute a particular day was regarded as inherently special, such that it can be specified not by its objective difference from other days but by a simple utterance of the word "today". Therefore, even though it is correct to say that I am either the original or the clone, it is impossible to put a probability on either alternative. This is also backed by frequentist analysis, which, btw, gives a long-run frequency of Heads of 1/2 if a consistent perspective is used.

When I ask my coworker "what day is it today?", the today is still primitively defined from the first-person perspective by both me and my coworker. I don't think it is a shared concept. It is just that the time our communication takes is minuscule compared to the duration of interest (a day), so that one's perspective center can be used to approximate the other's without causing problems in communication. If instead you find a message in a bottle on the beach that says "what day is it today?", then the question becomes impossible to answer, since you would have no idea what this today refers to.

Since my entire position is that first-person (centered) and third-person (uncentered) reasoning should not mix, I cannot agree with your mirror argument. Although I would say that due to its simplicity (this is a compliment) thirders would have a hard time countering it. Yet the current reality of the Sleeping Beauty discussion is less about finding mistakes in each other's arguments and more about whose position has fewer undesirable consequences. So I won't be surprised if the mirror argument fails to convince many thirders. In your paper you seem to agree with David Lewis that if the coin toss has already happened and beauty is told it is Monday, then the probability of Heads should rightfully be 2/3; only when the coin toss is yet to happen should the probability be 1/2. That is, the time of the coin toss has a material influence on the probability. If this is the correct understanding then I feel the argument is quite problematic. I don't think there is any logical significance to the physical coin toss. You argued the toss is a chancy event whose timing could affect probability. Yet it can also be deemed a deterministic event: as long as one has detailed information about the relevant variables, such as the magnitude and direction of the force, the air resistance, the shape of the impact surface, etc., the result can be readily predicted. The randomness could be interpreted as entirely due to a lack of information. Whether it is a truly random event or a pseudo-random one shouldn't affect the probability calculation, yet this version of the double-halving argument depends on that.

Comment by dadadarren on Perspective Reasoning and the Sleeping Beauty Problem · 2019-05-03T20:22:51.916Z · LW · GW

Thank you for the reply. Let's take a step back. Thirders argue there is new information when waking up in the sleeping beauty problem; that information is "I am awake today". Similarly, for the cloning problem thirders argue the new information is "I exist". Both are interpreted as evidence favouring more observers, even without explaining who this specific I is or what today's date is. They are treated as primitively understood concepts that need no explanation. By this standard I do not need to explain who this specific me is in the question "what's the probability of me being a clone?". If you think that question is invalid/ambiguous because by using me I did not specify a particular clone, then by the same logic the thirders' argument for new information also fails. When waking up in the cloning experiment, instead of saying a specific I exists, thirders are only saying an unspecified copy exists (i.e. at least one copy exists). That is not new information.

The reason the thirder argument seems convincing is that, when reasoning from an observer's first-person perspective, indexical concepts such as I, now, or here are indeed primitively understood. E.g. I do not need to know what I look like to tell that this is me. I do not need to know the exact date and hour to tell when now (and consequently today) is. They need no further explanation because they are primitively identified based on their immediacy to subjective experience. In other words, the meaning of these indexicals depends on the thinker's perspective center. If we reason purely objectively (from a third-person perspective) then these indexicals are meaningless.

So in a sense you are correct. The questions are invalid because the indexical me does not specify a particular copy: a third-person question is being asked about someone who is only identifiable from the first-person perspective. To make the question valid, an individual must be specified objectively instead of with the indexical me, e.g. a randomly selected person.

Comment by dadadarren on [Answer] Why wasn't science invented in China? · 2019-04-25T13:50:07.530Z · LW · GW

I am by no means an expert in this. My theory is that effective writing in general was a way to signal one's intelligence in most medieval societies, especially if one could read and write a form of ancient text. In Western Europe this was achieved by directly using an old language, Latin: proficiency in a different language is by itself enough to indicate intelligence. The Chinese, however, have to an extent been using the same language (or at least the same writing) throughout their history. For example, a typical grade 8 Chinese language textbook includes many old passages, some of which were written 18 centuries ago. Being able to write plainly in an everyday language is not hard, so Chinese scholars had a greater urge to show their status by using poetic and archaic expressions, very often at the expense of clarity.

Comment by dadadarren on [Answer] Why wasn't science invented in China? · 2019-04-25T03:20:31.358Z · LW · GW

Ahh, the famous Lun Yu. It is full of expressions whose direct translation gives you a headache. To me the most famous example is "民可使由之不可使知之". Due to the lack of punctuation it can be read in two different ways:

1: 民可使由之,不可使知之:common people shall be commanded, (but) not enlightened.

2: 民可,使由之。不可,使知之。:(If) common people are well educated, let them act on their own; if not, enlighten them. A drastically different political ideal.

Comment by dadadarren on [Answer] Why wasn't science invented in China? · 2019-04-24T15:59:09.941Z · LW · GW

As a Chinese person I want to contribute some thoughts on this topic.

One thing I want to mention is the difference in language. Classical Chinese is a language that is extremely difficult to master; it literally takes decades of effort to be able to write a decent piece. It is hard not because of complicated grammar or complex sentence structure, but because it focuses on poetic expressions and scholarly idioms. The language is very enjoyable to read and relatable when used to express emotions and ideas, but it is quite cumbersome for expressing precise logic and definitions. Yet, at least before the New Culture Movement around 1916, it was generally held that anything worth putting into writing should be written in Classical Chinese. This severely limited the participation of the general populace. Even if someone was trained enough to write about scientific topics in Classical Chinese, such writing was unlikely to be regarded as a masterful piece or to gather much of an audience, just as a poorly written piece posted on LessWrong is more likely to be skipped regardless of the content it expresses.

Comment by dadadarren on Where to Draw the Boundaries? · 2019-04-14T02:49:15.230Z · LW · GW

Interesting article. I dare not say I understand it fully. But in arguing that some categories are more or less wrong than others, is it fair to say you are arguing against the ugly duckling theorem?

Comment by dadadarren on Would solving logical counterfactuals solve anthropics? · 2019-04-07T15:02:02.197Z · LW · GW

Not sure if I'm following. I don't see in any way how the original is privileged over its copies. In each repetition, after waking up I could be the newly created clone, just like in the first experiment. The only privileged concepts are those due to my first-person perspective, such as here, now, this, or the "me" based on my subjective experience.

Comment by dadadarren on Would solving logical counterfactuals solve anthropics? · 2019-04-06T22:56:27.800Z · LW · GW

In this case beauty still shouldn't use the reference-class logic to assign a probability of 0.5. I argue that for the sleeping beauty problem the probability of "today" being Monday/Tuesday is an incoherent concept, so it does not exist. To ask such a question we must specify a day from the view of an outsider, e.g. "what's the probability the hotter day is Monday?" or "what is the probability the randomly selected day among the two is Monday?".

Imagine you participate in a cloning experiment. At night, while you are sleeping, a highly accurate clone of you with indistinguishable memory is created in an identical room. When waking up there is no way to tell whether you are the old or the new copy. It might be tempting to ask "what's the probability of 'me' being the clone?" I would guess your answer is 0.5 as well. But you can repeat the same experiment as many times as you want by falling asleep, letting another clone of you be created, and waking up again. Each time you wake up you can easily tell "this is me", but there is no reason to expect that across these repetitions the "me" would be the new clone about half the time. In fact there is no reason the relative frequency of me being the clone would converge to any value as the number of repetitions increases. However, if instead of this first-person concept of "me" we use an outsider's specification, then the question is easily answerable. E.g. what is the probability the randomly chosen version among the two is the clone? The answer is obviously 0.5. If we repeat the experiments and each time let an outsider randomly choose a version, then the relative frequency would obviously approach 0.5 as well.

On a side note, this also explains why double-halving is not un-Bayesian.
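Here is a minimal sketch of the outsider's version of the question above (Python; my own toy illustration, with made-up labels). The outsider's selection rule is well defined, so the relative frequency converges; the first-person "me" has no corresponding sampling rule to write down, which is exactly the point.

```python
import random

def outsider_picks_clone_frequency(n_trials: int, seed: int = 1) -> float:
    """Each trial: an original and a clone both exist; an outsider selects one uniformly.
    Returns the relative frequency with which the selected copy is the clone."""
    rng = random.Random(seed)
    hits = sum(rng.choice(["original", "clone"]) == "clone" for _ in range(n_trials))
    return hits / n_trials

print(outsider_picks_clone_frequency(100_000))  # ~0.5, as the outsider's question predicts
```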

Comment by dadadarren on Would solving logical counterfactuals solve anthropics? · 2019-04-06T17:34:36.340Z · LW · GW

From an agent's first-person perspective there is no reference class for himself, i.e. he is the only one in his reference class. A reference class containing multiple agents only exists if we employ an outsider's view.

When beauty wakes up in the experiment she can tell it is "today" and that she's experiencing "this awakening". That is not because she knows any objective differences between "today" and "the other day", or between "this awakening" and "the other awakening". It is because from her perspective "today" and "this awakening" are most immediate to her subjective experience, which makes them inherently unique and identifiable. She doesn't need to consider the other day(s) to specify today; "today" is in a class of its own to begin with. But if we reason as an objective outsider and use no perspective center in our logic, then neither of the two days is inherently unique. To specify one among the two would require a selection process. For example, a day can be specified as "the earlier day of the two", "the hotter day of the two", or the old-fashioned "the randomly selected day among the two". (An awakening can be specified among all awakenings in the same way.) It is this selection process from the outsider's view that defines the reference class.

Paradoxes happen when we mix reasoning from the first-person perspective and the outsider's perspective in the same logical framework. "Today" becomes uniquely identifiable while at the same time belonging to a reference class of multiple days. The same can be said about "this awakening". This difference leads to the debate between SIA and SSA.

The importance of perspectives also means that when using a betting argument we need to repeat the experiment from the perspective of the agent as well. It also means that from an agent's first-person perspective, if his objective is simply to maximize his own utility, no other agent's decision needs to be considered.

Comment by dadadarren on Sleeping Beauty Problem, Anthropic Principle and Perspective Reasoning · 2019-03-21T14:18:44.826Z · LW · GW

Beauty can make a rational decision if she changes the objective. Instead of the first-person, self-apparent "I", if she tries to maximize the utility of a person distinguishable by a third party, then a rational decision can be made. The problem is that in almost all anthropic schools of thought the first-person center is used without discrimination. E.g. in the sleeping beauty problem the new evidence is that I'm awake "today". The Doomsday Argument considers "my" birth rank. In SIA's rebuttal to the Doomsday Argument the evidence supporting more observers is that "I" exist. In such logic it doesn't matter that when you read the argument the "I" in your mind is a different physical person from the "I" in my mind when I read the same argument. Since the "I" or "now" is defined by the first-person center in their logic, it should be used the same way in the decision making as well. The fact that a rational decision cannot be made while using the self-apparent "I" only shows there is a problem with the objective, namely that using the self-apparent concept of "I" or "now" indiscriminately in anthropic reasoning is wrong.

Actually in this regard my idea is quite similar to your FNC. Of course there are obvious differences. But I think a discussion of that deserves another thread.

I get the feeling that our discussion here is coming to an end. While we didn't convince each other, as expected for any anthropics-related discussion, I still feel I have gained something from it. It forced me to think and express myself more clearly and to better structure my arguments. I also like to think I have a better understanding of potential counterarguments. For that I want to express my gratitude.

Comment by dadadarren on Sleeping Beauty Problem, Anthropic Principle and Perspective Reasoning · 2019-03-20T14:31:02.670Z · LW · GW

I think that while comparing the cloning and sleeping beauty problems you are not holding them to the same standard. You said we have good reason to think that "good-enough" memory erasure is possible. By good-enough I think you meant the end result might not be 100% identical to a previous mental state, but the difference is too small for human cognition to notice. So when talking about cloning the same leniency should be given, and we shouldn't insist on an exact quantum copy either. You also suggested that if our mental state is determined by our brain structure at the molecular level then it can be easily reverted, but then suggested cloning would be impossible if our mind is determined by the brain at the quantum level. If our mind is determined at the quantum level, simply reverting the molecular structure would not be enough to recreate a previous mental state either. I feel you are giving the sleeping beauty problem an easy pass here.

Why would the use of the first-person "me" render all use of probability invalid? Regarding the risk of a medical procedure, we are talking about an event with different possible outcomes that we cannot reliably predict. Unlike the eye-color example you presented earlier, this uncertainty can be well understood from the first-person perspective. For example, when talking about the probability of winning a lottery, you can interpret it from the third-person perspective and say that if everyone in the world enters then only one person would win. But it is also possible to interpret it from the first-person perspective and say that if I buy 7 billion tickets I would have only 1 winning ticket (or if I enter the same lottery 7 billion times I would only win once). Both work. Imagine that while repeating the cloning experiment, after each awakening you toss a fair coin before going back to sleep for the next repetition of cloning. As the number of repetitions increases, the relative frequency of Heads among the coin tosses experienced by "me" would approach 1/2. However, there is no reason the relative frequency of "me" being the original would converge to any value as the number of repetitions increases.

The reason there is no way to decide whether or not to eat the cookie is that the only objective is to maximize the pleasure of the self-explanatory "me", and the reward is linked to "me" being the original. It is not merely that my theory cannot handle the situation; I am arguing the situation is set up in a way no theory could handle. People claiming beauty can make a rational decision are either changing the objective (e.g. being altruistic towards other copies instead of just the simple self) or not using the first-person "me" (e.g. trying to maximize the pleasure of a person defined by some feature or selection process instead of this self-explanatory me).

Comment by dadadarren on Sleeping Beauty Problem, Anthropic Principle and Perspective Reasoning · 2019-03-18T14:48:47.960Z · LW · GW

You mentioned that if our consciousness is quantum-state dependent then creating a clone with indistinguishable memory would be impossible (because duplicating someone's memory would require complete information about his current quantum state, if I understand correctly). But at the same time you said the sleeping beauty experiment is still possible, since memory erasing only requires acting on the quantum state of the person without measuring it in its entirety. But wouldn't the action's end goal be to revert the current state to a previous (Sunday night's) one? That would ultimately require beauty's quantum state to be measured on Sunday night, unless there is some mechanism to exactly reverse the effect of time on something, which to me appears even more unrealistic. I do agree that the practical difficulty of the two experiments differs: cloning with memory does require more advanced technology to carry out. However, I think that does not change how we analyze the experiments or affect probability calculations. Furthermore, I do not think this difference in technical difficulty means we are too primitive to ponder the cloning example while the sleeping beauty problem is fair game.

The reason I bring up the cloning example is that it makes my argument a lot easier to express than the sleeping beauty problem does. You think the two problems are significantly different because one may be impossible in theory while the other is definitely feasible, so I felt obligated to show that the two problems are similar, especially concerning theoretical feasibility. If you don't feel theoretical feasibility is crucial to the discussion, I'm OK with dropping it from here on. One thing I want to point out is that every argument made using the cloning experiment can also be made using the sleeping beauty problem; it is just that the expression would be very long-winded and messy.

You mention that, no matter how we put it, one of the copies is the original while the other is the clone. Again, I agree with that. I am not arguing "I am the original" is a meaningless statement; I am arguing "the probability of me being the original" is invalid. And it is not because being the original or the clone makes no difference to the participant, but because in this question the first-person, self-explanatory concept of "me" should not be used. From the participant's first-person perspective, imagine repeating this experiment. You fall asleep, undergo the cloning, and wake up again. After this awakening you can guess again whether you are the original in this new experiment. This process can be repeated as many times as you want. Now we have a series of experiments of which you have first-person subjective experience. However, there is no reason the relative frequency of you being the original in this series of experiments would converge to any particular value.

Of course, one could argue the probability must be half because half of the resulting copies are originals and the other half are clones. However, this reasoning is actually thinking from the perspective of an outsider: it treats the resulting copies as entities from the same reference class, so it conflicts with the use of the first-person "me" in the question. This reasoning is applicable if the entity in question is singled out among the copies from a third-person perspective, e.g. "the probability of a randomly selected copy being the original", whereas the process described in the previous paragraph is strictly from the participant's first-person perspective and in line with the use of the first-person "me".

Now we can modify the experiment slightly so that the cloning only happens if a coin toss lands on Tails. This way it exactly mirrors the sleeping beauty problem. After waking up, we can give each copy a cookie that is delicious to the original but painful to the clone. Because, from the first-person perspective, repeating the experiment would not converge to a relative frequency, there is no way to come up with a strategy for deciding whether or not to eat the cookie that will benefit "me" in the long run. In other words, if beauty's only concern is the subjective pleasure and pain of the apparent first-person "me", then probability calculation cannot help her make a choice. Beauty has no rational way of deciding whether or not to eat the cookie.
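To be explicit about what can be computed, here is a minimal sketch (Python; the utility numbers +1 and -2.5 are made up for illustration, and "a randomly selected awake copy" is an outsider's specification, not the first-person "me"). Both objectives below are objectively stated, so each yields a definite expected value for an "every awake copy eats" policy, and they can even disagree with each other; nothing analogous can be written down for the self-apparent "me", which is the claim above.

```python
def cookie_expected_values(p_heads: float = 0.5,
                           u_original: float = 1.0,
                           u_clone: float = -2.5):
    """Expected value of an 'every awake copy eats the cookie' policy under two
    objectively specified objectives. Heads: only the original wakes.
    Tails: an original and a clone both wake."""
    # Objective 1: total pleasure summed over every copy the experiment produces.
    total_welfare = p_heads * u_original + (1 - p_heads) * (u_original + u_clone)

    # Objective 2: pleasure of one copy an outsider selects uniformly among the awake copies.
    p_selected_original = p_heads * 1.0 + (1 - p_heads) * 0.5
    selected_copy = p_selected_original * u_original + (1 - p_selected_original) * u_clone

    return total_welfare, selected_copy

print(cookie_expected_values())  # (-0.25, 0.125): one objective says don't eat, the other says eat
```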

Comment by dadadarren on Sleeping Beauty Problem, Anthropic Principle and Perspective Reasoning · 2019-03-14T04:06:59.753Z · LW · GW

Thank you for the speedy reply, professor. I was worried that with my slow response you might have lost interest in the discussion.

Forgive me for not discussing the issues in the order you presented them, but I feel the most important argument I want to challenge is that the sleeping beauty problem is physically possible while the cloning experiments are strictly fantastical.

In the cloning experiment the goal is not to make a physically exact copy of someone but to make the copy accurate enough that a human could not tell the difference, which is no different from the sleeping beauty problem. Considering the limitations of human cognitive ability and memory, this does not remotely require an exact quantum-state copy, unless you take the position that human memory is so sophisticated it is quantum-state dependent. But then reverting beauty's memory to an earlier state would require her brain to change back to a previous quantum state, and complete information about that quantum state cannot be obtained unless she is destroyed at the time, i.e. sleeping beauty would run against the no-cloning theorem and thus be non-feasible as well. Apart from memory, there is also the problem of physical differences. It is understood that during the first day beauty would inevitably undergo some physical changes, e.g. her hair may grow, her skin ages a tiny bit, etc. This is not considered a problem for the experiment because it is understood that humans cannot pick up on such minuscule details, so even with these physical changes beauty would still think it is her first awakening after waking up the second time. The same principle applies to the cloning example: as long as the copy is physically close enough that human cognition cannot notice any difference, the clone would believe he is the original. In summary, if the sleeping beauty problem is physically possible, the cloning example must be as well. In this problem, after waking up, "the probability of me being the original" makes no sense; even if you consider repeating the experiment many times, there is no answer to it. Again, this refers to the primitively understood first-person "me", not to someone identified from a third-person perspective, as in "the probability of a randomly selected copy being the original."

As for the question of my soul getting incarnated into somebody: this is not my idea. Various anthropic schools of thought lead to such an expression. For example, in the Doomsday Argument's prior probability calculation, SSA argues I am equally likely to be born as any human who ever exists. SIA adds on top of it and suggests I am more likely to be born into this world if more humans ever exist. Both closely resemble the idea of a soul being embodied into a pool of candidates. I mentioned this expression because it neatly describes what "the probability of me being a man" refers to in the anthropic context. And let's not lose the big picture here: I am arguing such probabilities do not make sense. So I completely agree with you that using soul embodiment as an experiment to assign probability is highly questionable. In fact, I am arguing such notions are outright wrong.

Regarding the case of eye color, quite clearly we are not discussing anything resembling the above idea. By surveying other people in the tribe I would know what percentage of the tribesmen have blue eyes. If we say that percentage is the probability of someone having blue eyes, then there is an underlying assumption that this someone is an ordinary, non-special member of the tribe. This goes against the first-person perspective, where I am inherently different from anybody else. In other words, this person is identified among the tribesmen from a third-person perspective. Therefore that percentage is not the probability that the first-person "I" have blue eyes, but rather the probability that a randomly selected tribesman has blue eyes. An optimal level of sun-exposure avoidance can be derived from that survey number; however, it cannot be said that the strategy is optimal for myself. All we know is that if every tribesman follows the strategy, it would be optimal for the tribe as a whole.

I think a betting argument carries an underlying assumption that someone trying to maximize her own earnings would follow a strategy determined by the correct probability. With this I agree. However, that holds only when the decision maker and the reward receiver are the same person. That is to say, if beauty is contemplating the probability of "today" being Monday, then the reward for a correct guess should be given to today's beauty. That's what I meant by saying Monday Beauty's correct decision should reward Monday Beauty alone. In the setup you presented that is not the case: the objective is to maximize the accumulated earnings, and for this objective the self-explanatory concept of "today" is never used. So the calculation does not reflect the probability of "today being Monday", but rather the probability that "the day beauty remembers the previous experiment is Monday". Essentially it has the same problem as the eye-color example: the first-person-centered concept of "today" is switched to a third-person identity. Going back to the cloning experiment, you are arguing that after waking up "the probability of a randomly selected copy being the original" is valid and meaningful. I agree with this. I am arguing that the first-person-centered "probability of me being the original" does not exist.

For the cookie experiment, yes, the painful reaction and the delicious bliss are of course meaningful. But that only means "today is Monday" and "today is Tuesday" are both meaningful to her, which I never argued against. However, if a probability of "today is Monday" exists, then there should be an optimal strategy for "beauty in the moment" to maximize her pleasure. Notice that strategies exist to maximize the pleasure throughout the two-day experiment. Strategies also exist to optimize the pleasure of the beauty who exists at the end of the experiment. But there is no strategy to maximize the pleasure of this self-apparent "beauty in the moment". We can even repeat the experiment for this "today's beauty": let her sleep now and enter another round of the sleeping beauty experiment. Instead of the two potential awakenings being 24 hours apart, this time they are 12 hours apart, so the new experiment fits into one day and beauty does not experience the memory wipe from the original experiment. (Here I'm assuming the actual awakening and interviewing take no time, for ease of expression.) Again, in the first awakening she would be given the cookies she is allergic to and in the second awakening the good cookies. When she wakes up she would face the same choice again. We can repeat the experiment further, with later iterations' awakenings closer and closer together. But there is no strategy to maximize a "beauty in the moment's" overall pleasure. (This shows why I want to use the cloning example: repeating the sleeping beauty experiment from beauty's first-person perspective is very messy, and questions such as whether pain that is completely forgotten still matters come into play.)