Posts

The First Person And The Physical Person 2022-01-10T19:47:51.325Z
You Don't Need Anthropics To Do Science 2021-11-07T15:07:03.266Z
Better and Worse Ways of Stating SIA 2021-10-28T16:04:22.333Z
Don't Use the "God's-Eye View" in Anthropic Problems. 2021-10-26T13:47:53.386Z
Consciousness, Free Will and Scientific Objectivity in Perspective-Based Reasoning 2021-10-14T18:00:10.681Z
The Validity of Self-Locating Probabilities (Pt. 2) 2021-08-25T01:53:17.616Z
The Validity of Self-Locating Probabilities 2021-08-21T02:53:13.579Z
Absent-Minded Driver and Self-Locating Probabilities 2021-08-14T00:09:20.347Z
Should VS Would and Newcomb's Paradox 2021-07-03T23:45:29.655Z
Anthropics and Embedded Agency 2021-06-26T01:45:06.880Z
Anthropic Paradoxes and Self Reference 2021-06-06T02:52:12.132Z
"Who I am" is an axiom. 2021-04-25T21:59:10.566Z
A Simplified Version of Perspective Solution to the Sleeping Beauty Problem 2020-12-31T18:27:14.349Z
Sleeping Beauty, Cloning Beauty, and Frequentist Arguments In Anthropic Paradoxes 2020-11-09T23:17:21.624Z
Why I Prefer the Copenhagen Interpretation(s) 2020-10-31T21:06:02.500Z
Leslie's Firing Squad Can't Save The Fine-Tuning Argument 2020-09-09T15:21:19.084Z
Hello ordinary folks, I'm the Chosen One 2020-09-04T19:59:10.799Z
Anthropic Reasoning and Perspective-Based Arguments 2020-09-01T12:36:41.444Z
Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? 2019-08-01T20:10:46.445Z
Sleeping Beauty Problem, Anthropic Principle and Perspective Reasoning 2019-03-09T15:21:02.258Z
Perspective Reasoning and the Sleeping Beauty Problem 2018-11-22T11:55:22.114Z
The Sleeping Beauty Problem and The Doomsday Argument Can Be Explained by Perspective Inconsistency 2018-08-05T13:45:27.185Z

Comments

Comment by dadadarren on The First Person And The Physical Person · 2022-01-12T14:06:24.141Z · LW · GW

I think it is a good strategy. It will highlight that the paradoxes in anthropics are due to the involvement of indexicals such as "I" or "now" in the argument.

I didn't use decision processes to discuss anthropics because doing so involves defining the subject of the utility. How one specifies it involves assumptions that already determine the answer. For example, is a program running as two independent instances considered the same actor? Should the results be pooled together upon evaluation? Etc.

Comment by dadadarren on You Don't Need Anthropics To Do Science · 2021-11-09T16:19:06.646Z · LW · GW

We favor quantum mechanics because it can explain/predict some experimental observations while classical mechanics cannot. This reasoning is exactly what I am arguing for.

Anthropics, however, argues that without regarding "I" as a random sample there is no way to use our observations to evaluate theories, because no matter how unlikely, any possible observation would have happened somewhere in the entire universe.

Frankly, I don't see any parallel here.

Comment by dadadarren on You Don't Need Anthropics To Do Science · 2021-11-08T14:43:08.934Z · LW · GW

No. Anthropics is primarily about how to treat indexical information (such as "I", "here" and "now") in reasoning. Most camps regard Bayesianism as an independent probability interpretation that requires no justification from frequentism. I actually think most anthropic camps do not pay enough attention to the frequentist interpretation.

Comment by dadadarren on SIA > SSA, part 1: Learning from the fact that you exist · 2021-10-25T17:55:32.611Z · LW · GW

> For those familiar with the Sleeping Beauty problem, though, you can think of SIA as “thirding,” and SSA as “halfing” — at least to a first approximation.

I think it should be noted that while the majority of thirders endorse SIA, most halfers do not support SSA. So a more accurate description would be that SSA is a (minor) camp within halving.

Comment by dadadarren on Consciousness, Free Will and Scientific Objectivity in Perspective-Based Reasoning · 2021-10-18T14:34:24.697Z · LW · GW

I think intersubjectivity is the right direction; the detail is in how to aggregate multiple subjective experiences. I suppose the only way is to find the reports that are common to all the subjects being considered. So objectivity becomes all things that can be inter-perspectively tested. And yes, for almost all purposes inter-perspective objectivity and view-from-nowhere objectivity function the same. That's why we typically don't pay attention to the distinction. And in the cases where it does matter, we end up in debates and paradoxes.

Comment by dadadarren on Consciousness, Free Will and Scientific Objectivity in Perspective-Based Reasoning · 2021-10-18T14:14:43.724Z · LW · GW

>The presupposition of free will in this question is not the act of taking the other person's perspective, it is the framing of the question in terms of what should he do (assuming he is free to do it), not what does he do.

When you make decisions such as which movie to watch, which shirt to buy, etc., do you ever do so by analyzing your brain's structure and function and thus deducing what result it would produce? I will take a wild guess and say that's not how you think. You decide by comparing the alternatives based on preference. This reasoning is clearly different from reductively studying the brain; I wouldn't call it just a framing difference.

As for debugging, I am not telling you how to do it. Debugging is essentially figuring out why a Turing machine is not functioning as intended. One can follow its actions step by step to find the error, but that would still be reductively analyzing it rather than imagining oneself being the program. The latter would involve imagining how it feels to be the program, which I don't even think is possible. So I'm certainly not saying to assume the program's "mind" is the same as your own, as the Typical Mind Fallacy would.

Comment by dadadarren on Consciousness, Free Will and Scientific Objectivity in Perspective-Based Reasoning · 2021-10-15T18:08:28.454Z · LW · GW

Let me use a crude example. Say a person is facing the choice of taking 10 dollars versus getting a healthy meal. What should he do?

We can analyze it by imagining taking his perspective, considering the outcome of both actions, and choosing the one I like better based on some criteria. This process assumes the choice is unrestricted from the onset. (More on this later.)

Alternatively, we can just analyze that person physically: monitor what electrical signals he receives from his eyes and how the neural networks in his brain function, and thereby reductively deduce his action. In this method, there is no alternative action at all. The whole analysis is derivative. And we did not take his perspective. As I said, consciousness and free will are always attributed to the self. In this analysis, free will is not presupposed for the experimental subject (it is not the self).

When we analyze a computer bug, it is actually the second type of method we are using. It is no different from trying to figure out why an intricate machine doesn't work. That is not taking the program's perspective. If I were taking the computer/program's perspective, then I would not be reductively studying it but rather assuming I am the program, with subjective experience attached to it. Similar to "What Is It Like to Be a Bat?", it would be knowing what it is like to be that program. In doing so, the analysis would presuppose that "I" (the program) have free will as well.

That may be a little hard to imagine due to the irreducible nature of subjective experience. I find it easier to think from the opposite direction. Say we are actually computer simulations (as in various simulation arguments); then we already know what it is like to be a program. It is also easy to see why, from the simulators' viewpoint, we have no free will.

As to why the first-person self has to presuppose free will: because I have to first assume my thought is based on logic and reason, rather than some predetermined mumbo-jumbo. Otherwise, there are no reliable beliefs or rational reasoning at all, which would be self-defeating. That is especially important if perspective-based reasoning is fundamental.

Comment by dadadarren on The Simulation Hypothesis Undercuts the SIA/Great Filter Doomsday Argument · 2021-10-07T21:52:14.605Z · LW · GW

I see nothing grumpy here.

I think supporters of the doomsday argument are saying you should consider all evidence, but the doomsday argument still stands. So we should use all the information available to make a prediction of the future and then, on top of all that, apply the doomsday argument so that the future looks bleaker. And that should be the case unless we find a logical error in the argument.

I think the error of the doomsday argument is to try to find an explanation for why I am this person, living in this time. That should be regarded as something primitively given, a reasoning starting point. Instead, the argument treats it as a sampling outcome. That is why I am against both SSA and SIA.

Comment by dadadarren on SIA > SSA, part 1: Learning from the fact that you exist · 2021-10-07T18:43:07.073Z · LW · GW

For the last point, is it fair to say that you don't have to consider SIA in terms of any made-up reference class?

Even if you only consider "people in your epistemic situation", an epistemic situation is a subjective state, and the only subjective experience you have is your own. So what qualifies as "people in your epistemic situation" has to be a judgment call, or in a sense, made up. Do brains in jars that are being fed similar neural signals count, as Adam Elga discussed in "Defeating Dr. Evil with Self-Locating Belief"? What about computer programs, as in Bostrom's Simulation Argument? Depending on your judgment of what "people in your epistemic situation" includes, the answers to those problems would be drastically different: you are either certain you are a physical person, or quite confident that you are just a brain/program.

The only problems where such judgments won't affect the conclusions are the cases where the effect of the reference class cancels out, as in the Doomsday Argument.

Don't get me wrong, I am not supporting SSA in any way. The reference class problem is definitely worse for SSA. But SIA is not free from it; after all, the set of people-you-could-actually-be is still a made-up concept that lacks a solid definition.

Comment by dadadarren on SIA > SSA, part 2: Telekinesis, reference classes, and other scandals · 2021-10-07T18:10:40.938Z · LW · GW

A very good summary of the problems faced by SSA. However, I think saying SSA metaphysically means God is dead set on creating Alice/Bob may be a bit unfair. The premise is simply "I have to conclude that I exist / I can only find that I exist", which in itself is hardly wrong. It's SSA's way of interpreting this premise that leads to the problems mentioned. (Full disclosure: I am not supporting SSA at all, but I don't like SIA either.)

In daily language, "I" or "self" can mean two different things. In a strict indexical sense, they just mean the first person, whoever holds the perspective. But very often they are understood as a particular physical person, identified by some features such as the name Bob. The two meanings are constantly used together; e.g., if we are talking, I could say "I'm hungry", and in your mind it would probably be understood as "dadadarren is hungry". Even though the two meanings are often used together, the distinction is important.

Halfers would say (I will use "halfers" instead of "SSA-ers" as it is more general) that it is not Bob or Alice that has to exist; it is the first person doing the thinking that must exist. The problem is how to explain this first person. SSA's answer: consider it a random sample of all existing observers. I think that's wrong. The first person is something inherently understood from any perspective. "I'm this person, living in this time" is a reasoning starting point that neither needs nor has an explanation. The paradoxes are caused by our habit of reasoning from a god's-eye view, even though anthropic problems are based on a specific person or moment, i.e. specific perspectives.

Comment by dadadarren on The Validity of Self-Locating Probabilities (Pt. 2) · 2021-08-27T16:51:54.338Z · LW · GW

OK. Then consider doing this: After the first experiment, take part in the same experiment again, then again and again. You can keep the bars you earned in your pocket.

Say you participated for 100 iterations; by not entering the bet you would have 200 bars. By entering the bet, do you expect to have approximately 100 × 1/3 × 5 ≈ 167 bars?
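To make this concrete, here is a quick simulation of the repeated runs, a minimal sketch that assumes the bet from the post: each iteration you are given 2 bars, entering means giving them up for a payout of 5 bars if that iteration's coin landed Heads, and the cloning on Tails does not touch the bars already in your own pocket.

```python
import random

def simulate(n_iter=100, enter_bet=True, seed=1):
    """Follow one subject's own pocket through repeated runs of the experiment.

    Assumed setup: each iteration a fair coin is tossed (Tails means a clone is
    created, which does not affect this subject's pocket). Entering the bet
    means giving up the 2 bars for a payout of 5 bars if the coin landed Heads.
    """
    rng = random.Random(seed)
    bars = 0
    for _ in range(n_iter):
        heads = rng.random() < 0.5        # a fresh fair coin every iteration
        if enter_bet:
            bars += 5 if heads else 0     # the 2 bars are staked on the bet
        else:
            bars += 2                     # keep the 2 bars, no bet
    return bars

print(simulate(enter_bet=False))  # always 200
print(simulate(enter_bet=True))   # roughly 250 on average, not roughly 167
```

The realized frequency of Heads in your own experience is just that of a fair coin, so entering every bet leaves you with roughly 250 bars rather than the roughly 167 the thirder calculation suggests.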

Comment by dadadarren on The Validity of Self-Locating Probabilities (Pt. 2) · 2021-08-25T19:34:31.726Z · LW · GW

The experiment can take the following steps: 1. Sleep. 2. Scan. 3. Wake up the Original. 4. The Original tosses the coin. 5. If Tails, create the Clone. 6. Wake up the Clone so that he has an experience indiscernible from the Original's in step 3. This whole process is disclosed to you.

Now, after waking up, the coin may or may not have been tossed yet. What is the probability of Heads? And what is the probability that I am the Original (i.e. that the coin has yet to be tossed)?

Comment by dadadarren on The Validity of Self-Locating Probabilities (Pt. 2) · 2021-08-25T19:28:37.463Z · LW · GW

Say you have several gold bars in your pocket when going into the experiment, and the mad scientist knows this. To make sure you gain no new information when looking into your pocket, he will place the same number of gold bars into the Clone's pocket before waking him up. And you know this. So if you had 3 bars before, after waking up you will for sure find 3 gold bars in your pocket.

The rest of the problem is the same. You will be given 2 bars and offered the bet. Does this change anything? Would you still reject the bet?

Comment by dadadarren on The Validity of Self-Locating Probabilities (Pt. 2) · 2021-08-25T18:12:38.504Z · LW · GW

> However, after waking up, I have to consider the possibility (with its associated probability) that I am the clone, so that changes my answer.

The Original and the Clone are treated exactly the same regarding gold bars and bets. And both are offered to you after waking up, regardless of whether you are the Original or the Clone. You are just trying to maximize your own gain. Do you still consider not taking the bet the better decision?

Comment by dadadarren on The Validity of Self-Locating Probabilities (Pt. 2) · 2021-08-25T18:05:42.884Z · LW · GW

I don't see why owning other property would change the objective of having more gold. You are using gold to bet for gold; where does other property come into play? Nonetheless, if it bothers you, let's just assume the subject has no wealth other than what's on him. Does that mean you still would not enter the bet?

The mad scientist does not need to lie. The experiment is changed to: 1. Sleep. 2. Scan. 3. Wake up the Original. 4. The Original tosses the coin. 5. If Tails, create the Clone. 6. Wake up the Clone so that he has an experience indiscernible from the Original's in step 3. This whole process is disclosed to you. Now, after waking up, the coin may or may not have been tossed yet. What is the probability of Heads? What is the probability that you are the Original? If you say they are both 1/2, then what is the probability that you are the Clone?

Perhaps what confuses me the most is that you seem to be arguing for both thirders and halfers at the same time. If you think halfers are correct to say the probability of Heads is 1/2, wouldn't you have taken the bet? If you think thirders are correct, wouldn't you say the probability should be updated according to the standard Bayes rule? Why are you arguing against both of these? What's your position?

Comment by dadadarren on The Validity of Self-Locating Probabilities · 2021-08-25T17:51:49.760Z · LW · GW

You just stated the Self-Sampling Assumption's calculation.

Meanwhile, about the Self-Indication Assumption's method, you said: "The most likely way I could find to arrive at a "close to 0" number was to make an error that I have seen a few times in the context of students calculating probabilities, but not previously in self-locating probabilities."

So are you endorsing SSA over SIA? Or are you just listing the different camps in anthropic paradoxes?

Comment by dadadarren on The Validity of Self-Locating Probabilities (Pt. 2) · 2021-08-25T17:34:16.590Z · LW · GW

I don't think rejecting self-locating probability means totally rejecting probability as a measure of uncertainty, because self-locating probability only applies to very specific anthropic problems. E.g.:

  1. An incubator creates two observers, the first in a blue room and the second in a red room. Given that I am one of the created observers but don't know whether I am the first or the second, what is the probability that I will see blue when I turn on the lights?
  2. Some people put me and another person into two rooms, one blue and one red, but the assignment process is random or unknown to me. Before turning on the light, what is the probability that I am in the blue room?

My position is that the two problems are fundamentally different; only problem 1 is what has been referred to as self-locating probability in anthropic paradoxes. The entire experiment is known from a god's-eye view. The uncertainty is which of the two observers is ME. ME (as well as Now or Here) is not some physical or observable identification but a primitive concept arising from reasoning from the first-person perspective. So there is no reasonable way to attach a probability to it.

Problem 2 is different. I know which person is ME all along. The uncertainty is not about which one is me but about what happened to me, i.e. about the room assignment process. This whole problem can be described from a god's-eye view and remains comprehensible, e.g. "dadadarren and another person have been put into two rooms respectively; what is the probability that dadadarren is in the blue room?" So even though it asks which room I am in, it is different from the self-locating probabilities being discussed in anthropic problems. Probabilities like this are obviously valid.

You probably think this distinction is not meaningful, so that saying self-locating probabilities are invalid would lead to all probability as a measure of uncertainty being invalid. But that is not the argument I am making. Granted, on some metaphysical views there is no difference between the two types. E.g. the Many-Worlds Interpretation considers self-locating probability the source of probability. So my argument is not compatible with MWI, i.e. it is a counterargument against MWI.

Also, I am not assuming that the copies cannot go outside or ask others whether they are the Original. And they can certainly get into situations where the outcome depends on whether they are the Original or the Clone. I am arguing that in such situations, when a decision is involved and my objective is MY own benefit (as in the benefit of the "I" in self-locating probability), there is no singular rational decision. Rational decisions only exist if the objective is the collective benefit (total or average) of the copies, or the benefit of a random sample from these copies. Yet it is hard to argue that "maximizing MY own benefit" is disconnected from reality, something a real person would not do.

Comment by dadadarren on The Validity of Self-Locating Probabilities · 2021-08-25T01:57:28.597Z · LW · GW

The problem is that the POI is not a solid principle to rely on, and it often leads to paradoxes. In anthropic problems in particular, it is unclear what exactly should be regarded as indifferent. See this post for an example.

Comment by dadadarren on The Validity of Self-Locating Probabilities · 2021-08-25T01:55:19.192Z · LW · GW

Alright, please see this post. Which camp are you in? And how do you answer the related problem?

Comment by dadadarren on The Validity of Self-Locating Probabilities · 2021-08-25T01:48:38.118Z · LW · GW

Because the thirder camp is currently the dominant opinion on the Sleeping Beauty Problem, and because the Self-Indication Assumption has far more supporters than the Self-Sampling Assumption. The Self-Indication Assumption treats "I" as a randomly selected observer from all potentially existing observers, which in this case would give a probability of being the Original close to 0.

I am not saying you have to agree with it. But do you have a method in mind for arriving at a different probability? If so, what is the method? Or do you think there is no sensible probability value for this case?

Comment by dadadarren on The Validity of Self-Locating Probabilities · 2021-08-25T00:37:46.341Z · LW · GW

>Personally no, I wouldn't say close to 0 in that situation. While the expected value of number of clones is 10000 and hence the expected value of number of observers is 10001, I can't think of a measure for which dividing by this quantity results in anything sensible. 

Wait, are you saying there is no sensible way to assign a value to the self-locating probability in this case? Or are you disagreeing with this particular way of assigning a self-locating probability and endorsing another method?

Comment by dadadarren on The Validity of Self-Locating Probabilities · 2021-08-25T00:33:47.659Z · LW · GW

Since you said 1/2 is a valid answer for its own model, wouldn't you want to know whether that model is self-consistent, rather than just picking whichever answer seems least problematic?

Comment by dadadarren on The Validity of Self-Locating Probabilities · 2021-08-24T01:53:58.111Z · LW · GW

Ok.

You say you are using the first-person perspective to answer the probability "I am the Original", and focusing on yourself in the analysis. However, you keep bringing up that there are two copies, that "one is the original, the other is the clone", and so the probability "I am the Original" is 50%.

Do you realize that you are equating "I" with "a random one of the two" in this analysis? There is an underlying assumption of "I am a random sample" or "I am a typical observer" here.

For repeating the experiment, I am talking about being the Original in each iteration. You may come out as the Clone of the first experiment; you can still participate in a second experiment, and after waking up from it you may be the Original (or the Clone) of the second experiment. No matter which one you are, you can take part in a third experiment, and come out of it as the Original (or the Clone) of the third experiment. And so on. Keep doing this, and keep counting how many times you came out as the Original vs the Clone. What is the rationale that they will become roughly equal, i.e. that as you repeat more experiments you will experience being the Original roughly half of the time? Again, the justification would be that "I" am a random copy.

I am not saying the existence of other copies must be ignored. I am saying that if you reason from the first-person perspective and imagine yourself waking up from the experiments, then it is primitively clear that all the other copies are not the "I" or "myself" in question in the self-locating probability. Because you are very used to taking the god's-eye view and considering all copies together (treating "I" as a random sample of all copies), I suggested not paying attention to anyone else but instead imagining yourself as a participant, and focusing on yourself. But evidently, this doesn't work.

It is a tricky matter to communicate, for sure. If this still seems convoluted, maybe I should use examples with concrete numbers and bets to highlight the paradox of self-locating probability. Would you be interested in that?

Comment by dadadarren on The Validity of Self-Locating Probabilities · 2021-08-24T00:49:24.647Z · LW · GW

I was asking whether your reasoning for equal probabilities of Original vs Clone can be summarized as the principle of indifference, not suggesting you do not care which copy you are. Would I be wrong to assume you endorse the POI in this problem?

Comment by dadadarren on The Validity of Self-Locating Probabilities · 2021-08-23T18:52:11.392Z · LW · GW

> I think what's going on is that dadadarren is saying to  repeat the experiment. We begin with one person, the Original. Then we  split, then split each, then split each again, etc. Now we have 2^n  people, with histories [Original, Original, ..., Original] and  [Original, Original, ..., Clone] and etc. There will be n*(n-1)/2 people  who have participated in the experiment n times and been the Original  n/2 times, they have subjectively seen that they came out the Original  50% of the time. But there have also been other people with different  subjective impressions, such as the one who was the Original every time.  That one's subjective impression is "Original 100%!". 


Ok, slow down here. What you are describing is repeating the experiment, but not from the subject's first-person perspective. Let's call this a description from a god's-eye view. There is no "I" in the problem if you describe the experiment this way. Then how do you ask for "the probability that 'I' am the Original"?


What I described in the post is to put yourself in the subject's shoes: imagine you are participating in the experiment from the first-person perspective. Hence, after waking up, you know exactly which one is "I", even though there is another copy that is physically indiscernible and you don't know whether you are the Original or the Clone. This self-identification is primitive.


If this seems convoluted, imagine a case of identical twins. Other people have to differentiate them by some observable features, but for a twin himself this is not needed. He can inherently tell apart the "I" from the other twin, without needing to know what the physical difference is.


The probability is about "I" being the Original. So in a frequentist analysis, keep the first-person perspective while repeating the experiment. Imagine yourself taking part in the same experiment again and again. Focus on "I" throughout these iterations. In your experience, the relative proportion of "I am the Original" has no reason to converge to any value as the number of iterations increases.

What you are doing is using the god's-eye model instead. Because there is no "I" in this model, you are substituting "I" with "a random/typical copy". That's why I talk about the decision of one person only, the primitively identified "I", while you are talking about all of the copies as a group. Hence you say "Then they'll be much more wrong, collectively".

It seems very natural to regard "I" as a randomly selected observer. Doing so will justify self-locating probabilities. Nonetheless, we should recognize that is an additional assumption. 

Comment by dadadarren on The Validity of Self-Locating Probabilities · 2021-08-23T17:45:32.329Z · LW · GW

Would you say your reasoning is an application of the principle of indifference between "I am the Original" and "I am the Clone"?

Comment by dadadarren on The Validity of Self-Locating Probabilities · 2021-08-22T21:45:56.875Z · LW · GW

If you think 1/2 is a valid probability in its own model, I would assume you are also interested in the probability update rule of this model, i.e. how Beauty can justify the probability of Heads remaining 1/2 after learning it is Monday.

Comment by dadadarren on The Validity of Self-Locating Probabilities · 2021-08-22T21:29:24.946Z · LW · GW

Still the same. All I can say is that I am either the Original or the Clone. As for the credence of each, it is still "I don't know".

And this number-crunching goes both ways. Say the mad scientist only succeeds in producing valid Clones in 1% of the experiments, but when he succeeds, he produces 1 million of them. Then what is the probability of me being the Original? I assume people would say close to 0.
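For concreteness, the SIA-style calculation I take to be behind that "close to 0" answer weights each possible world by its prior probability times the number of observers in it, with only the Original counted in the numerator:

P(I am the Original) = (0.99 × 1 + 0.01 × 1) / (0.99 × 1 + 0.01 × 1,000,001) = 1 / 10,001 ≈ 0.0001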

This logic could lead to some strange actions, such as the brain race described by Adam Elga here. You could force someone to act to your liking by making hundreds of clones of him with the same memory: if he doesn't comply, you will torture all these clones. Then his best strategy is to play ball, because he is most likely a clone. However, he could counter that by making thousands of clones of himself that will be tortured if they act to your liking. But you could make hundreds of thousands of clones, and he could make tens of millions, and so on.

Comment by dadadarren on The Validity of Self-Locating Probabilities · 2021-08-22T21:05:37.187Z · LW · GW

Our behavior should be different in many cases. However, based on my past experience, people who accept self-locating probabilities will often find various explanations such that our decisions end up the same anyway.

For example, in "Repeating the Experiment" the relative frequency of Me being the Original won't converge on any particular value. If we bet on that, I will say there is no strategy to maximize My personal gain. (There is a strategy to max the combined gain of all copies if everyone abides by it. As reflected by the probability of a randomly sampled copy being Original is 1/2)

On the other hand, you would say that if I repeat the experiment long enough, the relative frequency of me being the Original will converge to 50%, and that the best strategy to maximize my personal gain is to bet accordingly.

The problem with this example is that personal gain can only be verified from the first-person perspective of the subject. A verifiable example would be this: change the original experiment slightly, so that the mad scientist only performs the cloning if a fair coin toss lands on Tails. Then, after waking up, how should you guess the probability of Heads? What is the probability of Heads if you learn you are the Original? (This is essentially the Sleeping Beauty problem.)

If you endorse self-locating probability, then there are two options. First, the thirder: after waking up, the probability that I am the Original is 2/3 and the probability of Heads is 1/3. After learning I am the Original, the probability of Heads updates to 1/2.

The other option is to say that after waking the probability of Heads is 1/2 and the probability that I am the Original is 3/4. After learning I am the Original, the probability of Heads needs to be updated. (How to justify this update is very problematic, but let's skip it for now. The main point is that the probability of Heads would have to move away from 1/2: a straightforward Bayesian update gives P(Heads | Original) = (1/2) / (3/4) = 2/3. And this is a very weak camp compared to the thirders.)

Because I reject self-locating probability, I would say the probability of Heads is 1/2, and it is still 1/2 after learning I am the Original. There is no update because there was no self-locating probability in the first place.

This should result in different betting strategies. Say you have just experienced 100 iterations of this toss-and-cloning experiment and haven't learned whether you were the Original or the Clone in any of those iterations. Now, for each of those 100 iterations, you are offered a bet costing 2 dollars that pays 5 dollars if that iteration's coin landed on Heads. If you are a thirder, then you should not enter these bets, since you believe the probability of Heads is only 1/3, whereas I would enter all of them. But again, based on past experience, thirders would come up with some explanation as to why they would also enter these bets, so our decisions would still end up the same.
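To spell out the arithmetic behind the two strategies, reading the bet as: give up 2 dollars, receive 5 dollars if that iteration's coin landed Heads and nothing otherwise. The thirder's expectation per bet is (1/3) × 5 - 2 = -1/3 dollars, so they decline. Mine, which is also the long-run frequency of a fair coin, is (1/2) × 5 - 2 = +1/2 dollars per bet, so over the 100 iterations entering every bet comes out roughly 50 dollars ahead of declining.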

Comment by dadadarren on The Validity of Self-Locating Probabilities · 2021-08-22T02:25:29.695Z · LW · GW

>if this happened lots of times, and you answered randomly, you would be right roughly 50% of the time

How can this be verified? As I have outlined in Repeating The Experiment, if you keep participating in the experiment again and again, there is no reason to think the relative frequency of you being the Original in each experiment will converge to any particular value. Of course, we can select one copy at random each time, and that frequency will converge to 1/2. But that would reflect that the probability of the random sample being the Original is 1/2.

It can be asserted that the two probabilities are the same thing. But at least we should recognize that as an additional assumption. 

Comment by dadadarren on The Validity of Self-Locating Probabilities · 2021-08-22T02:06:48.854Z · LW · GW

First of all, strong upvote. The points you raised have made me think hard as well.

I don't think the probability about which room I am in is the same as a self-locating probability. Coincidentally, my argument used color coding as well (the probability that I am red or blue). The difference is that which color I get labeled is determined by a particular process; the uncertainty is due to the randomness of that process or my lack of knowledge about it. Whereas for self-locating probability, there is nothing random/unknown about the experiment. The uncertainty, i.e. which physical person I am, is not determined by anything. If I ask myself why I am this particular human being, why I am not Bill Gates, then the only answer seems to be "Because the available subjective experience is connected to this person. Because I am experiencing the world from this person's perspective, not Bill Gates'." It is not analyzable in terms of logic; it can only be regarded as a reasoning starting point, something primitive.

Whether or not the questioner knows which person is being referred to by "I" is another interesting matter. Say the universe is infinite, and/or there are countless universes, so there could be many instances of human beings that are physically indistinguishable from me. But does that mean I don't know which one I am? It can be said that I do not know, because I cannot provide any discernible details to distinguish myself from all of them. But on the other hand, it can be said that I inherently know which one is me. I can point to myself and say "I am this person" and call it a day. The physical similarities and differences are not even a concern. This identification is nothing physical; it is inherently understandable to me because of my perspective. It is because of this primitive nature that people consider "the probability that I am the Original" a valid question, instead of first asking who this "I" is before answering.

My way of rejecting self-locating probability is incompatible with the Many-Worlds interpretation. Sean Carroll calls this idea the "simple-minded" objection to the source of probability in Many-Worlds, yet he admits it is a valid objection. I think treating perspectives as primitive would naturally lead to the Copenhagen interpretation. It should also be noted that in Many-Worlds, "I" or "this branch" are still used as primitive notions when self-locating probabilities are derived.

Finally, self-locating probabilities are not useful for decision-making, so even as tools they are not justifiable. Goals such as maximizing the total or average benefit of a group can be handled by using probabilities of random samples from said group, e.g. the probability of a randomly selected copy being the Original. If the goal is strictly about the primitively identified "I", as in self-locating probability, then there exists no valid strategy, as shown by the frequentist analysis in the post.

Comment by dadadarren on The Validity of Self-Locating Probabilities · 2021-08-22T00:33:51.520Z · LW · GW

The Original/Clone refers to the two physical persons in the experiment. One is the physical copy that existed before; the other is created by the mad scientist during the experiment. You can change "the Original" to "the causal descendant of the Original, the one that would still exist in a world without the mad scientist". But I don't think that's significant, because the question does not depend on it.

To illustrate this, we can change the experiment. Instead of a direct cloning process, the mad scientist will now split you through the middle into two halves: the left part (L) and the right part (R). Then he will complete the two by cloning the missing half onto each. So we still end up with two indiscernible copies, L and R. Now, after waking up the second day, you can ask yourself "what is the probability that I am L?". It is still a self-locating probability. I thought about using this example in the post since it is more symmetrical, but I decided against it because it seems too exotic.

Comment by dadadarren on The Validity of Self-Locating Probabilities · 2021-08-22T00:09:50.433Z · LW · GW

Of course, whether I am the Original or the Clone is a matter of fact; there is a definitive answer to it. I also see no problem saying "the probability that I am the Original" is essentially the same as "the probability, given that he answers truthfully, that the scientist will say I am the Original when I ask".

But does being a matter of fact imply there is a probability for it? And if so, what is the justification for 50% being the correct answer?

Comment by dadadarren on Absent-Minded Driver and Self-Locating Probabilities · 2021-08-20T18:33:36.398Z · LW · GW

OK, I think that is clearer now. I assume you think the strategy to coordinate on should be determined by maximizing the planning utility function, not by maximizing the action utility function nor by finding the stable points of the action utility function. I agree with all of this.

The difference is that you think the self-locating probabilities are valid, and that the action utility function using them is valid but only applicable in superficially similar problems, such as ones where multiple drivers are randomly assigned to intersections.

Whereas I think self-locating probabilities are not valid, and therefore the action utility functions are fallacious. In problems where multiple drivers are randomly assigned to intersections, the probability of someone being assigned to a given intersection is not a self-locating probability.

Comment by dadadarren on Absent-Minded Driver and Self-Locating Probabilities · 2021-08-18T18:55:11.092Z · LW · GW

> Before starting the drive, the driver determines that always turning at the first intersection will be optimal. I didn't think we disagreed on that.

But the driver does not have to do any calculation before starting the drive. He can do that, yes. He can also simply choose to think about the decision only when he arrives at an intersection. It is possible for him to derive the "action optimals" chronologically before deriving the "planning optimal". As I said earlier, they are two independent processes.

>Yes, it is. You can verify this by finding the explicit expression for action utility as a function of p....

No, it was not found by maximizing the action utility function. In Aumann's process, the action utility function is not expressed with a single variable p, but with multiple variables representing causally disconnected decisions (observation 1). Because the decisions ought to be the same (observation 2), the action optimals ought to be symmetric Nash equilibria, or "stable points". You can see an example in Eliezer Yudkowsky's post. For this particular problem, there are three stable points of the action utility function: p=0, p=7/30 and p=1/2. Among these three, p=1/2 gives the highest action payoff and p=7/30 the lowest.

I will take your word for it that p=1/2 also maximizes the action utility. But that is just a coincidence for this particular problem, not how action optimals are found per Aumann.

For the sake of clarity, let's take a step back and examine our positions. Everyone agrees p=1/2 is not the right choice. Aumann arrives at this through 2 steps:

1. Derive all action optimals by finding the stable points of the action utility function (p=1/2 is one of them, as is p=0).

2. p=1/2 is rejected because it is not possible for the driver at different intersections to coordinate on it due to absentmindedness.

I disagree with both points 1 and 2, the reason being that the action utility function is fallacious. Are you rejecting both, rejecting point 2 only, or agreeing with him?

Comment by dadadarren on Absent-Minded Driver and Self-Locating Probabilities · 2021-08-18T18:14:58.934Z · LW · GW

I think the interesting thing is what AB+CD actually means. If we treat the fraction of decisions made at X as the probability that "here is X", and likewise for Y, then AB+CD should be the expected payoff of this decision. Typically the best decision would be derived by maximizing it. But clearly that leads to wrong results, such as 4/9. So what's wrong?

My position is that AB+CD is meaningless. It is a fallacious payoff because self-locating probabilities are invalid. This also resolves the double-halfer problems. But let's leave that aside for the time being.

If I understand correctly, your position is that maximizing AB+CD is not correct, because when deciding we should be maximizing the payoff of runs instead of the payoff of this decision. Here I just want to point out that the payoff for runs (the planning utility function) does not use self-locating probabilities. You didn't say whether AB+CD is meaningful or not.

Aumann thinks AB+CD is meaningful; however, maximizing it is wrong. He pointed out that the decisions at X and at Y are causally disconnected, yet the two decisions ought to be the same: a symmetric Nash equilibrium. So the correct decision is a stable point of AB+CD. The problem is that when there are multiple stable points, which one is the optimal decision? The one with the highest AB+CD? Aumann says no. It should be the point that maximizes the planning payoff function (the function not using self-locating probabilities).

I am not convinced by this. First of all, it lacks a compelling reason; the explanation, "due to the absentmindedness", is ad hoc. Second, by his reasoning, AB+CD effectively plays no part in the decision-making process: the decision maximizing the planning utility function is always going to be a stable point of AB+CD, and it is always going to be chosen regardless of what value it gives for AB+CD. So the whole argument about AB+CD being meaningful lacks substantial support.
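As an aside, here is a minimal sketch of the two payoff structures. It uses the standard Piccione-Rubinstein payoffs (exit at X: 0, exit at Y: 4, continue through both: 1), which are not the payoffs behind the p=0, 7/30, 1/2 figures above, so the specific values differ; the only point is to show the planning payoff and the naive "AB+CD" action payoff recommending different things.

```python
# Sketch with the standard Piccione-Rubinstein absent-minded driver payoffs
# (exit at X: 0, exit at Y: 4, continue through both: 1). These are NOT the
# payoffs behind the p=0, 7/30, 1/2 figures discussed above; only the
# planning-vs-action structure is the point here.
import numpy as np

def planning_payoff(p):
    # Decide before driving: continue with probability p at each intersection.
    return 4 * p * (1 - p) + 1 * p ** 2

def naive_action_payoff(q, p):
    # "AB+CD": A = P(here is X), C = P(here is Y), computed from the prior plan p;
    # B, D = expected payoff from here onward if I continue with probability q
    # at this intersection and (naively) also at the other one.
    a = 1 / (1 + p)                      # self-locating probability of being at X
    b = 4 * q * (1 - q) + 1 * q ** 2     # payoff from X onward using q
    d = 4 * (1 - q) + 1 * q              # payoff from Y onward using q
    return a * b + (1 - a) * d

grid = np.linspace(0, 1, 100001)
p_plan = grid[np.argmax(planning_payoff(grid))]
print("planning optimal p =", round(float(p_plan), 3))          # ~2/3

# At the planning optimal, naively maximizing AB+CD over q recommends deviating:
q_naive = grid[np.argmax(naive_action_payoff(grid, p_plan))]
print("naive AB+CD maximizer q =", round(float(q_naive), 3))    # ~1/3, not ~2/3
```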


 

Comment by dadadarren on Absent-Minded Driver and Self-Locating Probabilities · 2021-08-17T18:56:58.995Z · LW · GW

Thank you for the input. I think point 4 is the key problem.

> The overall average payoff is not the simple combination of probability at X times payoff at X plus probability at Y times payoff at Y.

This is hard to justify with the usual probability calculations. Putting it into the context of the Sleeping Beauty problem, we get:

"The overall probability for Heads is not the simple combination of probability "today is Monday" times p(Heads|Monday) plus the probability "today is Tuesday" times p(Heads|Tuesday)."

This is typically denied by most, and it is the Achilles' heel of double-halving.

Comment by dadadarren on Absent-Minded Driver and Self-Locating Probabilities · 2021-08-17T18:08:56.702Z · LW · GW

> The "action optimal" reasoning assumes that the driver applies it at every intersection

This is a pretty obvious assumption that is reflected in Aumann's observation 2: "The driver is aware he will make (has made) an identical decision at the other intersection too." I do not see any reason to challenge that. But if I understand correctly, you do.

> which means that it's only worth considering under the assumption that the driver changed their mind from p=0 to p=1/2 sometime before the first intersection

I disagree that this can be referred to as a change of mind. The derivation of the action optimal is a process independent of the derivation of the planning optimal. Maybe you mean the driver can only coordinate on the planning optimal due to the absentmindedness, similar to Aumann's reasoning. But then again, you don't seem to agree with his observation 2. So I am not entirely sure about your position.

If you are saying there is no reason for the driver to change his decision from the planning stage since there is no new information, then we are making the same point. However, for the driver, the "no new information" argument applies not only to the first intersection but to all intersections, so again I am not sure why you stress the first intersection. And then there is the tension between "no new information, so do not change the decision" and "there are multiple action-optimal points with higher payoffs, so why not choose them", which I think lacks a compelling explanation.

> Maximizing that quantity is a mistake whether or not self-locating probabilities are used.

The p=1/2 is not found by maximizing the action utility function. It is derived by finding stable points/Nash equilibria; p=1/2 is one of them, as is p=0. Among these stable points, p=1/2 has the highest expected utility. In comparison, the planning utility function does not use self-locating probabilities, and maximizing it gives the planning optimal, which is uncontroversially useful.

Comment by dadadarren on Absent-Minded Driver and Self-Locating Probabilities · 2021-08-15T18:47:40.622Z · LW · GW

> the basis for 50/7 "action" expected value is that the driver might have previously switched strategies from the optimal one (p=0) to a poorer local maximum (p=1/2).

I don't think that is the basis. p=1/2, as one of the action optimals, is derived by finding a stable point of the action payoff function. The expected payoff is obtained by substituting p=1/2 into the action payoff. In this process, the planning optimal of p=0 was not part of the derivation, so it is not a "switch" of strategy per se. The fact that I may have already driven through some intersections is an inherent part of the problem (absentmindedness); any mixed strategy (CONTINUE with some probability) would have to face that. It is not special to action optimals like p=1/2.

Furthermore, if we are considering the action payoff function (i.e. the one using probabilities of "here is X/Y/Z"), then p=1/2 is not an inferior local maximum; at the very least it is a better point than the planning optimal p=0. Also, as long as he uses the action payoff function, the driver should indeed apply the same analysis at every intersection and arrive at p=1/2 independently, i.e. it is consistent with observation 2: "The driver is aware he will make (has made) an identical decision at the other intersection too."

I agree using p=1/2 is a mistake; as you have pointed out, it is especially obvious at the first intersection. My position is that this mistake is due to the action payoff function being fallacious, because it uses self-locating probability. This is as opposed to Aumann's explanation: that the driver cannot coordinate on p=1/2 because, due to the absentmindedness, they can only coordinate at the planning stage.

Comment by dadadarren on A Simplified Version of Perspective Solution to the Sleeping Beauty Problem · 2021-07-25T20:47:56.654Z · LW · GW

The double-halfer logic you just described, not conditionalizing on self-locating information unless it rejects a possible world (like seeing blue rejects TT in Loaria's example), is called the "halfer rule" by Rachael Briggs. It has obvious shortcomings, very well countered by Michael Titelbaum in "An Embarrassment for Double-Halfers" and by Vincent Conitzer in "A Devastating Example for the Halfer Rule".

My position is different from any (double) halfer argument that I know of. I suggest that perspectives cannot be reasoned about or explained; they are defined by the subjective. So if we want to use "today" as a specific day in the logic, then we have to imagine being the subject waking up in the experiment. Here "today" is a primitively defined moment. Because it is primitive, there is no way to assign any probability to "today is the first day" or "today is the second day". I'm arguing that self-locating probabilities like these simply cannot exist, unlike other double-halfer camps that think self-locating probability exists yet try to come up with special updating rules for self-locating information.

So there are a few points not consistent with my position. You said experiencing "blue" is not a random event, but I think it is. Imagine waking up during the experiment as the first person: before checking the color, I understand the time is "today", a moment primitively defined. I do not know the color for today because it depends on today's coin toss, a random event. After seeing blue I know today's toss is H, but I know nothing about the toss of "the other day". So the probability of both coins having the same result remains at 1/2. If you are interested in my precise position on self-locating probabilities, check out my page here.

In this analysis, whether "today" is the first or the second day was not part of the consideration. However, if you really wish to dig into it, here is the analysis: if today is the first day then the two possibilities are HT and HH; if today is the second day then the two possibilities are HH and TH. In each case, the two are equally probable. But once again, there is no probability for "today is the first day" or "today is the second day"; it is a primitive reasoning starting point that cannot be analyzed.

Comment by dadadarren on Practical anthropics summary · 2021-07-09T16:17:29.172Z · LW · GW

"If there are no issues of exact copies, or advanced decision theory, and the questions you're asking aren't weird, then use SIA. "

So, practically, FNC? I understand that FNC and SIA converge when the reference class is so restrictive that it contains only the one observer. But I find counterarguments like this quite convincing.

Comment by dadadarren on The SIA population update can be surprisingly small · 2021-07-09T16:07:47.241Z · LW · GW

Does "population" in this passage and "population" in presumptuous philosopher have different meanings?

It seems here by "population difference" is kind of like density. How likely we are going to find aliens (on other planets). But in presumptuous philosopher it meant overall number. T2 does have a trillion more observers, yet it does not explain how much of that is due to higher density and how much is due to a larger universe.

Comment by dadadarren on Anthropics in infinite universes · 2021-07-09T15:51:47.018Z · LW · GW

I find the use of P(X|Y) with radius r centered on location l very refreshing, though I have a different focus. Here, by letting the radius approach infinity, the choice of center location becomes irrelevant. I'm more interested in what happens if the radius is not infinite: what if it is quite small, or even approaching zero? What is the location centered on then?

I think the location ought to be centered on us, where we are in spacetime, and the radius would represent the extent of our observations. r approaching zero could represent a state of ignorance, a prior state before making any observations about the universe. Then the location centers on the primitive concepts of self and now. This way, one's own existence is always prior knowledge.

Comment by dadadarren on Should VS Would and Newcomb's Paradox · 2021-07-09T00:33:38.725Z · LW · GW

I guess that is our disagreement. I would say not taking the money requires some serious modification to causal analysis (e.g. retro-causality). You think there doesn't need to be any; it is perfectly resolved by Simpson's paradox.

Comment by dadadarren on Should VS Would and Newcomb's Paradox · 2021-07-09T00:28:55.068Z · LW · GW

I think I am kind of getting where our disagreement lies. You agree that "all choices are illusions". By this, there is no point in thinking about "how should I decide"; we can discuss what kind of decision-maker would benefit most in this situation, which is the "outsider perspective". Obviously, one-boxing decision-makers are going to be better off.

The controversy arises if we reason as the first person when facing the two boxes: regardless of the content of the opaque box, two-boxing should give me 1000 dollars more. The causal analysis is quite straightforward. This seems to contradict the first paragraph.

What I am suggesting is that the two lines of reasoning are parallel to each other; they are based on different premises. The "god's-eye view" treats the decision-maker as an ordinary part of the environment, like a machine, whereas the first-person analysis treats the self as something unique: a primitively identified, irreducible perspective center, i.e. THE agent, as opposed to part of the environment (similar to how a dualist agent considers itself). Here free will is a premise. I think they are both correct, yet because they are based on different perspectives (thus different premises) they cannot be mixed together (kind of like how deductions from different axiomatic systems cannot be mixed). So from the first-person perspective, I cannot take into consideration how Omega has analyzed me (like a machine) and thus filled the box. For the same reason, from the god's-eye view, we cannot imagine being the decision-maker himself facing the two boxes and choosing.

If I understand correctly, what you have in mind is that those two approaches must be put together to arrive at a complete solution. Then the conflict must be resolved somehow, and it is done by letting the god's-eye view dominate over the first-person approach. This makes sense because, after all, treating oneself as special does not seem objective. Yet that would deny free will, which would call all causal decision-making processes into question. Also, this brings up a metaphysical debate over which is more fundamental: reasoning from the first-person perspective, or reasoning objectively?

I bring up anthropics because I think the exact same thing, mixing reasoning from different perspectives, leads to the paradoxes in that field. If you do not agree with treating perspectives as premises and keeping the two approaches separate, then there is indeed little connection between that and Newcomb's paradox.

Comment by dadadarren on Should VS Would and Newcomb's Paradox · 2021-07-07T20:34:43.609Z · LW · GW

Well, in my defense, you didn't specify how Omega is 99.9% accurate either. But that does not matter. Let me change the question to fit your framework.

Say I get this feeling for some "easily read" people: I am about 51% accurate on them in both directions, and it isn't correlated with how certain they themselves are about taking the money. Now, suppose you are one of the "easily read" people and you know it. After putting the envelope in your pocket, would you also take the 1000 dollars on the table? Would rejecting it make you richer?

Comment by dadadarren on Should VS Would and Newcomb's Paradox · 2021-07-07T20:15:50.834Z · LW · GW

“ You are in front of two boxes ..... you believe you can two-box and both boxes will be filled”

No... that is not the first-person decision. I do not think that if I choose to two-box both boxes will be filled. I think the two boxes' contents are predetermined; whatever I choose can no longer change what is already inside. Two-boxing is better because it gives me 1000 dollars more. So my decision is right regardless of whether the second box is empty or not.

Outsiders and the first person give different counterfactuals even when facing the same outcome. Say the outcome is two-boxing with the second box empty. The outsider would think the counterfactual is making the machine (the decision-maker) always one-box, so that the second box is filled. The first person would think the counterfactual is having chosen only the second box, which is empty.

This same phenomenon, facing the same outcome while giving different counterfactuals, is the reason for perspective disagreement in anthropics.

Comment by dadadarren on Are coincidences clues about missed disasters? It depends on your answer to the Sleeping Beauty Problem. · 2021-07-06T20:02:48.110Z · LW · GW

Based on my experience, most halfers nowadays are actually double-halfers. However, not everyone agrees on why. So I am just going to explain my approach.

The main point is to treat perspectives as fundamental and to recognize that in first-person reasoning, indexicals such as "I", "here", and "now" are primitive. They have no reference class other than themselves. So self-locating probabilities such as "the probability that now is Monday" are undefined. This is why there is no Bayesian update.

This also explains other puzzles of halferism, e.g. robust perspectivism: two parties sharing all information can give different answers to the same probability question. It is also immune to the counterarguments against double-halfers pointed out by Michael G. Titelbaum.

Comment by dadadarren on Should VS Would and Newcomb's Paradox · 2021-07-06T19:33:56.291Z · LW · GW

I don't think Omega being a perfect predictor is essential to the paradox. Assume you are playing this game with me, and say my prediction is only 51% accurate. I will fill an envelope according to the prescribed rule, read you, and then give you the envelope (box B). After you put it in your pocket, I put 1000 dollars on the table. Do you suggest that not taking the 1000 dollars will make you richer? If you think you should take the 1000 in this case, then how good would I need to be for you to give that up? (Somewhere between 51% and 99.9%, I presume.) I do not see a good reason for any such cutoff.

I think the underlying rationale for rejecting two-boxing is to deny first-person decision-making in that particular situation, e.g. not conducting the causal analysis when facing the 1000 dollars. That is your strategy: commit to taking one box only, let Omega read you, and stick to that decision.

Comment by dadadarren on Should VS Would and Newcomb's Paradox · 2021-07-06T19:06:47.282Z · LW · GW

They will observe the same result. Say the result is that the opaque box is empty.

From a first-person perspective, if I had chosen this box only then I would have gone empty-handed.

From an outsider's perspective, making the decision-maker a one-boxer would cause the box to be filled with 1 million dollars.

This "disagreement" is due to the two having different reasoning starting points. In anthropics, the same reason leads to robust perspectivism. I.E. Two people sharing all their information can give different answers to the same probability question.