Anthropic Reasoning and Perspective-Based Arguments

post by dadadarren · 2020-09-01T12:36:41.444Z · LW · GW · 59 comments


This pandemic has allowed me some time to finish up my anthropic arguments. The complete argument is now accessible on my website, https://www.sleepingbeautyproblem.com

My approach is a Perspective-Based Argument (PBA). I think anthropic problems often end up paradoxical because a critical aspect of reasoning has been consistently overlooked. I propose that perspectives are an integral part of reasoning. If an argument seems to be purely objective and does not appear to originate from any particular agent, that typically means it is formulated from a god's eye perspective, i.e. a view from nowhere.

PBA can be expressed as follows:

  1. A perspective and its center are primitively identified. The center cannot be derived, nor is it the outcome of any event. E.g. if Caesar ponders his perspective center and asks "Why am I Caesar?", there is no logical explanation other than "it just feels so". The perspective center is a reasoning starting point, very much like an axiom or a postulate.

  2. Indexicals, such as "I", "now", and "here", are references to the perspective center. Each of them points to a different aspect of it: "I" refers to the agent at the perspective center, "now" to the time, and "here" to the location.

  3. Due to their relationship with the perspective center, indexicals are logically unique. E.g. the concept of "I" and the concepts of other physical persons are incomparable. In plain language, it just means "to me, I am inherently special."

  4. Indexicals' uniqueness is perspective-bounded. So the person referred to as "I" from my perspective is not inherently unique from your viewpoint. If we reason as an impartial outsider, i.e. with a god's eye view, then no particular person/time would be unique. Due to this indifference, an explanation is needed when attention is given to a particular person/time. That explanation would be, conceptually speaking, a sampling process.

  5. Because perspectives are reasoning starting points, logic from different perspectives must not be mixed, just as propositions from different axiomatic systems cannot be used together. A valid argument must be formulated from one consistent perspective.

  6. Anthropic paradoxes treat all observers/moments with indifference, yet their arguments focus on "I" or "now" without any justification. The indifference is valid from a god's eye view while the special focus is valid from the first-person view. They are conflicting since they are based on different perspectives.

  7. It might be tempting to try to validate this conflict. The easiest way would be to regard indexicals such as "I" and "now" as the outcomes of some fictional sampling process. That leads to common approaches such as SSA and SIA (and FNC in a less obvious manner). However, it is unjustified: perspective centers are primitively identified, not the outcome of any event.

  8. There is no valid reference class for indexicals. Consequently, self-locating probabilities (the probabilities of indexicals being a particular member of some proposed reference class) are also invalid concepts. Examples include the probability that "today" is Monday/Tuesday in the Sleeping Beauty Problem, and the prior probability distribution of "my" birth rank among all humans in the Doomsday Argument.

  9. Perspective disagreement over probability, even while sharing all information, can exist in anthropic problems. It happens when the probability's underlying meaning depends on the answerer's perspective, just like the question "Am I a man?", whose answer depends on who the responder is.

Based on the above, PBA leads to the following conclusions:

  1. Double-halving in the Sleeping Beauty Problem, without being un-Bayesian.
  2. The Doomsday Argument is false.
  3. The Presumptuous Philosopher is wrong.
  4. In the Simulation Argument, the probability of me being simulated is an invalid notion.
  5. The idea of the fine-tuned universe is invalid.

Obviously I am biased. However, I genuinely believe PBA starts with plausible postulates and then explains all the paradoxes in this field without ever being ad hoc. If anything, I hope people will pay more attention to anthropic arguments other than the common approaches of SSA, SIA, and FNC.


comment by [deleted] · 2020-09-02T00:27:41.682Z · LW(p) · GW(p)
> The idea of the fine-tuned universe is invalid.

Could you elaborate? What's the paradox that's being dissolved here? As far as I know SSA does not indicate a fine-tuned universe, just that our existence doesn't give us a clue about how likely life is to arise in any universe/planet.

Replies from: dadadarren
comment by dadadarren · 2020-09-02T13:18:30.351Z · LW(p) · GW(p)

I think whether SSA suggests life is more likely to arise on other planets depends on the reference class chosen. For example, if the reference class is all observers in the multiverse, then I am more likely to be in a populous universe, i.e. we should expect life to be more common than the observed evidence suggests.

According to PBA, analyzing the fundamental parameters of the universe based on their compatibility with life is an egocentric act. We pay attention to life because that is what we are. This reasoning is perspective-dependent. If you ask a perspective-based question such as "why is everything compatible with my existence?", then you must accept a perspective-based answer: "Because you can only ever find yourself existing." That is essentially the weak anthropic principle (WAP).

On the other hand, if we want a scientific explanation of the fundamental parameters, we must reason objectively/impartially. That means giving up the self-attention rooted in our first-person perspective. We must accept that life is not inherently logically significant to the universe, and recognize that the WAP is not a scientific explanation of the fundamental parameters.

The fine-tuning argument is false because it asks the perspective-based question "why is everything compatible with my existence?" and then demands an impartial/objective answer, effectively assuming we are logically significant to the universe. That is why it always ends up with teleological conclusions (the universe is designed to support life, etc.).

My complete argument, including a rebuttal to Leslie's firing squad can be found here: https://www.sleepingbeautyproblem.com/about-fine-tuned-universe/

Replies from: None
comment by [deleted] · 2020-09-02T14:13:56.662Z · LW(p) · GW(p)

Would you agree that, given that the multiverse exists (verified by independent evidence), the WAP is sufficient to explain the fundamental parameters?

Replies from: dadadarren
comment by dadadarren · 2020-09-02T15:26:09.112Z · LW(p) · GW(p)

First of all, I am pessimistic about finding evidence of the multiverse. That being said, if we take the multiverse as given, the WAP is still not the complete picture, because there are two separate questions here, and the WAP answers only one of them. Let me show this with an example.

Say the subject is my parents' marriage. There are two ways to think about it. One way is to take my first-person view and ask the perspective-dependent question "why (do I find) they married each other?" Here a WAP-type answer is all that's needed, because if they hadn't, I wouldn't exist. However, if the question is formulated impartially/objectively (e.g. from a god's eye view), "why did they marry each other?", then it calls for an impartial answer, maybe a causal model. The WAP doesn't apply here. The key is to keep reasoning from different perspectives separate.

Back to the fundamental parameters: the WAP explains why we find the parameters compatible with our existence. Yet that is not the scientific (impartial) explanation of their values. (If the multiverse is confirmed, the scientific answer could be that they are just random.) If we do not recognize the importance of perspective to reasoning, we mix the above two questions and treat them as one problem. By doing so, teleological conclusions can always be drawn. Instead of the fine-tuned universe, they would just argue for a fine-tuned multiverse, which has already been done by intelligent design proponents IIRC.

Replies from: None
comment by [deleted] · 2020-09-03T02:41:32.785Z · LW(p) · GW(p)

Thanks, that cleared up a lot.

comment by avturchin · 2020-09-01T15:17:59.594Z · LW(p) · GW(p)

Could you explain more how you get from your premises to your conclusions, e.g. that the simulation argument is false?

Replies from: dadadarren
comment by dadadarren · 2020-09-01T23:39:54.719Z · LW(p) · GW(p)

Sure. In the Simulation Argument, the probability of me being simulated is a self-locating probability. Self-locating probabilities are invalid concepts according to PBA, as their formulations require reasoning from multiple perspectives. The complete argument (with a thought experiment) against self-locating probability can be found on this page: https://www.sleepingbeautyproblem.com/part-3-self-locating-probability/

Specifically, the simulation argument treats the fraction of simulated observers as the probability that "I" am simulated. It considers the indexical "I" as a random sample from the implied reference class (the reference class includes all observers with human-like experience, simulated AND base-level organic). It needs a god's eye view to be indifferent to all observers while also needing my first-person view to identify and focus the analysis on "I". Such a perspective mix is invalid according to PBA.

Replies from: avturchin
comment by avturchin · 2020-09-02T01:02:55.354Z · LW(p) · GW(p)

If the self-locating probability view is invalid, should I always use only the god's eye view?

Replies from: dadadarren
comment by dadadarren · 2020-09-02T12:17:56.416Z · LW(p) · GW(p)

There is no restriction on which view to take. You can choose to reason from your natural first-person perspective. You can reason from other people's (or other things') perspectives. We can even imagine a perspective such as a god's eye view and reason from there. What PBA argues is that once you choose a perspective/view, stick with it for the entire analysis. It's like an axiomatic system: we can't derive that a triangle's internal angles sum to 180 degrees in Euclidean geometry and then use that result in elliptic geometry.

Self-locating probabilities are invalid because they need both the first-person view AND the god’s eye view to formulate.

Replies from: avturchin
comment by avturchin · 2020-09-02T17:24:34.560Z · LW(p) · GW(p)

Your intuition seems reasonable, but what about situations where I have to make a choice based on self-locating beliefs?

Replies from: dadadarren
comment by dadadarren · 2020-09-02T18:12:51.356Z · LW(p) · GW(p)

The answer is simple yet unsatisfying. In those situations, assuming the objective is simple self-interest, there is no rational choice to be made.

If we assume the objective is the combined interest of a proposed reference class, and we further assume every single agent in the reference class follows the same decision theory, then there would be a rational choice. However, that does not correspond to the self-locating probability. It corresponds to a probability that can be consistently formulated from the god's eye view. E.g. the probability that a randomly chosen observer is simulated rather than the probability that "I" am simulated. Those two are distinctly different unless we mix the perspectives and accept some kind of anthropic assumption such as SSA or SIA.

Replies from: avturchin
comment by avturchin · 2020-09-02T18:53:21.940Z · LW(p) · GW(p)
> the probability that a randomly chosen observer is simulated rather than the probability that "I" am simulated.

But if a randomly chosen observer is simulated, and I am a randomly chosen observer, shouldn't I be simulated?


Another way to reason here - in a situation where we can't make a rational choice - is the "meta-doomsday argument" which I discussed before: I assume that both alternatives have equal probabilities, based on logical uncertainty about self-locating beliefs. E.g. it gives 5/12 for Sleeping Beauty (the average of the halfer's 1/2 and the thirder's 1/3).

Replies from: TAG, dadadarren
comment by TAG · 2020-09-04T11:16:20.300Z · LW(p) · GW(p)

What does it mean metaphysically to be a randomly chosen observer? Who's doing the choosing? Does it mean that all the other counterparts are zombies?

Replies from: avturchin
comment by avturchin · 2020-09-04T12:02:56.307Z · LW(p) · GW(p)

Practically, it means a difference in the expected probabilities of future observations. What is your opinion on these questions?

Replies from: TAG
comment by TAG · 2020-09-04T15:07:57.517Z · LW(p) · GW(p)

You didn't answer the question as stated.

If you don't know what the ontology of random selection is, how can you predict experience from it?

My opinion is that there are well-defined multiversal theories, and poorly defined ones.

Replies from: avturchin
comment by avturchin · 2020-09-06T22:25:29.057Z · LW(p) · GW(p)

Ok, updated my world model and now I think that:

There is no randomness in the metaphysical sense: everything possible exists.

However, there is a relation inside an observer which looks like randomness: for any thought "this is a dog" there are a billion possible different observations of different dogs. In some sense it looks like there are a billion observers in the reference class of dog-seers. This relation between a macro interpretation and its micro variants is similar to entropy; it is numerical and can be regarded as probability for practical purposes.

Replies from: TAG
comment by TAG · 2020-09-07T10:00:12.986Z · LW(p) · GW(p)

> There is no randomness in the metaphysical sense: everything possible exists.

Do you have a reason for believing that?

Replies from: avturchin
comment by avturchin · 2020-09-07T13:19:03.579Z · LW(p) · GW(p)

There are several plausible, not mutually exclusive theories which imply the existence of everything.

If the universe is, for whatever reason, infinite, then everything possible exists. If the MWI is true, everything possible exists. If Boltzmann brains are possible, again, all possible observers exist. If Tegmark's mathematical universe is possible, again, all possible observers exist.

Moreover, the fact that I exist at all implies a very large number of attempts to create an observer, including something like 10^500 universes with different physical laws, which itself implies the existence of some unlimited source of attempts to create different things.

Replies from: TAG
comment by TAG · 2020-09-07T17:19:28.784Z · LW(p) · GW(p)

> There are several plausible, not mutually exclusive theories which imply the existence of everything.

They could all be wrong.

> Moreover, the fact that I exist at all implies a very large number of attempts to create an observer, including something like 10^500 universes with different physical laws, which itself implies the existence of some unlimited source of attempts to create different things

Not without many other assumptions.

Replies from: avturchin
comment by avturchin · 2020-09-07T17:50:00.358Z · LW(p) · GW(p)

There is also a metaphysical argument which does not depend on any empirical data, so it is less likely to be wrong. It may be more difficult to explain, but I will try.

I call the argument "the unboundedness of nothingness". It goes as follows:

1. The world as we see it appeared from nothing via some unknown process.

2. "Nothing" doesn't have any properties by definition, so it doesn't have a counter of worlds which appeared from it.

3. Thus if it creates one world, it will produce infinitely many of them, because its ability to create worlds can't be exhausted or stopped.

Or, in other words: if everything-that-exists had finite size and its growth were limited by some force, there would be a contradiction, as such a force would not be a part of everything-that-exists. Thus such a force doesn't exist.

Replies from: TAG, TAG
comment by TAG · 2020-09-08T15:48:25.580Z · LW(p) · GW(p)
> 1. The world as we see it appeared from nothing via some unknown process.
> 2. "Nothing" doesn't have any properties by definition, so it doesn't have a counter of worlds which appeared from it

Absolute metaphysical "nothing" also has no powers and no properties, so it had no power or property of universe creation. (Popular accounts of cosmology talk about universes appearing from nothing, but that is a loose usage of language.)

Replies from: avturchin
comment by avturchin · 2020-09-08T16:10:19.945Z · LW(p) · GW(p)

Ok, I offered you three independent lines of reasoning which imply that everything possible exists (physical theories, self-sampling logic similar to the presumptuous philosopher, and the idea that if the Big Bang happened once it should happen uncountably many times).

Also, if only a limited number of things existed, there would have to be an ontological force which prevents them from popping into existence - and given that we exist, we know such popping is possible. The only thing which could limit the number of appearances is God. Bingo, we just got a new proof of God's existence!

But jokes aside, we obviously can't factually prove the existence of everything, as it is unobservable, but we can use logical uncertainty to estimate the probability of the claim. It is much more probable that everything possible exists, as there are three independent ways to argue for it; and if we assume the opposite, we have to invent some "limiting force" similar to God, which has a low a priori probability.

Based on these, my confidence in "everything possible exists" is 80-90 percent.

Replies from: TAG
comment by TAG · 2020-09-08T19:20:01.178Z · LW(p) · GW(p)

> Ok, I offered you three independent lines of reasoning which imply that everything possible exists

You never compared or contrasted with any small/single universe theory.

> but we can use logical uncertainty to estimate the probability of the claim. It is much more probable that everything possible exists,

On the same theme, you can't say how much probability mass multiversal theories have, without knowing how much single universal theories have.

> Based on these, my confidence in "everything possible exists" is 80-90 percent

On the same theme, how can that be a meaningful number when you have never even thought about the rival theories?

> Also, if only a limited number of things existed, there would have to be an ontological force which prevents them from popping into existence

Everything is based on assumptions. You are making an "anything will happen so long as it is not prevented" assumption. Many philosophers in the early modern period made an opposite assumption ... that nothing can happen without Sufficient Reason.

Replies from: avturchin
comment by avturchin · 2020-09-09T13:13:02.961Z · LW(p) · GW(p)

Any reasoning is based on some assumptions, and that is not a problem. We may list these assumptions and convert them into constraints of the model (with some probabilities).

Ok, let's try to prove the opposite.

Firstly, Kant in the "Critique of Pure Reason" explored the topic of the universe's infinity in space and time and found that both propositions (finite and infinite) could be equally well proved, from which he concluded that the question can't be settled and is beyond human knowledge. However, Kant suggested in the margins one more proof of modal realism (I cite from memory): "If a thing is possible in all respects, there is no difference between it and a real thing".

The strongest argument against the existence of everything is the non-randomness of our experiences. If I am randomly selected from all possible minds, my observations should be very chaotic, as most possible minds are just random. There are several counter-arguments here: that chains of observer-moments converge to less random minds, that different minds have different measure, that the selection process of self-aware minds is a source of anti-randomness, that we in fact are random but can't observe it, or that the internal structure of an observer is something like a convolutional neural net where randomness is concentrated in the inputs and "order" in the outputs. I will not elaborate on these arguments here, as it would take very long.

Another line of reasoning is connected with the idea of actuality. In it, only me-now is real, and everything else is just possible. This line of reasoning is plausible, but it is even weirder than modal realism.

Then again, there is the idea of a (Christian) God who creates only a few worlds. Improbable.

On the EA Forum last year, the following proof of the finiteness of the universe was suggested:

"1) Finite means that available compute in the quantum theoretic sense in our future light cone is finite.

2) The Bekenstein bound says the information in a region is bounded proportional to area.

3) The universe's expansion is accelerating, so there is a finite region of space that determines our future light cone.

4) Quantum mechanics is reversible, so the information of our future light cone is finite.

5) Only finite compute can be done given a finite information bound without cycling."

But it applies only to our universe, not to other universes.

Replies from: TAG, TAG
comment by TAG · 2020-09-09T18:20:16.395Z · LW(p) · GW(p)

> The strongest argument against the existence of everything is the non-randomness of our experiences. If I am randomly selected from all possible minds, my observations should be very chaotic, as most possible minds are just random. There are several counter-arguments here: that chains of observer-moments converge to less random minds, that different minds have different measure, that the selection process of self-aware minds is a source of anti-randomness, that we in fact are random but can't observe it, or that the internal structure of an observer is something like a convolutional neural net where randomness is concentrated in the inputs and "order" in the outputs. I will not elaborate on these arguments here, as it would take very long

There seems to be a common pattern where you start off with an assumption that mispredicts experience, and then make a further assumption to fix the situation. But that's one step backwards, one step forwards. You end up with a more complex theory than one that takes one step forward and just predicts experience.

Replies from: avturchin
comment by avturchin · 2020-09-10T19:04:59.624Z · LW(p) · GW(p)

It looks like you think that modal realism is false and that not everything possible exists. What is the argument which convinced you of that?

Replies from: TAG
comment by TAG · 2020-09-11T14:49:07.065Z · LW(p) · GW(p)

I haven't said much about the object-level issue. I'm inclined to agree with the OP that anthropic probability doesn't work. I haven't seen you argue against small/single worlds, except to quote a probability!

comment by TAG · 2020-09-09T17:51:28.312Z · LW(p) · GW(p)

> Any reasoning is based on some assumptions, and that is not a problem

All reasoning is based on assumptions and it's a problem, because it makes it hard to converge on beliefs, or ever settle questions.

There's a partial solution, in that not all assumptions are equal, and not all numbers of assumptions are equal.

That's a fairly traditional version of Occam's Razor, based on minimising the number and maximising the likelihood of assumptions.

> Kant suggested in the margins one more proof of modal realism (I cite from memory): "If a thing is possible in all respects, there is no difference between it and a real thing".

Umm... well, a suggestion, not a proof.

comment by TAG · 2020-09-08T15:45:24.591Z · LW(p) · GW(p)

All arguments depend on assumptions, and yours is no exception.

For one thing, you are assuming fairly strong realism about time. That's not a feature of all theories, or even of all multiversal theories. Tegmark's mathematical multiverse struggles to explain time as a subjective phenomenon.

Replies from: avturchin
comment by avturchin · 2020-09-08T16:14:16.434Z · LW(p) · GW(p)

Actually, I didn't assume realism about time; the language we use just works this way. Popping into existence may relate to Boltzmann brains, which don't have time.

Replies from: TAG
comment by TAG · 2020-09-08T19:06:33.237Z · LW(p) · GW(p)

Boltzmann brains that have some sort of ongoing experience of a stable universe are very problematic, too.

Replies from: avturchin
comment by avturchin · 2020-09-08T19:17:13.363Z · LW(p) · GW(p)

They could form chains, as in dust theory or its mathematical formalism here: https://arxiv.org/abs/1712.01826

Replies from: TAG
comment by TAG · 2020-09-09T11:34:06.377Z · LW(p) · GW(p)

Prima facie, Boltzmann brain theories don't predict experience. People sometimes try to fix that problem by making additional assumptions about consciousness, leveraging the fact that no one knows how consciousness works.

comment by dadadarren · 2020-09-02T20:40:12.635Z · LW(p) · GW(p)

It may seem very natural to say "I" am a randomly chosen observer (from some proposed reference class). But keep in mind that this is an assumption. PBA suggests that assumption is wrong. And if we reason from one consistent perspective, such assumptions are unnecessary.

Replies from: avturchin
comment by avturchin · 2020-09-03T19:58:30.460Z · LW(p) · GW(p)

Ok, let's look at a real-world example: "drivers in the next lane are going faster", suggested by Bostrom. It is true from the observer's point of view but not true from the god's eye view.

Replies from: dadadarren
comment by dadadarren · 2020-09-04T14:54:55.386Z · LW(p) · GW(p)

"The drivers in the next lane are going faster" is true both from a driver's first-person view and from a god's eye view. However, none of those two are self-locating probabilities. This is explained by PBA's position on self-locating probabilities, by the link mentioned above.

The lane assignment can be regarded as an experiment. The lane with more vehicles assigned to it moves slower. Here, from a god's eye view, if a random car is selected, then the probability of it being from the slow lane is higher. From a driver's first-person view, "I" and the other drivers are in symmetrical positions in this lane-assigning experiment, so the probability of me being in the slow lane is higher. According to PBA, both probabilities are valid. However, they are not the same concept, even though they have the same value. (This point has been illustrated by Question 1 and Question 2 in the link above, and by the toy simulation sketched below.)
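To make the two calculations concrete, here is a minimal Python sketch of a toy version of the lane-assignment experiment. The two-lane model with an odd number of cars (so there is never a tie) is an assumption made purely for illustration:

```python
import random

def lane_simulation(trials=100_000, n_cars=9):
    """Toy model: each of n_cars independently picks one of two lanes;
    the lane that ends up with more cars is the 'slow' lane.
    An odd n_cars guarantees there is no tie."""
    slow_random_car = 0  # god's eye view: a randomly selected car
    slow_me = 0          # first-person view: car 0 plays the role of "I"
    for _ in range(trials):
        lanes = [random.randrange(2) for _ in range(n_cars)]
        slow_lane = 0 if lanes.count(0) > n_cars // 2 else 1
        if lanes[random.randrange(n_cars)] == slow_lane:
            slow_random_car += 1
        if lanes[0] == slow_lane:  # "I" am symmetric with every other car
            slow_me += 1
    print(f"P(random car is in the slow lane) ~ {slow_random_car / trials:.3f}")
    print(f"P('I' am in the slow lane)        ~ {slow_me / trials:.3f}")

lane_simulation()  # both estimates agree and both exceed 1/2
```

Both counters estimate the same number, matching the point above: the god's-eye probability and the first-person probability are conceptually different yet share a value in this experiment.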

However, neither of them is a self-locating probability in the anthropic context. Some anthropic reasoning suggests there is an innate reference class for indexicals like "I". E.g. SSA assumes "I" can be considered a randomly chosen human from all humans. This requires both the first-person view to identify "I" and a god's eye view to do the choosing. It does not depend on any experiment. Compare this with the driver's first-person view above: there, the reference class is all drivers on the road, and it is defined by the lane-assigning experiment. It does not even matter whether the other drivers are humans. They could all be pigs and they would still be in symmetrical positions with me. PBA argues that self-locating probabilities are invalid. (This point has been demonstrated by Question 3 in the link above.)

Since we are discussing Nick Bostrom's position: he made the explicit statement in "The Mysteries of Self-Locating Belief and Anthropic Reasoning" that an experiment is unnecessary for defining the reference class; we can always treat "I" as the result of an imaginary sampling process. This is in direct conflict with my PBA. According to PBA, anthropics is not an observation selection effect, just a matter of recognizing the perspective of reasoning.

Lastly, you are clearly interested in this topic. I just find the questions you raised have already been covered by the argument presented on my website. I can only recommend you give it a read, because this question-and-answer mode of communication is disproportionately effort-heavy on my part. Cheers.

comment by ike · 2020-09-05T02:45:41.830Z · LW(p) · GW(p)

I don't agree with your argument against self-locating uncertainty.

I define such questions in terms of expectations. So your Question 2 is meaningful because it represents your expectations as to what you'll see on the label. Question 3 is likewise meaningful, because it represents expectations for anything that causally depends on who the original was - e.g. you ask Omega, "Was I the original?"

If you suppose there's literally no record anywhere, and therefore no difference in expectation, then I might agree that it's meaningless. But anthropic questions are typically not like that.

Replies from: dadadarren, ike, ike
comment by dadadarren · 2020-09-07T19:41:39.861Z · LW(p) · GW(p)

I am not sure about "expectations" in this context. If you mean mathematical expectations, i.e. the arithmetic means of large independent repetitions, then, as I have demonstrated with the frequentist argument, the relative frequency does not converge on any value, so the expectation does not exist for self-locating probabilities. If "expectation" here just means that the two alternatives, being the original vs the clone, are meaningful facts, then I agree with that assessment. In fact, I argue facts about the perspective center are so meaningful that they are axiomatic. So there is no rational way to assign probabilities to propositions regarding them.

My argument against self-locating probability does not depend on the observer never being able to find out the answer. It does not hinge on the lack of records or differentiable observables. The answer (e.g. "original" vs "clone") could be on the back of a piece of paper lying right in front of me. There is still no way to make a probability out of what it says.

If you are familiar with the Many-Worlds Interpretation (MWI), its proponents often use self-locating probability to explain the empirical randomness of quantum mechanics. The argument typically starts with experiments with highly symmetrical outcomes (e.g. measuring the spin of an electron along a perpendicular axis). Then, after the experiment but before observing the outcome, it is argued that the probability of "me" being in a particular branch must be 1/2. However, one counter-argument is to question the validity of this claim. Why does there have to be a probability at all? Why can't I just say "I don't know"? Sean Carroll (one of the most famous supporters of the MWI) calls it "the simple-minded argument, yet surprisingly hard to counter" (not verbatim). In the MWI construct, the experiment outcome is definitively observable. Yet that does not automatically justify assigning a probability to it.

PBA starts with the plausible assumption of the importance of perspectives, the invalidity of self-locating probability is one of its conclusions. I think it is much less ad hoc than simply making the judgment call of saying there should/shouldn't be a probability in those situations. If we say there should be a probability to it, then it comes to the question of how. SSA or SIA, what counts as an observer, which reference class to use in which case etc. There's one judgment call after another. Paradoxes ensue.

Regarding the individual analyses of the paradoxes, I understand your position. If you do not agree with the invalidity of self-locating probabilities, you will not agree with the takedowns. That is the nature of PBA. There is no flexibility in the argument, such as the choice of reference classes in other schools of thought. Yet I would consider that an advantage rather than a weakness.

Replies from: ike
comment by ike · 2020-09-07T20:16:21.368Z · LW(p) · GW(p)

Expectations are subjective and Bayesian.

>The answer (e.g. "original" vs "clone") could be on the back of a piece of paper lying right in front of me.

I don't understand why you think question 2 is meaningful, but question 3 is not, in that case. If it's meaningful to ask what Omega labelled you, why isn't it meaningful to ask what Omega wrote on the paper in front of you?

>there is no rational way to assign probabilities to propositions regarding them

Bayesianism is perfectly capable of assigning probabilities here. You haven't actually argued for this claim, you're just asserting it.

>However, one counter-argument is to question the validity of this claim. Why does there have to be a probability at all? Why can't I just say "I don't know"?

You can, of course, do this for any question. You can refuse to make any predictions at all. What's unclear is why you're ok with predictions in general but not when there exist multiple copies of you.

>If we say there should be a probability to it, then it comes to the question of how. SSA or SIA, what counts as an observer, which reference class to use in which case etc. There's one judgment call after another. Paradoxes ensue.

I don't see any paradoxes. SIA is the natural setup, and observer is defined as any being that's subjectively indistinguishable from me. We don't need reference classes. I *know* that I'm subjectively indistinguishable from myself, so there's no need to consider any beings that don't know that. There are no judgement calls required.

Replies from: dadadarren
comment by dadadarren · 2020-09-07T21:51:19.247Z · LW(p) · GW(p)

I don't think I can give further explanations other than the ones already said. But I will try.

As for the difference between Questions 2 and 3, the fundamental difference is that for Question 3 the probability cannot be formulated from a single perspective; it requires assuming an innate reference class for the indexical "I". Both points distinguish it from Question 2, which concerns a random/unknown experiment. Again, the argument has nothing to do with whether Omega can or cannot give definitive and differentiable answers to either of them.

> Bayesianism is perfectly capable of assigning probabilities here. You haven't actually argued for this claim, you're just asserting it.

I asserted only that perspective is an important starting point of reasoning, like an axiom. Arguments that cannot be formulated from one consistent perspective are therefore invalid. That includes SIA, SSA, FNC, any notion of a reference class for indexicals, and of course self-locating probabilities. I have also shown why self-locating probabilities cannot be formulated with a frequentist model. The same assertion about perspective's axiomatic importance also leads to other conclusions, such as rational perspective disagreement. Whether or not my position is convincing is up for debate. But I feel it is unfair to say I just asserted self-locating probabilities' invalidity without arguing for it.

> You can, of course, do this for any question. You can refuse to make any predictions at all. What's unclear is why you're ok with predictions in general but not when there exist multiple copies of you.

I am not refusing to make a prediction. I am arguing that in these cases there is no rational way to make a prediction. And keep in mind, the nature of probability in the MWI is a major ongoing debate. How a completely known experiment with a deterministic outcome can yield a probability is not easily justified. So I think self-locating probabilities' validity should be at least debatable. Therefore I do not think your assertion that "Bayesianism is perfectly capable of assigning probabilities here" can be regarded as an obvious truth.

> I don't see any paradoxes. SIA is the natural setup.

I think this is our fundamental disagreement. I do not think all anthropic paradoxes are settled by SIA. Nor do I think SIA is natural, whatever that means. And I am pretty sure there will be supporters of SIA who are unhappy with your definition of the reference class (or the lack thereof).

Replies from: ike
comment by ike · 2020-09-07T22:26:08.001Z · LW(p) · GW(p)
> As for the difference between Questions 2 and 3, the fundamental difference is that for Question 3 the probability cannot be formulated from a single perspective.

I still see no relevant difference between 2 and 3. For one, you're assuming a random choice can be made, but aren't explaining how. Maybe that random choice results in two universes, one of which has the original get assigned as RED and the other has the clone assigned as RED.

I don't think that probabilities are impossible, just because there are multiple copies of me. I don't think you've addressed the issue at all. You're pointing at supposed paradoxes without naming a single one that rules out such probabilities.

> And keep in mind, the nature of probability in the MWI is a major ongoing debate.

None of the scenarios we've been discussing involve MWI - that's more complicated because the equations are tricky and don't easily lend themselves to simple conceptions of multiple worlds.

> How a completely known experiment with a deterministic outcome can yield a probability is not easily justified.

Bayesianism does not require determinism to generate probabilities. Technically, it's just a way of turning observations about the past into predictions about the future.

> I do not think all anthropic paradoxes are settled by SIA.

Can you name one that isn't?

> And I am pretty sure there will be supporters of SIA who are unhappy with your definition of the reference class (or the lack thereof).

I don't know why I should care, even if that were the case. I have a firm grounding of Bayesianism that leads directly to SIA. Reference classes aren't needed:

https://wiki.lesswrong.com/wiki/Self-indication_assumption

> Notice that unlike SSA, SIA is not dependent on the choice of reference class, as long as the reference class is large enough to contain all subjectively indistinguishable observers.

I think your assertion that SIA requires reference classes just means you aren't sufficiently familiar with it. As far as I can tell, your only argument against self locating probability had to do with the issue of reference classes.

Replies from: dadadarren
comment by dadadarren · 2020-09-08T16:26:35.289Z · LW(p) · GW(p)

At this point, I think it might be more productive to list our differences rather than try to convince each other.

1. I say a probability that cannot be formulated from a single perspective is invalid. You say it doesn't matter.

BTW, you said Questions 2 and 3 are no different if both labeling outcomes actualize in two parallel universes. Yes, in that case Question 2 and Question 3 are the same: they are both self-locating probabilities and both invalid according to PBA. However, what you are describing is essentially the MWI. Given that I have already argued against the MWI's origin of probability, that is not a counter-argument; it is just a restatement of what I have said.

2. I say SIA is based on an imagined sampling from a reference class. You say it is not.

Here I am a little upset about the selective quoting of the LessWrong wiki to fit your argument. Why not quote the definition of SIA? "All other things equal, an observer should reason as if they are randomly selected from the set of all possible observers." The set of all possible observers being selected from is the reference class. Also, you have misinterpreted the part you quoted: "SIA is not dependent on the choice of reference class, as long as the reference class is large enough to contain all subjectively indistinguishable observers." It is saying the choice of reference class, under some conditions, does not change the numerical value of the probability, because the effect of the choice cancels out in problems such as the Sleeping Beauty Problem and the Doomsday Argument - not that there is no reference class to begin with. Also, just a suggestion: given the ongoing nature of the debate on anthropic principles and the numerous paradoxes, try not to take a single source, even the LessWrong wiki, as given fact.

3. You think SIA solves all anthropic paradoxes. I think not.

To name a few problems that I can think of right away: Does every single observation we make confirm the MWI? (Refer to the debate between Darren Bradley and Alastair Wilson if you are unfamiliar.) Does my simple existence confirm the existence of the multiverse? Applying SIA to the simulation argument: wouldn't my existence alone confirm there are numerous ancestor simulations? In that case, wouldn't "I" be almost certainly simulated? Contrary to the simulation argument, applying SIA would suggest the great filter is most likely ahead, so we should be pessimistic about reaching the technologically mature state described by the simulation argument. In Dr. Evil and Dub, what conclusion would SIA make? Is the brain arms race correct? Does my existence already confirm the arms race has already happened? Can all the above questions be satisfactorily answered using one consistent choice of reference class?

Based on the opinions you previously gave on some paradoxes, it seems you think they have idiosyncratic explanations. That is not wrong per se, but it does seem ad hoc. And if the idiosyncratic reasons are so important, are the paradoxes really solved by SIA or by those individual explanations?

Replies from: ike, ike
comment by ike · 2020-09-08T17:19:38.673Z · LW(p) · GW(p)

>Does every single observation we make confirm the MWI?

>Does my simple existence confirm the existence of the multiverse?

These are equivalent to presumptuous philosopher, and my answer is the same - if my existence is more likely given MWI or multiverse, then it provides Bayesian evidence. This may or may not be enough to be confident in either, depending on the priors, which depends on the simplicity of the theory compared to the simplicity of competing theories.

>Wouldn't my existence alone confirm there are numerous ancestor simulations?

No, there's a lot of implausible assumptions involved there. I don't think the measure of ancestor simulations across the multiverse is significant. If we had strong evidence that the measure was high, then that probability would go up, all else being equal.

The great filter itself relies on assumptions about base rates of life arising and thriving which are very uncertain. The post you link to says:

>Assume the various places we think the filter could be are equally likely.

Also, we should be conditioning on all the facts we know, which includes the nature of our Earth, the fact we don't see any signs of life on other planets, etc. It's not at all clear that a future filter is more likely once all that is taken into account.

>In Dr. Evil and Dub, what conclusion would SIA make?

The conclusion would be that they're equally likely to be the clone vs the original. Whether they should act on this conclusion depends on blackmail game theory.

> Is the brain arms race correct?

I don't know which paradox you're referring to.

I don't think any of these paradoxes are resolved by ad hoc thinking. One needs to carefully consider the actual evidence, the Bayes factor of conditioning on one's existence, the prior probabilities, and use SIA to put it all together. The fact that this sometimes results in unintuitive results shouldn't be held against the theory, all anthropic theories will be unintuitive somewhere.

Replies from: cubefox
comment by cubefox · 2020-10-07T00:14:42.971Z · LW(p) · GW(p)

Sorry if this is somewhat unrelated to the discussion here, but I don't think the SIA Doomsday can be dismissed so easily.

> The great filter itself relies on assumptions about base rates of life arising and thriving which are very uncertain.

If we don't have overwhelming reason to think that the filter is in the past, or to think that there is no filter at all, SIA suggests that the filter is very, very likely in the future. SIA itself would, so to speak, be overwhelming evidence for a future filter; you would need overwhelming counter-evidence to cancel this out. Or you do a Moorean shift and doubt SIA, precisely because there apparently is no independent overwhelming evidence that such a filter is in the future. (Especially when we consider the fact that SIA pretty much rules out an AI as the future filter, since we not only see no aliens, we also see no rogue alien AIs. There is a separate post on this topic on her blog.) Or you doubt other details of the SIA Doomsday argument, but aside from SIA there aren't many, it seems.

Replies from: ike
comment by ike · 2020-10-07T00:46:53.354Z · LW(p) · GW(p)

> If we don't have overwhelming reason to think that the filter is in the past, or to think that there is no filter at all, SIA suggests that the filter is very, very likely in the future.

My current understanding is that the parameters don't imply the need for a future great filter to explain the Fermi paradox.

I don't think you need overwhelming evidence for this. SIA is only overwhelming evidence for a future filter if you already have overwhelming evidence that a filter exists beyond what we know of, which we don't.

Toy model: if a priori a filter exists 10% of the time, and this filter would prevent 99% of civilizations from evolving into humans OR prevent humans from becoming space civilizations (50% each), then there's 1800 worlds with no filter for every 200 worlds with a filter; 101 of those filter worlds contain humans and only two of those become space civilizations. So our probability of getting to space is 1802/1901.

If the probability a filter exists is 90%, then there's 200 worlds with no filter for every 1800 filter worlds. Out of the filter worlds, 909 contain humans. Out of those, 18 go to space. So the probability of us going to space is 218/1109.

You really do need overwhelming evidence that a filter exists / the filter is very strong before it creates overwhelming odds of a future filter.
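The counts in this toy model can be checked mechanically. Here is a small Python sketch of it; the 2000-world population is an arbitrary choice that makes every category a whole number:

```python
from fractions import Fraction

def p_reach_space(p_filter, total=2000):
    """Toy model from the comment above: a filter exists with probability
    p_filter and is equally likely to be a past filter (blocks 99% of
    worlds from ever evolving humans) or a future filter (blocks 99% of
    human worlds from reaching space)."""
    filtered = round(total * p_filter)
    no_filter = total - filtered
    past, future = filtered // 2, filtered // 2
    humans_past = past // 100                  # 1% slip past a past filter
    humans = no_filter + humans_past + future  # worlds containing humans
    space = no_filter + humans_past + future // 100
    # Condition on "humans exist", each human-containing world equally likely:
    return Fraction(space, humans)

print(p_reach_space(0.1))  # 1802/1901
print(p_reach_space(0.9))  # 218/1109
```

The same counts also give the conditional filter-location odds discussed below: among filter worlds that contain humans, 100 future vs 1 past in the first case, and 900 future vs 9 past in the second.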

Replies from: cubefox
comment by cubefox · 2020-10-07T23:38:55.242Z · LW(p) · GW(p)

I think you are right that when we are not very certain about the existence/strength of a great filter, SIA Doomsday loses a lot of its force. But I think the standard argument for the "great filter hypothesis" was always that a strong filter is extremely likely, because even if just a single civilization decides to colonize/sterilize the galaxy (e.g. via von Neumann probes), it could do so comparatively quickly. If it spreads at 1% of the speed of light, it takes 1 million years to colonize the whole Milky Way, which is a very short amount of time compared to the age of the galaxy or even of our solar system. Yet the Fermi paradox suggests the Milky Way is not colonized to a significant extent. So the expected number of civilizations in the galaxy is so low that we are likely one of the first, if not the first.

A different point about your toy model: Why do you assume 50% each for the filter being in the past/future? That seems to ignore SIA. The point of the SIA Doomsday argument is precisely that the filter, assuming it exists, is much, much more likely to be found in the future than in the past. Because SIA strongly favors possible worlds with more observers who could be us, and in a possible world with a past filter (i.e. an "early" or "middle" filter in Katja's post) there are of course very few such observers (the filter prevents them from coming into existence), while in a world with a late filter there are many more of them. (Indeed, SIA's preference for more observers who could be us seems to be unbounded, to the point that it makes it certain that there are infinitely many observers in the universe.)

Here is the link to the argument again: https://meteuphoric.com/2010/03/23/sia-doomsday-the-filter-is-ahead/

Replies from: ike
comment by ike · 2020-10-08T02:16:36.713Z · LW(p) · GW(p)

The standard argument for the great filter depends on a number of assumptions, and as I said, my current understanding is this standard argument doesn't work numerically once you set up ranges for all the variables. 

> The point of the SIA Doomsday argument is precisely that the filter, assuming it exists, is much, much more likely to be found in the future than in the past.

Yes, this is true in my model - conditioning on a filter in the first case yields 100 future filters vs 1 past filter, and in the second case yields 900 future filters vs 9 past filters. There's a difference between a prior before you know if humans exist and a posterior conditioning on humans existing. 

> Indeed, SIA's preference for more observers who could be us seems to be unbounded, to the point that it makes it certain that there are infinitely many observers in the universe.

This depends on your measure over the set of possible worlds, but one can plausibly reject infinities in any possible world or reject the coherency of such. As I've written elsewhere, I'm a verificationist and don't think statements about what is per se are verifiable or meaningful - my anthropic statements are meaningful insofar as they predict future experiences with various probabilities. 

Replies from: cubefox
comment by cubefox · 2020-10-08T22:31:48.535Z · LW(p) · GW(p)

> The standard argument for the great filter depends on a number of assumptions, and as I said, my current understanding is this standard argument doesn't work numerically once you set up ranges for all the variables.

You are talking about the calculations by Sandberg, Drexler, and Ord, right? In a post where these results were discussed there was an interesting comment by avturchin:

https://www.lesswrong.com/posts/MdGp3vs2butANB7zX/the-fermi-paradox-what-did-sandberg-drexler-and-ord-really?commentId=4FpJaopxXsf7FRLCt [LW(p) · GW(p)]

It seems that SIA says that the parameters of the Drake equation should be expected to be optimized for observers-which-could-be-us to appear, but exactly this consideration was not factored into the calculations of Sandberg, Drexler, and Ord. Which would mean their estimations for the expected number of civilizations per galaxy are way too low.

> Yes, this is true in my model - conditioning on a filter in the first case yields 100 future filters vs 1 past filter, and in the second case yields 900 future filters vs 9 past filters. There's a difference between a prior before you know if humans exist and a posterior conditioning on humans existing.

Then what I don't quite understand is why the calculation in your toy model seems so different from the calculation in Katja's post. In her calculation there is a precise point where SIA is applied, while I don't see such a point in your calculation. Also, the original Bostrom SIA ("SSA+SIA") does, as Dadadarren pointed out, involve a reference class whose effect then "cancels out", while you are, as you pointed out, trying to avoid reference classes to begin with.

Maybe your version of SIA is closer to something like FNC than to the original SIA. Perhaps you should try to give your version a precise definition. The core idea, as far as my limited understanding goes, is this: If hypothesis H makes my existence M more likely, then my existence M also makes H more likely, because P(M|H) > P(M) implies P(H|M) > P(H). This of course doesn't work if P(M) is 1 to begin with, as you would expect if M means something like the degree of belief I have in my own existence, or in "I exist". So we seem to be forced to consider a "non-centered" version without indexicals, i.e. "cubefox exists", which plausibly has a much lower probability than 1 from the god's eye perspective.

If we call my indexical proposition M_i and the non-indexical proposition M_c, it becomes clear that the meaning of 'M' in "P(M|H) > P(M) implies P(H|M) > P(H)" is ambiguous. If it means "P(M_c|H) > P(M_c) implies P(H|M_i) > P(H)", then it is no longer a theorem of probability theory (see the two forms written out below). So how is it justified? If we take M_i to simply imply M_c, then P(M_c) would also be 1, and the first inequality (P(M_c|H) > P(M_c)) would again be false.
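Written out side by side, the unambiguous theorem and the mixed reading look like this (using the M_i/M_c notation from above):

```latex
% Bayes' theorem, for a single proposition M with P(M) > 0:
P(H \mid M) = \frac{P(M \mid H)\, P(H)}{P(M)}
\quad\Longrightarrow\quad
\bigl[\, P(M \mid H) > P(M) \iff P(H \mid M) > P(H) \,\bigr]

% The mixed reading substitutes the non-indexical M_c on the left and the
% indexical M_i on the right, so it is no longer an instance of the theorem:
P(M_c \mid H) > P(M_c) \;\overset{?}{\Longrightarrow}\; P(H \mid M_i) > P(H)
```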

Maybe I'm off the track here, but Dadadarren seems to be at least right in that the relation between indexical and non-indexical propositions is both important and not straightforward.

> This depends on your measure over the set of possible worlds, but one can plausibly reject infinities in any possible world or reject the coherency of such.

Now that seems to me a surprising statement. As far as I'm aware, the most popular guess among cosmologists about the size of the universe is that it is infinitely large - that it has the topology of a plane rather than of a sphere or a torus. Why and how would we plausibly reject this widely held possibility? On the contrary, it seems that SIA presumptuously requires us to categorically reject the sphere and the torus possibilities on pure a priori grounds, because they imply a universe finite in size and thus with way too few observers.

The only a priori reason against a "big" universe I can think of is one with infinite complexity. By Ockham's razor, it would be infinitely unlikely. If simplicity is low complexity, and complexity is information content, then the complexity C is related to probability P with C(x) = -log_{2}P(x), or P(x) = 2^-C(x). If C(x) is infinite, P(x) is 0.

But an infinitely large universe doesn't mean infinite complexity, at least not in the information content sense of "incompressibility". An infinite universe may arise from quite simple laws and initial conditions, which would make its information content low, and its probability relatively high.

> As I've written elsewhere, I'm a verificationist and don't think statements about what is per se are verifiable or meaningful - my anthropic statements are meaningful insofar as they predict future experiences with various probabilities.

Well, SIA seems to predict that we will encounter future evidence which would imply a finite size of the universe with probability 0. Which is just what you required. While the silly cosmologists have not ruled out a finite universe, we philosophers just did so on pure a priori grounds. :)

Replies from: ike, ike
comment by ike · 2020-10-09T00:05:42.504Z · LW(p) · GW(p)

> It seems that SIA says that the parameters of the Drake equation should be expected to be optimized for observers-which-could-be-us to appear, but exactly this consideration was not factored into the calculations of Sandberg, Drexler, and Ord. Which would mean their estimations for the expected number of civilizations per galaxy are way too low.

I don't think this is correct. Look at page 6 of https://arxiv.org/pdf/1806.02404.pdf

SIA is a reason to think very low values of N are unlikely, since we would be unlikely to exist if N were that low. But the lowest values of N aren't that likely anyway - the probability of N<1 is around 33%, but the probability of N<10^-5 is around 15%. It seems there's at least a 10% chance that N is fairly close to 1, such that we wouldn't expect much of a filter. This should carry through to our posterior, such that there's a 10% chance that there's no future filter.

comment by ike · 2020-10-08T23:23:03.754Z · LW(p) · GW(p)

> You are talking about the calculations by Sandberg, Drexler, and Ord, right?

Yes. Will read that post and get back to you.

> reference class whose effect then "cancels out", while you are, as you pointed out, trying to avoid reference classes to begin with.

I don't know that this is a meaningful distinction, being as both produce the same probabilities. All we need is a reference class large enough to contain anything that I might be / don't currently know that I am not. 

> Perhaps you should try to give your version a precise definition.

SIA is a prior over observers, once you have a prior over universes. It says that for any two observers that are equally likely to exist, you are equally likely to "be" either one (and corresponding weighting for observers not equally likely to exist). We take this prior and condition on our observations to get posterior probabilities for being in any particular universe as any particular observer. 

I'm not conditioning on "ike exists", and I'm not conditioning on "I exist". I'm conditioning on "My observations so far are ike-ish" or something like that. This rules out existing as anyone other than me, but leaves me agnostic as to who "I" am among the group of observers that also have had the same set of observations. And the SIA prior means that I'm equally likely to be any member of that set, if those members had an equal chance of existing. 
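For concreteness, that definition fits in a few lines of Python. The two-world example is hypothetical, chosen only to show the weighting at work:

```python
def sia_posterior(worlds):
    """SIA as described above: given, for each candidate universe, a prior
    and a count of observers subjectively indistinguishable from me,
    weight each universe by prior * observer_count, then normalize."""
    weights = {name: prior * n_obs for name, (prior, n_obs) in worlds.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Hypothetical example: two equally likely universes, one containing 1000
# observers indistinguishable from me and one containing a single one.
print(sia_posterior({"big": (0.5, 1000), "small": (0.5, 1)}))
# {'big': 0.999000999..., 'small': 0.000999000...}
```

The lopsided posterior for the "big" universe is the presumptuous-philosopher effect discussed earlier in the thread.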

> Why and how would we plausibly reject this widely held possibility?

If it's incoherent, it doesn't matter how many people believe it. 

> On the contrary, it seems that SIA presumptuously requires us to categorically reject the sphere and the torus possibilities on pure a priori grounds, because they imply a universe finite in size and thus with way too few observers.

You're smuggling in a particular measure over universes here. You absolutely need to do the math along with priors and justification for said priors, you can't just assert things like this. 

An infinite universe may arise from quite simple laws and initial conditions, which would make its information content low, and its probability relatively high.

It's not clear to me this counts as an infinite universe. It should repeat after a finite amount of time or space or both, which makes it equivalent to a finite universe being run on a loop, which doesn't seem to count as infinite. That's assuming all of this talk is coherent, which it might not be - our bandwidth is finite and we could never verify an infinite statement.

Well, SIA seems to predict that we will encounter future evidence which would imply a finite size of the universe with probability 0.

You need to specify the measure, as above. I disagree that this is an implication of SIA. 

Replies from: cubefox
comment by cubefox · 2020-10-11T00:00:52.400Z · LW(p) · GW(p)

SIA is a reason to expect very low values of N to be unlikely, since we would be unlikely to exist if N was that low. But the lowest values of N aren't that likely - probability of N<1 is around 33%, but probability of N<10^-5 is around 15%. It seems there's at least a 10% chance that N is fairly close to 1, such that we wouldn't expect much of a filter. This should carry through to our posterior such that there's a 10% chance that there's no future filter.

I'm not quite sure I understand you here... Let me unpack this a little.

SIA is a reason to expect very low values of N to be unlikely, since we would be unlikely to exist if N was that low.

Yes, but not only that: according to SIA our existence is also a reason to expect high values of N to be likely, since we are more likely to exist if N is higher. But Sandberg, Drexler, and Ord (SDO) do not include this consideration. Instead, they identify the probability P(N<1) with the probability of us being alone in the galaxy (repeatedly, e.g. on page 5). But that's simply a mistake. P(N<1) is just the probability that a galaxy like ours is empty. (Or rather, it is close to that probability, which is actually about e^-N, as they say in footnote 3.) But the probability of us being alone in the galaxy, i.e. that no other civilizations besides us exist in the galaxy, is rather the probability that at most one civilization exists in the galaxy, given that at least one civilization (us) exists in the galaxy. To calculate this would amount to applying SIA. Which they didn't do. This mistake arguably breaks the whole claim of the paper.
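As a minimal sketch of this distinction, assume (in line with the paper's footnote 3) that the number of civilizations in a galaxy is Poisson-distributed with mean N; the three-point prior below is made up for illustration. Whether the existence update should weight by P(at least one civilization | N), as here, or by N itself (as SIA proper would) is part of the dispute above:

```python
from math import exp

def p_empty(n):
    # what SDO compute: P(no civilizations in the galaxy | N = n) = e^-n
    return exp(-n)

def p_alone_given_we_exist(n):
    # what is arguably needed: P(exactly one | at least one),
    # assuming a Poisson-distributed number of civilizations with mean n
    return n * exp(-n) / (1 - exp(-n))

def update_on_existence(prior):
    """Reweight a discrete prior over n by P(at least one civilization | n).
    prior: list of (n, probability) pairs."""
    weighted = [(n, p * (1 - p_empty(n))) for n, p in prior]
    total = sum(p for _, p in weighted)
    return [(n, p / total) for n, p in weighted]

# Hypothetical three-point prior over n, before and after the update:
prior = [(1e-5, 0.15), (0.5, 0.18), (10.0, 0.67)]
print(update_on_existence(prior))
# nearly all mass shifts away from the tiny-n hypothesis once we
# condition on at least one civilization (us) existing
```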

It seems there's at least a 10% chance that N is fairly close to 1, such that we wouldn't expect much of a filter. This should carry through to our posterior such that there's a 10% chance that there's no future filter.

What do you mean by "fairly close to 1" here? SDO calculate densities, so we would need a range here. Maybe 0.9<N<1.1? 0.99<N<1.01? 0.5<N<1.5? I don't even know how to interpret such fractional intervals, given that we can't have a non-integer number of civilizations per galaxy.

The whole probability distribution for N should have been updated on the fact that N is at least 1. (They do consider an update later in the paper, but on the Fermi observation, i.e. that we don't see signs of ETI, not on our existence.)

I'm not conditioning on "ike exists", and I'm not conditioning on "I exist". I'm conditioning on "My observations so far are ike-ish" or something like that. This rules out existing as anyone other than me, but leaves me agnostic as to who "I" am among the group of observers that also have had the same set of observations. And the SIA prior means that I'm equally likely to be any member of that set, if those members had an equal chance of existing.

This sounds interesting. The "or something like that" is crucial of course... Last time I thought your version of SIA might actually be close to FNC (Full Non-indexical Conditioning) by Radford Neal, which is mostly equivalent in results to SIA. But your "My observations so far are ike-ish" does have an indexical ("my") in it, while FNC ignores all indexical evidence. (This is initially a big advantage, since it is an open question how beliefs with indexicals, so-called self-locating credences, should be modelled systematically in Bayesian reasoning, which leads to the need for additional ad-hoc principles like SSA or SIA.) As far as I understand it, FNC conditions rather on something like "Someone has exactly this state of mind: [list of ike's total evidence, including memories and current experience]". Note that this is not a self-locating probability. But FNC (in contrast to SIA) leads to strange results when there are so many observers in the universe that it becomes virtually certain that there is someone (not necessarily you) with the same mind as you, or even certain that there exists an observer for any possible state of mind.

Maybe you know this already, but if not and if you are interested: in Neal's original paper there is a rather compact introduction to FNC from pages 5 to 9, i.e. sections 2.1 to 2.3. The rest of the paper is not overly important. The paper is here: https://arxiv.org/abs/math/0608592 I'm saying this because you seem to have some promising intuitions which Neal also shares (e.g. he also wants to do away with the artificial "canceling out" of reference classes in SIA), and because FNC is, despite its problem with large universes, in some way an objective improvement over SIA, since it basically falls out of standard Bayesian updating if you ignore indexical information, in contrast to principles like SSA or SIA.

But if your approach really needs indexicals, it still sounds plausible. Though there are some open questions related to indexicals. How should the unconditional probability of "My observations so far are ike-ish" be interpreted? For you, this probability is one, presumably. For me it is zero, presumably. But what is it from a god's-eye perspective? Is it undefined, because then "my" has no referent, as dadadarren seems to suggest? Or can the "my" be replaced? Maybe with "The observations of a random observer, who, according to ike's evidence, might be ike, are ike-ish"?

This rules out existing as anyone other than me, but leaves me agnostic as to who "I" am among the group of observers that also have had the same set of observations.

Actually this is a detail which doesn't seem quite right to me. It seems you are rather agnostic about who you are among the group of observers that, from your limited knowledge, might have had the same set of observations as you.

You're smuggling in a particular measure over universes here. You absolutely need to do the math along with priors and justification for said priors, you can't just assert things like this.

The priors are almost irrelevant. As long as an infinite universe with infinitely many observers has a prior probability larger than 0, being in such a universe is infinitely more likely than being in a universe with finitely many observers. And given that cosmologists apparently find an infinite universe the most plausible possibility, the prior should arguably be estimated as much higher than 0%; many of them, if they believe in an infinite universe, presumably put it above 50%. Let's assume an infinite universe (with infinitely many observers) and a finite universe are equally likely. Then the odds of being in the finite universe are, according to SIA, n:infinity, where n is the number of observers in the finite universe. We could weight these odds by almost any prior probabilities other than 50%/50% and the result wouldn't change: infinity weighted by any non-zero probability is still infinity, and n stays a finite number regardless. It will always be infinitely more likely to be in the universe with infinitely many observers. So there are only two possibilities: either the prior probability of a universe with infinitely many observers is not 0, in which case SIA says we live in such an infinite universe with probability 1; or the prior probability of an infinite universe is 0, in which case SIA leaves it at 0.
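The limiting behavior can be checked numerically, with hypothetical observer counts standing in for "infinite":

```python
# Checking the limit with hypothetical numbers: a finite universe with
# n observers against ever-larger stand-ins for an "infinite" universe.

def p_finite(n, m, prior_finite):
    """SIA posterior for the finite universe (n observers) vs. a
    universe with m observers, given prior probability prior_finite."""
    w_fin = prior_finite * n
    w_inf = (1 - prior_finite) * m
    return w_fin / (w_fin + w_inf)

n = 10**24  # hypothetical observer count of the finite universe
for m in (10**30, 10**60, 10**120):
    print(m, p_finite(n, m, prior_finite=0.99))
# Even with a 99% prior on the finite universe, the posterior goes to 0
# as m grows; only a prior of exactly 0 would stop this.
```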

It's not clear to me this counts as an infinite universe. It should repeat after a finite amount of time or space or both, which makes it equivalent to a finite universe being run on a loop, which doesn't seem to count as infinite.

Why not? You might then have exact doppelgängers but you are not them. They are different persons. If you have a headache, your doppelgänger also has a headache, but you feel only your headache and your doppelgänger feels only his. If there are infinitely many of those doppelgängers, we have infinitely many persons. (A universe with infinite complexity would have doppelgängers too.) Apart from that, simple laws and initial conditions can lead to chaotic outcomes, which are indistinguishable from random ones, i.e. from ones with infinite information content. Consider the decimal expansion of pi. It is not periodic like a rational number's; it looks like a random number. Yet it can be generated with a very short algorithm. It is highly compressible, a random number is not, but this is the only qualitative difference. Other examples are cellular automata like Conway's Game of Life, or fractals like the Mandelbrot set. Both show chaotic, random-looking behavior from short rules/definitions. Such an infinite universe with pseudo-randomness might be nearly indistinguishable from one with infinite information content.
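As a concrete sketch of random-looking output from a short rule, here is Rule 30, an elementary cellular automaton (not one of the examples above; chosen only because it fits in a few lines):

```python
# A trivially short rule plus a one-bit initial condition yields famously
# random-looking output: the center column of Rule 30 is a well-known
# pseudo-random bit source, despite the tiny information content of the rule.

def rule30(width=79, steps=30):
    cells = [0] * width
    cells[width // 2] = 1  # a single "seed" cell: very low information content
    for _ in range(steps):
        print("".join("#" if c else " " for c in cells))
        # Rule 30: new cell = left XOR (center OR right), periodic boundary
        cells = [cells[i - 1] ^ (cells[i] | cells[(i + 1) % width])
                 for i in range(width)]

rule30()
```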

That's assuming all of this talk is coherent, which it might not be - our bandwidth is finite and we could never verify an infinite statement.

It depends on what you mean by "verify". If you mean "assign probability 1 to it", then almost nothing can be verified, not even that you have a hand. (You might be deceived by a Cartesian demon into thinking there is an external world.) If by "verify" you mean, as you suggested in your last comment, assigning some probability after gaining evidence, then this is just updating.

Replies from: ike
comment by ike · 2020-10-11T02:35:25.200Z · LW(p) · GW(p)

I don't even know how to interpret such fractional intervals, given that we can't have a non-integer number of civilizations per galaxy.

N is the average number of civilizations per galaxy. 

But the probability of us being alone in the galaxy, i.e. that no other civilizations besides us exist in the galaxy, is rather the probability that at most one civilization exists in the galaxy, given that at least one civilization (us) exists in the galaxy. To calculate this would amount to applying SIA.

I was going to agree with this, but I realize I need to retract my earlier agreement with this statement to account for the difference between galaxies and the observable universe. We don't, in fact, have evidence for the "fact that N is at least 1." We have evidence that the number of civilizations in the universe is at least one. But this is likely to be true even if the probability of a civilization arising on any given galaxy is very low. 

I think I agree with you that SIA means higher values of N are more likely a priori. But I'm not sure this leads to the overwhelming evidence of a future filter that you need, or much evidence for a future filter at all.

I'll also note that some of the parameters are already adjusted for such effects: 

As noted by Carter and McCrea [10] the evidential power of the early emergence of life on Earth is weakened by observer selection effects, allowing for deep uncertainty about what the natural timescale of life formation is

You've succeeded in confusing me, though, so I'll have to revisit this question at a later point. 

But what is it from a gods-eye perspective?

It doesn't seem meaningful to ask this. 

It seems you are rather agnostic about who you are among the group of observers that, from your limited knowledge, might have had the same set of observations as you.

If some observer only has some probability of having had the same set of observations, then they get a corresponding weight in the distribution. 

As long as an infinite universe with infinitely many observers has a prior probability larger than 0, being in such a universe is infinitely more likely than being in a universe with finitely many observers.

This breaks all Bayesian updates as probabilities become impossible to calculate. Which is a great reason to exclude infinite universes a priori. 

You might then have exact doppelgängers but you are not them.

I don't see any meaningful sense in which this is true. 

Such an infinite universe with pseudo-randomness might be nearly indistinguishable from one with infinite information content.

I don't know how this is relevant. 

It depends on what you mean by "verify".

I wrote two posts on this: https://www.lesswrong.com/posts/PSichw8wqmbood6fj/this-territory-does-not-exist [LW · GW] and https://www.lesswrong.com/posts/zm3Wgqfyf6E4tTkcG/the-short-case-for-verificationism. [LW · GW] I don't think ontological claims are meaningful except insofar as they mean a set of predictions, and infinite ontological claims are meaningless under this framework. 

Replies from: cubefox
comment by cubefox · 2020-10-11T16:39:29.329Z · LW(p) · GW(p)

N is the average number of civilizations per galaxy.

I was going to agree with this, but I realize I need to retract my earlier agreement with this statement to account for the difference between galaxies and the observable universe. We don't, in fact, have evidence for the "fact that N is at least 1." We have evidence that the number of civilizations in the universe is at least one. But this is likely to be true even if the probability of a civilization arising on any given galaxy is very low.

SDO treat N as the expected number of civilizations in the Milky Way, i.e. in our galaxy (page 2):

The Drake equation was intended as a rough way to estimate the number of detectable/contactable civilizations in the Milky Way

If they interpret N in this way, then N is at least 1. They didn't account for this fact in a systematic way, even if some parameter estimates should already include some such considerations. (From your quote I don't find it clear whether this is really the case. Also, SIA is a fairly new theory and as such unlikely to play a significant role in the historical estimates they looked at.)

But what is it from a gods-eye perspective?

It doesn't seem meaningful to ask this.

It just occurred to me that you still need some prior probability for your sentence which is smaller than 1. If you condition on "My observations so far are ike-ish" and this statement for you has unconditional probability 1, then conditioning on it has no effect. Conditioning on a probability 1 statement is like not conditioning at all. But what is this prior probability, and how could it be smaller than 1 for you? It seems to be necessarily true for you. I guess we are forced to consider some non-indexical (god's-eye) version of that statement, e.g. like the one I suggested in my last comment. Also, your characterization of (your version of) SIA was quite informal, so there is room for improvement. My personal goal would be to make SIA (or a similar principle) nothing more than a corollary of Bayesian updating, possibly together with a general theory of indexical beliefs.

If some observer only has some probability of having had the same set of observations, then they get a corresponding weight in the distribution.

Good idea. Maybe it is not just the probability that the hypothetical observer had the same observations; it's the probability that the hypothetical observer exists and had the same observations. Not only is it often a guess what observations observers made, but also how many of them exist. Also, I don't think "had the same observations" is quite right to characterize the "total evidence". Because there could be observers like a Swamp Man (or Boltzmann brain etc.) which have the same state of mind as you, and thus arguably the same total evidence, but whose memories formed just by accident and not because they actually made the experiences/observations they think they remember. So I think "has the same state of mind" is better to not exclude those freak observers to begin with, because we might be such a freak observer.

This breaks all Bayesian updates as probabilities become impossible to calculate.

I think you are referring to what is known as the measure problem in cosmology: What is the probability that a cow is two-headed if there are infinitely many one-headed and two-headed cows in the universe? Surely it is still much more probable that a cow is one-headed. There are apparently several solutions proposed in cosmology. For a universe which is spatially infinite, I would estimate the probability of a cow being one-headed by the ratio of the expected number of one-headed cows to the expected number of cows -- in a growing imaginary sphere around us. The sphere is of finite size and we take the probability of a cow being one-headed as the limit of the ratio as the size of the sphere goes towards infinity. Then surely the sphere at any finite size contains many more one-headed cows than two-headed cows (the latter are estimated at a much smaller number because two-headedness is not evolutionarily advantageous for cows). There are other proposed solutions. I think one can be optimistic here that probabilities are not impossible to calculate.
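Here is a toy version of the growing-sphere proposal, with hypothetical uniform densities; under uniformity the ratio is already constant at every radius, while clumpy but statistically homogeneous densities would fluctuate and converge:

```python
from math import pi

# A toy version of the growing-sphere construction (densities hypothetical).
# Both counts diverge as the radius grows, but their ratio is well-defined
# at every finite radius, and the limit is the proposed probability.

ONE_HEADED_PER_VOLUME = 1.0   # hypothetical density of one-headed cows
TWO_HEADED_PER_VOLUME = 1e-9  # much rarer, as argued above

def p_one_headed(radius):
    volume = 4.0 / 3.0 * pi * radius**3
    one = ONE_HEADED_PER_VOLUME * volume
    two = TWO_HEADED_PER_VOLUME * volume
    return one / (one + two)

for r in (1.0, 1e3, 1e9):
    print(r, p_one_headed(r))  # constant here; inhomogeneous densities converge
```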

Which is a great reason to exclude infinite universes a priori.

I think the measure problem is merely a practical problem for us. Which would be an instrumental reason not to consider infinite universes if we don't like to work on the measure problem (if only considering universes with finite size has higher utility for us). But we would need an epistemic reason, in contrast to an instrumental reason, to a priori exclude a possibility by assigning it probability 0. I think there are three types of epistemic reasons to do this:

  1. if we think that the idea of an infinite universe is logically contradictory (that seems not to be the case)

  2. if we think that an infinite universe is infinitely unlikely (That seems to be the case only for infinite universes with infinite information content. But infinite universes can plausibly have finite, and even quite low, information content.)

  3. if something to which we have direct epistemic access is not the case. I currently do not have a headache. Since we are perfectly competent in judging the contents of our mind, and a headache is in the mind, my probability of "I have a headache" is 0. (Unlike headaches and other observational evidence, infinite universes are not mental objects, so this option is also not viable here.)

To highlight the difference between practical/instrumental reasons/rationality and epistemic reasons/rationality: Consider Pascal's Wager. Pascal argued that believing in God has higher expected utility than not believing or being agnostic. Whether that argument goes through is debatable, but in any case it doesn't show that God exists (that his existence is likely). If subjectively assigning high probability to a hypothesis has high utility, that doesn't mean that this hypothesis actually has high probability. And the other way round.

Such an infinite universe with pseudo-randomness might be nearly indistinguishable from one with infinite information content.

I don't know how this is relevant.

You seemed to specifically object to universes with finite information content on grounds that they are just (presumably periodic) "loops". But they need not be any more loopy than universes with infinite information content.

I wrote two posts on this: https://www.lesswrong.com/posts/PSichw8wqmbood6fj/this-territory-does-not-exist [LW · GW] and https://www.lesswrong.com/posts/zm3Wgqfyf6E4tTkcG/the-short-case-for-verificationism [LW · GW]. I don't think ontological claims are meaningful except insofar as they mean a set of predictions, and infinite ontological claims are meaningless under this framework.

But you seem to be fine with anything on which you could possibly update. E.g. there could be evidence for or against the plane topology of the universe. The plane topology means the universe is infinitely large. And as I said, SIA seems to make the significant prediction that evidence which implies a finite universe has probability 0.

I know this opens a huge can of worms, but I also wanted to comment on this one:

By talking about the unseen causes of visible events, it is often possible for me to compress the description of visible events. By talking about atoms, I can compress the description of the chemical reactions I've observed. Sure, but a simpler map implies nothing about the territory.

If hypotheses (e.g. about the existence of hands and chairs and rocks and electrons and forces and laws) which assume the existence of things external to our mind greatly reduce the information content of our mental evidence, then those hypotheses are more likely to be true than a pure phenomenological description of the evidence itself. Because lower information content means higher a priori probability. If you entertained the hypothesis that solipsism is true, this would not compress your evidence at all, which means the information content of that hypothesis would be very high, which means it is very improbable. The map/territory analogy is not overly helpful here, I think. If by "map" you mean hypotheses, then simpler hypotheses do in fact (probabilistically) "imply" something about the world, because simpler hypotheses are more likely to be true.
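As a rough illustration of the compression point, one can use zlib's compressed length as a crude, computable stand-in for information content (the true algorithmic quantity is uncomputable, so this is only a proxy):

```python
import zlib, random

# Lawful "evidence" compresses far better than lawless evidence, so a
# 2^-bits style prior would strongly favor the law-governed description.

structured = bytes(i % 7 for i in range(10_000))               # simple law
random.seed(0)
lawless = bytes(random.randrange(256) for _ in range(10_000))  # no law

def description_length_bits(data):
    return 8 * len(zlib.compress(data, 9))

print(description_length_bits(structured))  # small: high a priori probability
print(description_length_bits(lawless))     # ~80000 bits: vanishing prior
```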

Another point: There are many people who say that the main task of science is not to make useful technology, or to predict the future, but to explain our world. If you have some evidence E and a hypothesis H, and that hypothesis is supposed to explain your evidence, then that explanation is correct if and only if the following is true:

E because H.

But the truth of any statement of the form "y because x" arguably implies the truth of x and y. So H must be true in order to correctly explain your evidence. If H is true, and H asserts the existence of things external to your mind (hands, chairs, laws etc.), then those things exist. Almost any hypothesis talks about objects external to your mind. In fact, we wouldn't even call beliefs about objects internal to our mind ("I have a headache", "I have the visual impression of a chair in front of me", "I have a memory of eating pizza yesterday") hypotheses at all; we would just call them "evidence". If no external things exist, then all "y because x" statements would be false.

I'm not sure about your argument involving the "level IV multiverse". I think it is equivalent to modal realism (everything which possibly exists, exists). I'm not sure whether the information content of that hypothesis is high or low. (It is infinite if we think of it as a long description of every possible world.) If the information content is very high, then the hypothesis is likely to be false, which would justify our belief that it is false. If it is in fact false, we have a justified true belief in the falsity of modal realism. Since this is not a Gettier case, we would then know that modal realism is false.

Replies from: ike
comment by ike · 2020-10-11T17:29:10.712Z · LW(p) · GW(p)

If they interpret N in this way, then N is at least 1.

No, N is a prior. You can't draw conclusions about what a prior is like that. N could be tiny and there could be a bunch of civilizations anyway, that's just unlikely. 

It just occurred to me that you still need some prior probability for your sentence which is smaller than 1.

Sure, prior in the sense of an estimate before you learn any of your experiences. Which clearly you're not actually computing prior to having those experiences, but we're talking in theory. 

My personal goal would be to make SIA (or a similar principle) nothing more than a corollary of Bayesian updating, possibly together with a general theory of indexical beliefs.

SIA is just a prior over what observer one expects to end up with. 

Maybe it is not just the probability that the hypothetical observer had the same observations; it's the probability that the hypothetical observer exists and had the same observations. Not only is it often a guess what observations observers made, but also how many of them exist.

I'm not sure what distinction you're drawing here. Can you give a toy problem where your description differs from mine?

So I think "has the same state of mind" is better to not exclude those freak observers to begin with, because we might be such a freak observer.

My usual definition is "subjectively indistinguishable from me", you can substitute that above. 

The sphere is of finite size and we take the probability of a cow being one-headed as the limit of the ratio as the size of the sphere goes towards infinity.

This is basically just downweighting things infinitely far away infinitely low. It's accepting unboundedness but not infinity. Unboundedness has its own problems, but it's more plausible than infinity. 

But we would need an epistemic reason, in contrast to an instrumental reason, to a priori exclude a possibility by assigning it probability 0.

I'm not assigning it probability 0 so much as I'm denying that it's meaningful. It doesn't satisfy my criterion for meaning. 

You seemed to specifically object to universes with finite information content on grounds that they are just (presumably periodic) "loops".

That's one objection among several, but the periodicity isn't the real issue - even without that it still must repeat at some point, even if not regularly. All you really have is an irrational set of ratios between various "states of the world"; calling that "infinity" seems like a stretch.

those hypotheses are more likely to be true

What do you mean by true here?

Because lower information content means higher a priori probability.

Probability is just a means to predict the future. Probabilities attached to statements that aren't predictive in nature are incoherent. 

If you entertained the hypothesis that solipsism is true, this would not compress your evidence at all, which means the information content of that hypothesis would be very high, which means it is very improbable.

The same thing is true of the "hypothesis" that solipsism is false. It has no information content. It's not even meaningful to say that there's a probability that it's true or false. Neither is a valid hypothesis. 

If no external things exist, then all "y because x" statements would be false.

The problem with this line of reasoning is that we commonly use models we know are false to "explain" the world. "All models are wrong, some models are useful". 

Also re causality, Hume already pointed out we can't know any causality claims. 

Also, it's unclear how an incoherent hypothesis can serve to "explain" anything. 

I think explanations are just fine without assuming a particular metaphysics. When we say "E because H", we just mean that our model H predicts E, which is a reason to apply H to other predictions in the future. We don't need to assert any metaphysical statements to do that. 

Replies from: cubefox
comment by cubefox · 2020-10-12T02:09:26.408Z · LW(p) · GW(p)

No, N is a prior. You can't draw conclusions about what a prior is like that. N could be tiny and there could be a bunch of civilizations anyway, that's just unlikely.

I just quoted the paper. It stated that N is the expected number of civilizations in the Milky Way. If that is the case, we have to account for the fact that at least one civilization exists. Which wasn't done by the authors. Otherwise N is just the expected number of civilizations in the Milky Way under the assumption that we didn't know we existed.

Sure, prior in the sense of an estimate before you learn any of your experiences. Which clearly you're not actually computing prior to having those experiences, but we're talking in theory.

"before you learn any experience"? I.e. before you know you exist? Before you exist? Before the "my" refers to anything? You seem to require exactly what I suspected: a non-indexical version of your statement.

SIA is just a prior over what observer one expects to end up with.

There are infinitely many possible priors. One would need a justification that the SIA prior is more rational than the alternatives. FNC made much progress in this direction by only using Bayesian updating and no special prior like SIA. Unfortunately there are problems with this approach. But I think those can be fixed without needing to "assume" some prior.

This is basically just downweighting things infinitely far away infinitely low.

All things in the universe get weighted, and all get weighted equally. Things just get weighted in a particular order: nearer things get weighted "earlier", so to speak (not in a temporal sense), but not with more weight.

It's accepting unboundedness but not infinity. Unboundedness has its own problems, but it's more plausible than infinity.

"Unboundednes" is means usually something else. A universe with a sphere or torus topology is unbounded but finite in size. I'm talking about a plane topology universe here which is both unbounded and infinitely large.

But you seem to have something like hyperreal numbers in mind when you talk about infinity. Hyperreal numbers include "infinite numbers" (the first is called omega) which are larger than any real number. But when cosmologists talk about a universe which is spatially infinite, they only say that for any positive real number n, there is a place in the universe which is at least n+1 light-years away. They do not say "there is something which is omega light-years away". They do not treat infinity as a (kind of) number. That's more of a game played by some mathematicians who sometimes like to invent new numbers.

I'm not sure what distinction you're drawing here. Can you give a toy problem where your description differs from mine?

You might be certain that 100 observers exist in the universe. You are not sure which one is you, but you regard one of the observers as twice as likely to be you as each of the others, so you weight it twice as strongly.

But you may also be uncertain about how many observers exist. Say you are equally uncertain about the existence of each of 99 observers, and twice as certain about the existence of a hundredth. Then you weight it twice as strongly. (I'm not quite sure whether this is right.)
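A sketch of the combined weighting, with made-up numbers; notably, the two cases described above come out the same under it:

```python
# Weight each candidate observer by
# P(that observer exists) * P(their observations match mine | they exist).

def credence(observers):
    """observers: list of (p_exists, p_matches) pairs; returns credences."""
    weights = [p_e * p_m for p_e, p_m in observers]
    total = sum(weights)
    return [w / total for w in weights]

# Case 1: all 100 certainly exist; the first is twice as likely to match.
case1 = [(1.0, 0.02)] + [(1.0, 0.01)] * 99
# Case 2: matching is certain; the first is twice as likely to exist.
case2 = [(1.0, 1.0)] + [(0.5, 1.0)] * 99

print(credence(case1)[0], credence(case2)[0])  # both ~0.0198
# The two cases yield the same credences, which may be why the two
# formulations are hard to tell apart.
```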

That's one objection among several, but the periodicity isn't the real issue - even without that it still must repeat at some point, even if not regularly.

Even in a finite universe there might be repetition. Possibly our universe is finite and contains not only Earth but also a planet we might call Twin-Earth, very far away from Earth. Twin-Earth is a perfect duplicate of Earth. It's even called "Earth" by twin-earthlings. If a person X on Earth moves only his left arm, Twin-X on Twin-Earth also moves only his left arm. But this is merely (perfect) correlation; there is no stronger form of dependence, like counterfactual dependence. If X had moved his right arm instead, Twin-X would still have moved only his left arm. This could not be the case if X and Twin-X were identical. Also, if X hurts his foot, Twin-X will also hurt his foot, but X will only feel the pain caused by X's foot and not the pain caused by the foot of Twin-X. They don't share a single mind.

All you really have is an irrational set of ratios between various "states of the world"; calling that "infinity" seems like a stretch.

I would rather say that it's a stretch to regard infinity as an ordinary number, as you are apparently doing. The limit view of infinity doesn't do this. "Infinity" then just means that for any real number there is another real number which is larger (or smaller).

those hypotheses are more likely to be true

What do you mean by true here?

What we usually mean. But you can remove "to be true" here and the meaning of the sentence stays the same.

Probability is just a means to predict the future.

We can perfectly well (and do all the time) make probabilistic statements about the present or the past. I suggest regarding probability not so much as a "means" but as a measure of uncertainty, where P(A)=1/2 means I am (or perhaps: I should be) perfectly uncertain whether A or not-A. This has nothing to do with predictions. (But as I said, the hypothesis of an infinite universe makes predictions anyway.)

Probabilities attached to statements that aren't predictive in nature are incoherent.

Where is the supposed "incoherence" here?

The best characterization of incoherence I know treats it as a generalization of logical contradiction: A and B are (to some degree) incoherent if P(A and B) < P(A)*P(B). Negative statistical dependence. I.e. each one is evidence against the other. But you seem to mean something else.

The same thing is true of the "hypothesis" that solipsism is false. It has no information content.

It is verified by just a single non-mental object. It has information content, just a very low one. Not as low as "something exists" (because this is also verified by mental objects) but still quite low. Only tautologies have no (i.e. zero) information content.

The problem with this line of reasoning is that we commonly use models we know are false to "explain" the world. "All models are wrong, some models are useful".

The common answer to that is that Newton's theory of gravity isn't so much wrong as it is somewhat inaccurate: a special case of Einstein's more accurate theory. A measure of (in)accuracy is generalization error in statistics. Low generalization error seems to be for many theories what truth is for ordinary statements. And where for ordinary statements we would say that A is "more likely" than B, for theories we would say that X has a lower expected generalization error than Y.

Also re causality, Hume already pointed out we can't know any causality claims.

Well, not only that! Hume also said that no sort of inductive inference is justified, probabilistic or not, so all predictions would be out of the window, not just ones about causal relationships. This is because the evidence is almost always consistent with lots of possible but incompatible predictions. I would say that an objective a priori probability distribution over hypotheses (i.e. all possible statements) based on information content solves the problem. For indexical hypotheses I'm not quite certain yet; maybe there is something similarly objective for an improved version of SIA. If there is no objective first prior, then Hume is right and verificationism is wrong. What you predict would rely on an arbitrary choice of prior probabilities.

I think explanations are just fine without assuming a particular metaphysics. When we say "E because H", we just mean that our model H predicts E, which is a reason to apply H to other predictions in the future. We don't need to assert any metaphysical statements to do that.

That doesn't work for many reasons. Some barometer reading predicts a storm, but it doesn't explain it. Rather there is a common explanation for both the barometer reading and the storm: air pressure.

Also, explanations ("because" statements) are asymmetric: if B because A, then not A because B. But prediction is symmetric: if A is evidence for B, then B is evidence for A. Because one is evidence for the other if both are positively probabilistically dependent ("correlated"): P(A|B) > P(A) implies P(B|A) > P(B). The rain predicts the wet street, so the wet street predicts the rain. The rain explains the wet street, but the wet street doesn't explain the rain.

There are even some cases where H explains E but H and E don't predict each other, i.e. they are not positively statistically dependent. These cases are known as Simpson's paradox.
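A toy joint distribution of this kind, with made-up patient counts:

```python
# Simpson's paradox: treatment raises recovery within each severity group,
# yet is negatively correlated with recovery overall, because severe cases
# get treated more often.

counts = {  # (severity, treated, recovered) -> number of patients
    ("mild", True, True): 18,   ("mild", True, False): 2,
    ("mild", False, True): 64,  ("mild", False, False): 16,
    ("severe", True, True): 32, ("severe", True, False): 48,
    ("severe", False, True): 4, ("severe", False, False): 16,
}

def recovery_rate(keep):
    sel = {k: v for k, v in counts.items() if keep(k)}
    return sum(v for (s, t, r), v in sel.items() if r) / sum(sel.values())

for group in ("mild", "severe"):
    print(group,
          recovery_rate(lambda k, g=group: k[0] == g and k[1]),      # treated
          recovery_rate(lambda k, g=group: k[0] == g and not k[1]))  # untreated
print("overall",
      recovery_rate(lambda k: k[1]),      # 0.50 for treated...
      recovery_rate(lambda k: not k[1]))  # ...but 0.68 for untreated
```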

Replies from: ike
comment by ike · 2020-10-12T03:01:26.223Z · LW(p) · GW(p)

I just quoted the paper. It stated that N is the expected number of civilizations in the Milky Way. If that is the case, we have to account for the fact that at least one civilization exists. Which wasn't done by the authors. Otherwise N is just the expected number of civilizations in the Milky Way under the assumption that we didn't know we existed.

The update we need to do is not equivalent to assuming N is at least one, because as I said, N being less than one is consistent with our experiences. 

"before you learn any experience"? I.e. before you know you exist? Before you exist? Before the "my" refers to anything?

Yes, it gets awkward if you try to interpret the prior literally. Don't do that, just apply the updating rules. 

There are infinitely many possible priors. One would need a justification that the SIA prior is more rational than the alternatives.

SIA as a prior just says it's equally likely for you to be one of two observers that are themselves equally likely to exist. Any alternative will necessarily say that in at least one such case, you're more likely to be one observer than the other, which violates the indifference principle. 

You might be certain that 100 observers exist in the universe. You are not sure which one is you, but you regard one of the observers as twice as likely to be you as each of the others, so you weight it twice as strongly.

But you may also be uncertain about how many observers exist. Say you are equally uncertain about the existence of each of 99 observers, and twice as certain about the existence of a hundredth. Then you weight it twice as strongly.

I'm not sure where my formulation is supposed to diverge here. 

"Infinity" then just means that for any real number there is another real number which is larger (or smaller).

Well, this is possible without even letting the reals be unbounded. For any real number under 2, there's another real number under 2 that's greater than it. 

We can perfectly well (and do all the time) make probabilistic statements about the present or the past.

And those statements are meaningless except insofar as they imply predictions about the future.

Where is the supposed "incoherence" here?

The statement lacks informational content. 

It is verified by just a single non-mental object.

I don't know what this is supposed to mean. What experience does the statement imply?

Low generalization error seems to be for many theories what truth is for ordinary statements.

Sure, I have no problem with calling your theory true once it's shown strong predictive ability. But don't confuse that with there being some territory out there that the theory somehow corresponds to. 

objective a priori probability distribution over hypotheses (i.e. all possible statements) based on information content

Yes, this is SIA + Solomonoff universal prior, as far as I'm concerned. And this prior doesn't require calling any of the hypotheses "true", the prior is only used for prediction. Solomonoff aggregates a large number of hypotheses, none of which are "true". 

Some barometer reading predicts a storm, but it doesn't explain it.

The reading isn't a model. You can turn it into a model, and then it would indeed explain the storm, while air pressure would explain it better, by virtue of explaining other things as well and being part of a larger model that explains many things simply (such as how barometers are constructed.) 

prediction is symmetric:

A model isn't an experience, and can't get conditioned on. There is no symmetry between models and experiences in my ontology. 

The experience of rain doesn't explain the experience of the wet street - rather, a model of rain explains / predicts both experiences. 

comment by ike · 2020-09-08T16:46:44.827Z · LW(p) · GW(p)

>However, what you are saying is essentially the MWI.

No it isn't - MWI has to do with quantum effects and that scenario doesn't involve any. You can't argue against MWI on the basis of contingent quantum facts (which you effectively did when pointing to a "major ongoing debate" about MWI probability - that debate is contingent on quantum idiosyncrasies), and then say those arguments apply to any multiverse.

>It is saying the choice of reference class, under some conditions, would not change the numerical value of probability. Because the effect of the choice cancels out in problems such as sleeping beauty and doomsday argument. Not that there is no reference class to begin with.

If the choice of reference class doesn't matter, then you don't need a reference class. You can formulate it to not require a reference class, as I did. Certainly there's no problem of arbitrary picking of reference classes if it literally doesn't make a difference to the answer.

The only restriction is that it must contain all subjectively indistinguishable observers - which is equivalent to saying it's possible for "me" to be any observer that I don't "know" that I'm not - which is almost tautological (don't assume something that you don't know). It isn't arbitrary to accept a tautological restriction here.

I'll respond to the paradoxes separately.

comment by ike · 2020-09-05T03:30:01.256Z · LW(p) · GW(p)

Re your "takedown" of anthropic arguments:

For Presumptuous Philosopher: you reject the "prior probability that “I” actually exist." But this is a perfectly valid Bayesian update. You update on your entire lifetime of observations. If one theory says those observations are one in a billion likely, and another theory says they're one in a trillion, then you update in favor of the former theory.
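In odds form, with an arbitrary equal prior over the two theories:

```python
# The update described above, as a one-line Bayes calculation:
prior_odds = 1.0                 # theory A vs theory B, hypothetical equal prior
likelihood_ratio = 1e-9 / 1e-12  # one in a billion vs one in a trillion
posterior_odds = prior_odds * likelihood_ratio
print(posterior_odds)            # 1000.0, i.e. 1000:1 in favor of theory A
```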

I do think the doomsday argument fails, because your observations aren't any less likely in a doomsday scenario vs not. This is related to the fact that I accept SIA and not SSA.

>It should be noted this counter-argument states the probability of “me” being simulated is a false concept

I actually agree with this, if the assumption is that the simulation is perfect and one can't tell in principle if they're in a simulation. Then, I think the question is meaningless. But for purposes of predicting anomalies that one would expect in a simulation and not otherwise, it's a valid concept.

Dr Evil is complicated because of blackmail concerns.

comment by ike · 2020-09-05T03:02:06.165Z · LW(p) · GW(p)

>From the subject’s first-person perspective there is no new information at the time of wake-up. Some arguments suggest there is new information that the subject learned “I” exist. However, from the first-person perspective, recognizing the perspective center is the starting point of further reasoning, it is a logical truth that “I” exist. Or simply, “It is guaranteed to find myself exist.” Therefore “my” own existence cannot be treated as new evidence.

I agree one can't update on existing. But you aren't updating on wake-up - you can make this prediction at any point. Before you go to sleep, you predict "when I wake up, it will be in a world where the coin landed heads 1/3 of the time". No updating required.