SIA Is Just Being a Bayesian About the Fact That One Exists

post by omnizoid · 2023-11-14T22:55:43.362Z · LW · GW · 5 comments

Anthropics is often dense and technical, so I'll begin by presenting a captivating story. I was chatting with someone and she asked, "Is SSA the better theory of anthropics?" So I launched into a lengthy digression of roughly the following form.

The Self-Indication Assumption (SIA) claims that "all other things equal, an observer should reason as if they are randomly selected from the set of all possible observers." Thus, when evaluating two hypotheses, all else equal, one should prefer the hypothesis on which there's a higher probability that one in particular exists. This has some counterintuitive results. Here are two that come broadly from Bostrom's famous book on the subject:

Shirts: There are two hypotheses that are equally probable. Hypothesis one claims that 10 people will be created with red shirts and 100 with blue shirts. Hypothesis two claims that only 10 people will be created, all with red shirts. When you are created, you find yourself with a red shirt. SIA says that, because both hypotheses hold that the same number of people with red shirts will be created, you should be indifferent between the two. But this is a bit counterintuitive; to many it seems that, because on the first hypothesis most existent people won't have red shirts and on the second all existent people will, the red shirt favors the second hypothesis.

The presumptuous philosopher: There are two theories of the physical world. According to one, the universe is infinitely large. According to the other, the universe is very large but finite in size. Because you're infinitely more likely to be created under the first hypothesis than under the second, SIA says you should think the first hypothesis is infinitely more likely than the second.

There are independent motivations [LW · GW] for SIA; to avoid these results, one has to posit a puzzling epistemic asymmetry between creating and failing to create. In fact, the alternative to SIA, the Self-Sampling Assumption (SSA), according to which one should reason as if randomly selected from the set of all people in one's reference class (the beings that are relevantly like oneself, by some criterion), implies utterly nutty things: for example, that one can make supernovas unlikely by committing to procreate only if a supernova occurs, and that whether you should think our civilization will die depends on causally isolated aliens. My favorite counterexample to SSA is the flipside of the presumptuous philosopher:

The presumptuous archeologist: The SSAer has drawn their reference class. Archeologists discover overwhelming evidence that there were huge numbers of Neanderthals (quintillions of them) and that they're in our reference class. On the SSA view, one should reject the overwhelming archeological evidence, because if you're randomly selected from the reference class, it's unlikely you'd be one of the chance few who isn't a Neanderthal.

The alternatives to SIA are wholly untenable. But even if this were not so, even if there were a working alternative to SIA, I would still think SIA was true. This is partially because, as Carlsmith points out, many of the most counterintuitive results of SIA are directly entailed by ironclad arguments [LW · GW]; the problems for SSA are general problems for any alternative to SIA.

But perhaps the bigger reason is that I find SIA intuitively extremely obvious. It's just what you get when you apply Bayesian reasoning to the fact that you exist. Take the presumptuous philosopher, for example. It seems very counterintuitive: how can you know a priori that the universe is big just from the fact that you exist? But it's straightforward probabilistic reasoning. If a million people are created, then I'm more likely to be among them, just as, if a million jars are created, any particular possible jar is more likely to be created. In fact, if every possible agent is created, the odds that I'd be created would be 1; whereas if only finitely many of the infinitely many possible agents are created, the odds that I'd be among them would be zero. So the fact that I exist confirms the hypothesis that there are infinite agents over the hypothesis that there are finite agents.
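Here is the arithmetic as a rough sketch, on the simplifying assumption of a fixed pool of $M$ possible agents, of whom one hypothesis creates $N_1$ and the other $N_2$:

$$\frac{P(H_1 \mid \text{I exist})}{P(H_2 \mid \text{I exist})} = \frac{P(\text{I exist} \mid H_1)}{P(\text{I exist} \mid H_2)} \cdot \frac{P(H_1)}{P(H_2)} = \frac{N_1/M}{N_2/M} \cdot \frac{P(H_1)}{P(H_2)} = \frac{N_1}{N_2} \cdot \frac{P(H_1)}{P(H_2)}$$

As $N_1$ grows without bound while $N_2$ stays fixed, the posterior odds diverge, which is exactly the presumptuous philosopher's verdict.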

And the probabilistic reasoning employed in Shirts to get the result contradicting SIA is totally wrong. It's true that, given that I exist, I'd be more likely to have a red shirt conditional on the hypothesis that only 10 people are created. But on that hypothesis I'd be less likely to exist at all! So the probabilities wash out.
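As a sanity check, here's a minimal sketch of that calculation; the uniform pool of possible observers and its size N are simplifying assumptions of mine, not part of the thought experiment:

```python
# A minimal sketch of the SIA calculation in Shirts.
N = 1000  # hypothetical pool of possible observers; any N >= 110 works

def p_exist_and_red(n_red, n_blue):
    """P(I exist and have a red shirt | hypothesis), for one possible observer."""
    p_exist = (n_red + n_blue) / N                 # SIA: more observers created, more likely I'm one
    p_red_given_exist = n_red / (n_red + n_blue)   # chance of a red shirt among the created
    return p_exist * p_red_given_exist             # simplifies to n_red / N

h1 = p_exist_and_red(10, 100)  # hypothesis one: 10 red, 100 blue
h2 = p_exist_and_red(10, 0)    # hypothesis two: 10 red only
print(h1, h2)  # both 0.01, so the red shirt leaves the hypotheses tied
```

The higher chance of existing under hypothesis one exactly cancels the higher chance of a red shirt under hypothesis two, whatever the pool size.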

Probabilistic math is weird sometimes. It gets unintuitive results. But that's not a reason to give up on doing probability in the obvious way about the fact that I exist. SSAers say "but you must reason as if you're randomly selected from all actual observers in your reference class." But why? You're not randomly selected from all actual observers in your reference class. God isn't pulling you out of a reference class. You are just an agent at a time, and so the way to reason is to look at the probability of that agent existing at that time, just as, for any probabilistic event, the way to reason is to look at the odds of the event on the various hypotheses.

The only reason to reason as if you're randomly selected from the observers in your reference class is that doing so fits our intuitions. But you don't just get to change the way probability works because it fits your intuitions better. The correct response is to recognize that our intuitions are wrong, rather than to devise a gerrymandered theory, with no deeper justification, that involves arbitrarily drawing reference classes.

Finally, thinking about things in terms of agents at times helps us explain what Beauty learns in the Sleeping Beauty case. The Sleeping Beauty problem is well explained by Wikipedia:

The Sleeping Beauty problem is a puzzle in decision theory in which whenever an ideally rational epistemic agent is awoken from sleep, they have no memory of whether they have been awoken before. Upon being told that they have been woken once or twice according to the toss of a coin, once if heads and twice if tails, they are asked their degree of belief for the coin having come up heads.

The two answers people generally give are 1/2 and 1/3. People often justify the 1/2 answer on the grounds that you already knew you'd wake up, so you learn nothing new beyond that a fair coin was tossed. But this is subtly wrong. Upon waking up, you know you're awake at the time that you're awake. Suppose that if the coin lands heads, one will be awakened on Tuesday, while if it lands tails, one will be awakened on Tuesday and Wednesday. The person knows, upon finding they've woken up, that they're awake at that time. Reasoning over agents at times, each of the three possible awakenings (heads and Tuesday, tails and Tuesday, tails and Wednesday) is equally likely, so there's a 2/3 chance it's Tuesday and a 1/3 chance it's Wednesday. Conditional on its being Wednesday, there's a 100% chance they got tails, while conditional on its being Tuesday, there's a 50% chance they got tails. So there's a 1/3 chance they got tails and it's Tuesday, a 1/3 chance they got tails and it's Wednesday, and a 1/3 chance they got heads and it's Tuesday.
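Making the bookkeeping explicit, as a sketch that takes the equal weighting of the three awakenings as given:

```python
# A sketch of the thirder bookkeeping: enumerate the possible awakening
# events and weight them equally, as in the argument above.
awakenings = [("heads", "Tuesday"), ("tails", "Tuesday"), ("tails", "Wednesday")]
p = 1 / len(awakenings)  # 1/3 per awakening

p_tails = sum(p for coin, _ in awakenings if coin == "tails")     # 2/3
p_tuesday = sum(p for _, day in awakenings if day == "Tuesday")   # 2/3
p_tails_given_tuesday = p / p_tuesday    # P(tails & Tuesday) / P(Tuesday) = 1/2
p_tails_given_wednesday = 1.0            # only tails reaches Wednesday

print(p_tails, p_tuesday, p_tails_given_tuesday, p_tails_given_wednesday)
```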

5 comments

comment by Ape in the coat · 2023-11-15T13:23:15.862Z · LW(p) · GW(p)

Disclaimer: I preemptively apologize if my comment appears rude. I'm writing a series on anthropics because I've been passionately dissatisfied with the level of discussion on the topic for a long time. Any strong emotions that you may read in this comment are not targeted at you in particular, but at the general way people tend to reason about these matters.

A common way people tend to justify SIA and all its ridiculousness is by pointing at SSA's ridiculousness and claiming that it's even more ridiculous. Frankly, I'm quite tired of this kind of anthropic whataboutism. It seems to be some kind of weird selective blindness. In no other sphere of knowledge would people accept this as valid reasoning. But in anthropics, somehow, it works?

The fact that SSA is occasionally stupid doesn't justify SIA's occasional stupidity. Both are obviously wrong in general, even though sometimes both may produce the correct result. You don't even need to construct any specific thought experiments to show it. Just think about their main assumptions for five minutes:

SIA claims that an observer should reason as if they are randomly selected from the set of all possible observers.

SSA claims that an observer should reason as if they're randomly selected from all actual observers in their reference class.

SIA fails in cases where the observer isn't selected from all possible observers, and SSA fails in cases where the observer isn't selected from all actual observers. Obviously. They aren't even exhaustive alternatives. Why would people assume that one or the other necessarily has to be the case? To what law of the universe do they appeal?

The alternatives to SIA are wholly untenable.

As soon as you stop thinking in "either SIA or SSA" mode, an alternative immediately comes to mind:

An observer should reason as if they’re randomly selected from all actual observers if they are actually randomly selected from actual observers and should reason as if they're randomly selected from the set of all possible observers if they are actually selected from the set of all possible observers.

Or, basically the same idea:

An observer should investigate the causal process that created them and reason accordingly to it, instead of just assuming things for no reason.

Here I'm leaving a link to my post Conservation of Expected Evidence and Random Sampling in Anthropics [LW · GW]. It explains how to use this principle in practice, and also why "just being a Bayesian about the fact that one exists" can be a bad idea in some cases, leading to contradictions with the law of conservation of expected evidence. The previous and next posts are also relevant.

comment by Donald Hobson (donald-hobson) · 2023-11-16T19:18:38.317Z · LW(p) · GW(p)

I think both are wrong. 

I'm in favor of the complexity location hypothesis. 

A hypothesis needs to describe the universe and point you out within it, and it uses Occam's razor for both.

This means you should assign a high probability to finding yourself in a special position, i.e. one that is easy to describe.

If the hypotheses are 1 red shirt, or 1 red and 3^^3 blue shirts, then observing a red shirt is modest evidence toward the former. And if you find yourself in the latter world, your probability of being the red shirt is determined by the length of "you're the red shirt", not by the number of blue shirts. (Although if there weren't many blue shirts, "you're on the left" or "you have the biggest feet" might also locate you, giving higher total probability.)
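In symbols, a rough sketch of this idea, assuming a Solomonoff-style prior and writing $K(\cdot)$ for minimum description length:

$$P(\text{world } w,\ \text{I am observer } i) \propto 2^{-K(w)} \cdot 2^{-K(i \mid w)}$$

The cost of the indexical part depends on how hard you are to describe within the world, not on how many other observers the world contains.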

comment by Razied · 2023-11-14T23:42:41.108Z · LW(p) · GW(p)

But perhaps the bigger reason is that I find SIA intuitively extremely obvious. It’s just what you get when you apply Bayesian reasoning to the fact that you exist.

Correct, except for the fact that you're failing to consider the possibility that you might not exist at all...

My entire uncertainty in anthropic reasoning is bound up in the degree to which an "observer" is at all a coherent concept.

Replies from: Ape in the coat
comment by Ape in the coat · 2023-11-15T13:44:07.920Z · LW(p) · GW(p)

Actually, it's the other way around. SIA always assumes that one may not have existed at all. This is the source of the Bayesian update and this itself may be a problem. 

Basically, it requires assuming that all existing humans were randomly sampled from a finite set of immaterial souls - a pretty extraordinary claim about the way our universe works, without any evidence to support it. 

comment by JeffJo · 2023-11-17T19:27:36.441Z · LW(p) · GW(p)

Distinguishing between SIA and SSA is not needed in the Sleeping Beauty Problem. The distinction was inserted into Adam Elga's problem when he changed it from the one he posed to the one he solved. I agree that they should have the same answer, which may help in choosing between SIA and SSA, but it is not needed. This is what he posed:

"Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you to back to sleep with a drug that makes you forget that waking. When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?"

The need was created when Elga added a schedule for waking that treated the two days differently, so that "existence" became an alleged issue. There is a better way. First, consider this "little experiment":

  1. Two coins, C1 and C2, will be randomly arranged so that the four possible combinations, HH, HT, TH, and TT, are equally likely.
  2. If either coin is showing Tails, you will be asked for what you believe to be the probability that coin C1 is showing Heads.
  3. If both are showing Heads, you will not be asked a question.

The answer should be obvious: From the fact that you are in step 2, as established by being asked a question, the combination HH is eliminated. What happens in step 3 is irrelevant, since the question is not asked there. In only one of the three remaining combinations is coin C1 showing Heads, so there is a 1/3 chance that coin C1 is showing heads.

To implement the experiment Elga proposed - not the one he solved - put SB to sleep on Sunday night, and flip the two coins. On Monday, perform the "little experiment" using the result of the flips. You will need to wake SB if step 2 is executed. What you do in step 3 is still irrelevant, but can include leaving her asleep. Afterwards, if she is awake, put her back to sleep with amnesia. AND THEN TURN COIN C2 OVER. On Tuesday, perform the "little experiment" again, using the modified result of the flips.

SB does not need to consider any other observers than herself to answer the question, because she knows every detail of the "little experiment." If she is awake, and asked a question, the coins were arranged as described in step 1 and she is in step 2. The answer is 1/3.
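As a sanity check, here is a minimal simulation of the two-coin protocol; the "H"/"T" encoding and the day loop are illustrative choices, not part of the comment above:

```python
import random

# A Monte Carlo sketch of the two-coin Sleeping Beauty protocol:
# run the "little experiment" on Monday, turn C2 over, run it again on Tuesday.
def run_once():
    c1, c2 = random.choice("HT"), random.choice("HT")
    asked = []
    for _ in ("Monday", "Tuesday"):     # the two days of the experiment
        if (c1, c2) != ("H", "H"):      # step 2: SB is woken and asked about C1
            asked.append(c1)
        c2 = "H" if c2 == "T" else "T"  # turn coin C2 over between the days
    return asked

answers = [a for _ in range(100_000) for a in run_once()]
print(sum(a == "H" for a in answers) / len(answers))  # ~ 1/3
```

Roughly a third of the askings happen while C1 shows Heads, matching the argument above.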