SIA is basically just Bayesian updating on existence
post by Stuart_Armstrong · 2021-06-04T13:17:20.590Z · LW · GW · 8 comments
I love the Sleeping Beauty problem, but I think it leads people astray, by making them think that anthropic reasoning is about complicated situations with multiple duplicates. That leads to people saying erroneous things, like "SIA implies there are a lot of observers".
But anthropic reasoning isn't that complicated; SIA, especially, is mostly just Bayesian updating.
Specifically, if there are no exact duplicates of yourself in your universe, SIA is just Bayesian updating on the fact that you exist; this is the same update that an outside observer would make, if informed of your existence. So, if theory T has prior probability P(T) and gives you a probability P(you exist | T) of existing, then SIA updates T's probability to P(T) · P(you exist | T) (and then renormalises everything):

P_SIA(T) ∝ P(T) · P(you exist | T)
This result is easy to see: since SIA is independent of reference class, just restrict the reference class to exact copies of you. If there is only one such copy in the universe, then the update rule follows.
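To make the update rule concrete, here is a minimal sketch with purely illustrative theories and numbers (none of the figures below come from the post):

```python
# SIA as Bayesian updating on existence: multiply each theory's prior by the
# probability it assigns to you existing, then renormalise.
# The theories T1, T2 and all numbers are made up for illustration.

priors = {"T1": 0.7, "T2": 0.3}        # hypothetical prior probabilities P(T)
p_exist = {"T1": 0.2, "T2": 0.05}      # hypothetical P(you exist | T)

unnormalised = {t: priors[t] * p_exist[t] for t in priors}
total = sum(unnormalised.values())
posterior = {t: w / total for t, w in unnormalised.items()}

print(posterior)
# {'T1': 0.903..., 'T2': 0.096...}  -- the same posterior an outside observer
# would reach on learning "you exist".
```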
Even if there are multiple exact copies of you, you can still mostly see SIA as Bayesian updating over your future observations. See this footnote[1] for more details.
Indirect effect on population size
So, what does this mean for the number of observers in the universe? Well, SIA can have an indirect effect on population size. If, for instance, theory T posits that life is likely to happen, then our existence is more likely, so T gets a relative boost from SIA compared with most other theories.
So, SIA's boosting of other observers' existence is only an indirect effect of it boosting our existence. The more independent our existence is of them, or the more independent we suspect it might be, the less impact SIA has on them.
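As a toy illustration (with made-up numbers): if T_life says observers like us arise with probability 10⁻³ and T_sparse says 10⁻⁶, then starting from equal priors, SIA multiplies each by its existence probability, giving posterior odds of about 1000:1 in favour of T_life. The boost goes to T_life because it made our existence likely, not because it posits many observers as such.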
Suppose that there are N exact copies of you, and that they are each going to make independently random observations. Then as soon as the number of possible observations is much bigger than N², you can expect that each copy will make a different observation; so, ultimately, you expect there to be only one exact future copy of you.
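As a rough numerical check of this claim (my own illustration, with arbitrary numbers): if N copies each draw one observation uniformly from M possibilities, the chance that all N observations are distinct is the product of (1 − k/M) for k = 0, …, N−1, which approaches 1 once M is much larger than N². For example:

```python
# Probability that N independent uniform draws from M possible observations are
# all distinct (the birthday-problem calculation); numbers are illustrative only.

def prob_all_distinct(n_copies: int, n_outcomes: int) -> float:
    p = 1.0
    for k in range(n_copies):
        p *= 1 - k / n_outcomes
    return p

for m in (10**3, 10**5, 10**7):
    print(m, prob_all_distinct(100, m))
# With 100 copies: M = 1e3 gives ~0.006, M = 1e5 gives ~0.95, M = 1e7 gives ~0.9995,
# so once M >> N^2 each copy almost surely sees something different.
```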
So if you Bayesianly update for each possible future copy (weighted by the probability of that future observation), you will get SIA. This is the trick that full non-indexical conditioning uses.
This can be seen as a partial solution to the Boltzmann brain problem: Boltzmann brains won't diverge, because they won't have future experiences. Personally, I prefer to address the issue by mixing in a bit of decision theory; my decisions are only relevant if I'm not a Boltzmann brain, so I'll start with "I exist and am not a Boltzmann brain" as an initial assumption. ↩︎
8 comments
comment by Gurkenglas · 2021-06-04T17:32:39.677Z · LW(p) · GW(p)
If you try to model yourself as a uniformly randomly selected observer somewhere in the Solomonoff prior, that doesn't work because there isn't a uniform distribution on the infinite naturals. When you weight them by each universe's probability but allow a universe to specify the number of observers in it, you still diverge because the number goes up uncomputably fast while the probability goes only exponentially down. In the end, probabilities are what decision theories use to weight their utility expectations. Therefore I suggest we start from a definition of how much we care about what happens to each of the forever infinite copies of us throughout the multiverse. It is consistent to have 80% of the utility functions in one's aggregate say that whatever happens to the overwhelming majority of selves in universes of description length >1000, it can only vary utility by at most 0.2.
comment by danielmartin0 · 2021-06-04T14:52:48.863Z · LW(p) · GW(p)
I have a simple question. You claim I should update my theories based on the fact I exist. Why would I update on something I already know?
↑ comment by Stuart_Armstrong · 2021-06-04T15:35:54.773Z · LW(p) · GW(p)
Depends; when you constructed your priors, did you already take that fact into explicit account? You can "know" things, but not have taken them into account.
↑ comment by danielmartin0 · 2021-06-04T16:15:19.650Z · LW(p) · GW(p)
I don’t understand how it could be something I’ve left out. If I’m considering whether the world is a certain way, but I know that I can’t exist in that world, I reject the theory.
comment by dadadarren · 2021-06-05T20:16:55.173Z · LW(p) · GW(p)
SIA is just Bayesian updating on the fact that you exist; this is the same update that an outside observer would make, if informed of your existence.
Actually, it also assumes the outside observer learns of your existence by a certain process, e.g. by randomly sampling all potentially existing observers (in a reference class) and finding you. (If, on the contrary, the outsider learns about your existence by sampling all actual observers, then the update would follow SSA.) Of course, if the outsider defines the reference class by all the definable characteristics of you, then he would effectively get a reference class of one. But it seems dubious that an outsider would pay attention to this particular set of characteristics in the first place.
↑ comment by Stuart_Armstrong · 2021-06-06T14:07:12.581Z · LW(p) · GW(p)
SIA is the Bayesian update on knowing your existence (ie if they were always going to ask if dadadarren existed, and get a yes or no answer). The other effects come from issues like "how did they learn of your existence, and what else could they have learnt instead?" This often does change the impact of learning facts, but that's not a specifically anthropics problem.
↑ comment by dadadarren · 2021-06-06T19:23:55.398Z · LW(p) · GW(p)
SIA is the Bayesian update on knowing your existence (ie if they were always going to ask if dadadarren existed, and get a yes or no answer)
The i.e. part is what I was trying to point out, so I agree with this assessment. The anthropic debate can then be presented as whether I should give the same probability as some outsider who learns of my existence by this process. SIA suggests yes. Halfers in general suggest no.
Here SIA has the advantage of not generating any disagreement when the outsider and I can communicate: we would give the same probability values. Whereas all halfers (not just SSA supporters) must find a way to explain why the outsider and I must give different probabilities even when we share all information, i.e. what aspect of the problem is incommunicable.
↑ comment by Stuart_Armstrong · 2021-06-07T09:48:19.692Z · LW(p) · GW(p)
Ah, understood. And I think I agree.