Anthropics is pretty normal

post by Stuart_Armstrong · 2019-01-17T13:26:22.929Z · LW · GW · 9 comments

Contents

  Common understanding of anthropics
  Updating and conditional probability
    The anthropic principle
    Conditional probabilities
    The power of Bayes compels you!
  Survivorship bias and special Earths
  Anthropic reasoning in medium sized universes
    Defining medium sized universes
    Anthropic probability in medium sized universes
  Anthropics for beginners

In this post, I'll defend these claims:

  1. The common understanding of anthropic reasoning is wrong.
  2. There are interesting reasons for that error.
  3. Anthropic updating is the same as normal updating. For example, our survival is evidence that the world is safer than we thought.
  4. Full anthropic reasoning is actually pretty normal and easy in most cases.
  5. We don't need to define some special class of "observers" to do anthropic reasoning.

Common understanding of anthropics

By the "common understanding", I mean something like:

  A1: "Humanity has survived all the existential risks it has faced so far. But we can't conclude anything about the size of those risks from this, because if we hadn't survived, we wouldn't be around to observe anything."

Sometimes "can't conclude anything" is weakened to allow some weak updating.

Now A1 sounds reasonable. But consider instead:

  A2: "We saw ourselves lose the lottery. But we can't conclude anything about the odds of the lottery from this, because if we had won, we wouldn't have seen ourselves lose."

Formally, the two arguments have the same structure. Now, people might start objecting that the difference between an observer and no observers is not the same thing as the difference between an observer seeing a loss and one not seeing it. And then I might respond by slicing into the definition of observer, creating "half-observers", and moving smoothly between observer and non-observer...

But that's the wrong response, on both of our parts (shame on you, hypothetical strawman, for reasoning like that!). The key question is not "can we justify that A1 and A2 might be different?", because we can always justify something like that if we work on it hard enough.

Instead we should be asking "1) Why do we find A1 convincing?", and "2) Do we have reasons to believe A1 is wrong?"

My answer to 2) is "yes, of course; A2 is clearly wrong, and A1 is formally structured the same way, so there must be a paradox lurking there" (spoiler: there is a paradox lurking there).

For 1), I introspected on why I had been led astray for so long; here are some of the reasons why we might believe A1 (or at least think that anthropic reasoning is hard).

The desire not to go overboard is the easiest to understand: to those who might say "so, it turns out we were safe after all!", we can correctly answer "not necessarily; we might just have got lucky". And that is correct; we might have got lucky. But our survival is also some evidence, at least, that maybe we were safer than we thought.

Updating and conditional probability

The anthropic principle

Let's go back to the idea that started this all: the anthropic principle. Looking at the Wikipedia article on it, there seem to be a bunch of different principles; here's my attempt at putting them in a table:

The Barrow and Tipler Strong AP is, in my view, wrong (I think they're mixing frequentist and Bayesian probability, if they have to posit an actual multiverse). But the other ones seem trivially true, just as a matter of conditional probability. And the differences between them are unimportant: whether it's looking at the whole universe or our space-time location, and at observers in general, carbon-based life, or just ourselves. All of these are equally true, and it seems to me that people arguing about different versions of the AP just haven't seen them written down as they are here, where it's clear that they are all of a similar format: conditional on observers (of some kind, at some location) existing, the universe must be compatible with the existence of those observers.

Conditional probabilities

Now look back at A1. It looks similar, but it isn't; the conditionals are used a bit differently. What A1 says is "conditional on us surviving, the probability of an existential catastrophe having happened is zero. And this probability is independent of whether the world is safe or not. Hence we can't deduce whether the world is safe or not".

All the mischief is in that word "hence". Conditional probabilities are tricky and counterintuitive; to pick an example from logical uncertainty, P("0=1" | "0=0") = 0 while P("0=0" | "0=1") = 1. And, in general, you can't move "is independent of" from one side of the conditional to the other.

So these probabilities have to be computed explicitly - though you can get a hint of the potential mistake by considering "conditional on us seeing ourselves lose the lottery, the probability of us winning the lottery is zero. And this probability is independent of the odds of the lottery. Hence we can't say anything about the odds of the lottery".
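To see this concretely, here is a minimal Bayesian sketch of the lottery case (the two hypothetical lotteries and all the numbers are invented for illustration): even though the probability of winning, conditional on seeing ourselves lose, is zero under both hypotheses, seeing ourselves lose still shifts our beliefs about the odds.

```python
# Two hypotheses about the lottery's odds, with equal priors.
p_win = {"generous": 0.1, "stingy": 0.001}
prior = {"generous": 0.5, "stingy": 0.5}

# The evidence is that we saw ourselves lose:
likelihood = {h: 1 - p_win[h] for h in p_win}

evidence = sum(prior[h] * likelihood[h] for h in prior)
posterior = {h: prior[h] * likelihood[h] / evidence for h in prior}

# Losing is (weak) evidence that the lottery is stingy:
print(posterior["stingy"] > posterior["generous"])  # True
```

The update is weak because losing is likely under both hypotheses, but it is not zero: "can't conclude anything" is simply false.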

The power of Bayes compels you!

I have actually computed the odds explicitly [LW · GW], using Bayesian reasoning, to show that statements like A1 are wrong. But let's invert the problem: if we assumed A1 were true, what would that imply?

Imagine that the world is either safe (low risk of existential catastrophe) or dangerous (high risk of existential catastrophe). Then A1 would argue that P(we survived | safe) is the same as P(we survived | dangerous): our survival provides no evidence of the world being safe. Then applying almighty Bayes:

P(safe | we survived) = P(we survived | safe) × P(safe) / P(we survived) = P(safe),

since A1's assumption makes P(we survived) equal to P(we survived | safe).

The same reasoning shows P(dangerous | we survived) = P(dangerous). And since a world's level of risk is just one minus the probability of surviving in it, A1 would force us to conclude that the safe and the dangerous worlds have exactly the same level of risk!

Similar problems arise if we try to use weaker versions of A1 - maybe our survival is some evidence, just not strong evidence. But Bayes will still hit us, and force us to change our values of terms like P(we survived | dangerous). There are simply not enough degrees of freedom in the system for anthropic updating to be done any way other than the normal way.
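A quick sketch of the computation (the survival probabilities below are invented for illustration): normal updating makes survival evidence of safety, while A1's equal-likelihood assumption leaves the posterior stuck at the prior.

```python
def posterior_safe(p_surv_safe, p_surv_dang, prior_safe=0.5):
    """Bayesian update of P(safe) on the event 'we survived'."""
    evidence = prior_safe * p_surv_safe + (1 - prior_safe) * p_surv_dang
    return prior_safe * p_surv_safe / evidence

# Normal updating: survival is likelier in the safe world, so surviving
# is evidence of safety.
print(posterior_safe(0.99, 0.5) > 0.5)  # True

# A1's assumption: survival equally likely in both worlds -- then the
# posterior never moves from the prior, whatever we observe.
print(posterior_safe(0.8, 0.8))  # 0.5, exactly the prior
```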

Survivorship bias and special Earths

There are clearly issues of selection bias and survivorship bias in anthropic reasoning. We can't conclude, from seeing all the life around us, that the universe is full of life.

But that doesn't stop us from updating normally, it just means we have to update on exactly what we know: not on the information that we observe, but on the fact that we observe it.

Take a classical example of survivorship bias: hedge fund success. We see a lot of successful hedge funds, and we therefore conclude that hedge funds are generally successful. But that conclusion is mistaken, because the least successful hedge funds tend to go bankrupt, leaving us with a skewed sample. So if we noticed "most hedge funds I can see are successful", concluded "most hedge funds are successful", and updated on that... then we'd be wrong.
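A small simulation makes the bias vivid (the return distribution and bankruptcy rule are invented for illustration):

```python
import random

random.seed(0)

# Each fund draws annual returns from the same zero-mean distribution.
# Funds that drop below half their starting capital go bankrupt -- and
# bankrupt funds disappear from the sample we get to observe.
def run_fund(years=10):
    wealth = 1.0
    for _ in range(years):
        wealth *= 1 + random.gauss(0.0, 0.2)
        if wealth < 0.5:
            return wealth, False   # bankrupt: invisible to us
    return wealth, True            # survivor: what we actually see

funds = [run_fund() for _ in range(10_000)]
observed = [w for w, alive in funds if alive]
everyone = [w for w, _ in funds]

mean = lambda xs: sum(xs) / len(xs)
# The funds we can see look much better than funds in general:
print(mean(observed) > mean(everyone))  # True
```

Updating on "the funds I can see did well" would mislead us; updating on "funds that did well are the ones I get to see" would not.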

Similarly, if we noticed a lot of life around us, concluded "life is common", and updated on that, we'd be wrong. If, however, we instead concluded "life is common on at least one planet" and updated on that, then we would be correct.

Notice how specific the update requirements can be. Suppose we had three theories. Theory T1 gives a probability p to life existing on any given planet. Theory T2 gives a probability 2p for life existing on any Earth-like planet, and a much lower probability for other planets. While theory T3 gives a probability 4p to life existing on Earth, specifically, and a negligible probability to life existing anywhere else.

Now, the different theories might have different priors. But updating them on the fact of our existence will increase the probability of T3 twice as much as that of T2, which itself increases twice as much as that of T1. Even though T1 posits a universe filled with life and T3 a universe almost empty of life, our existence is evidence for T3 over T1.
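The doubling of posteriors can be checked directly (the likelihood values below are placeholders, chosen only to match the factors of two in the argument):

```python
# Illustrative likelihoods of "life on Earth" under each theory; p is a
# placeholder value, and the factors of two between theories are the point.
p = 1e-6
likelihood = {"T1": p, "T2": 2 * p, "T3": 4 * p}
prior = {"T1": 1 / 3, "T2": 1 / 3, "T3": 1 / 3}

# Update on the fact that we exist, i.e. that there is life on Earth:
evidence = sum(prior[t] * likelihood[t] for t in prior)
posterior = {t: prior[t] * likelihood[t] / evidence for t in prior}

# Each doubling of the likelihood doubles the posterior:
print(posterior["T3"] / posterior["T2"])  # ~2
print(posterior["T2"] / posterior["T1"])  # ~2
```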

So, when updating on anthropic evidence, we have to update on what we see (and the fact that we see it), and not assume we are drawing from a random sample of possible observations about the universe. But, with those caveats, anthropic updating works just like normal updating.

Anthropic reasoning in medium sized universes

There's a final reason that anthropic reasoning can seem daunting. I've shown above that the update process of anthropic probability is the normal update process. But what about the initial probabilities? There are a plethora of anthropic probability theories - SIA, SSA, FNC - and some people (i.e. me) arguing that probabilities don't even exist, and that you have to use decision theory instead.

But in this section I'll show that, if you make some reasonable assumptions about the size of the universe (or at least the size of the part of the universe you're willing to consider), then all those probabilities collapse into the same thing, which is pretty much just normal probability for the universe in which you exist. If we make those assumptions, we can then do anthropic probabilities in an easy way, at least for problems without explicit duplication.

Defining medium sized universes

Let's talk about how unique you are. From human to human, there are typically about 20 million base pairs of variation. Our brain processes about 50 bits per second; over a lifetime of roughly 2.2 billion seconds (about 70 years), that comes to on the order of 10^11 bits. A lot of this information will be highly redundant, but not all of it.

The Hubble volume is roughly 10^31 cubic light years, or roughly 10^79 cubic metres. Packing it with humans, at about a tenth of a cubic metre each, gives room for at most around 10^80 humans - and telling 10^80 individuals apart requires only log2(10^80) ≈ 266 bits. So if we packed our Hubble volume with humans, and those humans were initially identical but had had about ten seconds to diverge (500 bits' worth, at 50 bits per second), then we would not expect to find two copies of the same human anywhere.

Of course, humans are not packed anywhere near that density, and humans diverge a lot more than that. So we expect to go a great great great ... great great great distance before finding identical copies of ourselves.
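Spelling out the arithmetic (using the post's own rough figures, with an assumed human volume of a tenth of a cubic metre):

```python
import math

# The post's rough figures; the human volume is an assumed round number.
bits_per_second = 50
hubble_volume_m3 = 1e79      # ~10^31 cubic light years
human_volume_m3 = 0.1

# Human-sized slots in the Hubble volume, and the bits needed to give
# every slot a distinct occupant:
slots = hubble_volume_m3 / human_volume_m3
bits_to_distinguish = math.log2(slots)

print(round(bits_to_distinguish))  # about 266
# Ten seconds of divergence already provides more bits than that:
print(bits_per_second * 10 > bits_to_distinguish)  # True
```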

So I define a medium sized universe as one larger than our own, but in which we'd expect to find only a single copy of ourselves. These universes can, of course, be very big - a universe many orders of magnitude bigger than the Hubble volume would still count as a very small example of a medium-sized universe.

This might seem controversial; after all, doesn't the universe appear to be infinite? Well, probability theories have problems with infinity, anthropic probability theories even more so [LW · GW]. In most areas, we are fine with ignoring the infinity and just soldiering on in our local area; I'm suggesting that we do that for most anthropic reasoning as well. By "most" I mean "reasoning about situations that don't involve infinities, deliberate duplication, or simulations". Though you can't shove that many simulations into a medium sized universe, so avoiding simulations may be unnecessary (it does tend to make the rest of the reasoning much easier, though).

Anthropic probability in medium sized universes

Different theories of anthropic probability are trying to answer subtly different questions about the universe and ourselves [LW · GW]. But they only really differ if there are multiple copies of the same person.

Take SIA. We know that SIA is independent of reference class [LW · GW], so we may as well take the reference class consisting of the agents subjectively indistinguishable from a given human (e.g. ourselves). Because there are almost certainly no duplicates in a medium sized universe, this reference class contains a single copy, at most. So if P_SIA is the probability function for SIA with this reference class, then it is almost exactly equal to P( · | we exist), for P the non-anthropic probability distribution over universes.

And P( · | we exist) is just the Full Non-indexical Conditioning (FNC) version of anthropic probability. Now, I know that FNC is inconsistent [LW · GW]; still, in medium sized universes, it's very close to being consistent (and very close to being SIA).
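The collapse of SIA into FNC can be illustrated with a toy model (the universes, priors, and copy-counts are invented): SIA weights each universe by the number of copies of you it contains, while FNC simply conditions on at least one copy existing - and when every count is 0 or 1, the two coincide.

```python
# Toy prior over universes, and how many exact copies of "you" each
# contains (in a medium sized universe, never more than one).
prior = {"u1": 0.5, "u2": 0.3, "u3": 0.2}
copies = {"u1": 0, "u2": 1, "u3": 1}

# SIA: weight each universe by its number of copies, then renormalise.
sia = {u: prior[u] * copies[u] for u in prior}
z_sia = sum(sia.values())
p_sia = {u: w / z_sia for u, w in sia.items()}

# FNC: condition on "at least one copy of me exists".
fnc = {u: prior[u] * (1 if copies[u] >= 1 else 0) for u in prior}
z_fnc = sum(fnc.values())
p_fnc = {u: w / z_fnc for u, w in fnc.items()}

print(p_sia == p_fnc)  # True: with copy-counts of 0 or 1, SIA is FNC
```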

If we use SSA with this same narrow reference class, or the consistent class [LW · GW], we get a similar almost-equality.

And "almost" understates how nearly identical these probabilities are.

Now, there is one kind of anthropic probability theory that is different: SSA with a significantly larger reference class (say, the class of all humans, all sentient beings, or all "observers"). But this post [LW · GW] argues against those larger reference classes, claiming they belong more to decision theory and morality than to probability. And remember, the definition of the reference class for SSA is contained in the question we are asking. Only for questions where "we could have been person X", in a specific sense, does SSA with larger reference classes make sense.

Another reason to restrict to the narrow reference class is that, in medium sized universes, the anthropic probabilities are essentially free from all the usual paradoxes [LW · GW].

Notice that in using this reference class, we haven't had to formally define what an "observer" is, or what would qualify an agent for that status. Instead we're just looking at agents that are subjectively indistinguishable from each other, a narrow and reasonably well-defined class.

Anthropics for beginners

So, here's how to proceed with anthropics in most situations:

  1. Assume the universe is medium sized.
  2. Check (or assume) that there is no actual duplication or simulations going on.
  3. Use a prior over universes, and update it based on the fact that you exist.
  4. Proceed to update using any other information you find, remembering selection bias: the update is on the fact that you saw this information, not that the information exists.

And that should suffice for most non-specialised work in the area.
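As a worked sketch of steps 3 and 4 (every number below is invented for illustration):

```python
# Step 3: a prior over two toy universes, updated on the fact that we
# exist. All numbers are invented for illustration.
prior = {"hospitable": 0.5, "hostile": 0.5}
p_we_exist = {"hospitable": 0.9, "hostile": 0.2}

def update(belief, likelihood):
    z = sum(belief[h] * likelihood[h] for h in belief)
    return {h: belief[h] * likelihood[h] / z for h in belief}

belief = update(prior, p_we_exist)

# Step 4: update on further evidence, where the likelihood is the
# probability that WE SAW the information, not that it merely exists.
p_we_saw_life_nearby = {"hospitable": 0.3, "hostile": 0.01}
belief = update(belief, p_we_saw_life_nearby)

print(belief["hospitable"] > belief["hostile"])  # True
```

The second update deliberately uses the probability that we saw the evidence rather than the probability that the evidence exists, which is where survivorship bias would otherwise creep in.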

9 comments


comment by shminux · 2019-01-18T02:46:22.047Z · LW(p) · GW(p)

Again, anthropics is basically generalizing from one example [LW · GW]. Yes, humans have dodged an x-risk bullet a few times so far. There was no nuclear war. The atmosphere didn't explode when the first nuclear bomb was detonated (something that happens to white dwarfs in binary systems, leading to some supernova explosions). The black plague pandemic did not wipe out nearly everyone, etc. If we have a reference class of x-risks and assign the probability of a close call p to each member of the class, then all we know is that after observing n close calls the probability of no extinction would be p^n. If the number is vanishingly small, we might want to reconsider our estimate of p ("the world is safer than we thought"). Or maybe the reference class is not constructed correctly. Or maybe we truly got luckier than other hypothetical observable civilizations who didn't make it. Or maybe quantum immortality is a thing. Or maybe something else. After all, there is only one example, and until we observe some other civilizations actually not making it through, anthropics is groundless theorizing. Maybe we can gain more insights into the reference classes, the probabilities of a close call, and the probabilities of surviving an event, from studying near-extinction events roughly fitting into the same reference class (past asteroid strikes, plagues, climate changes, ...). However, none of the useful information comes from guessing the size of the universe, or whether we are in a simulation, or "updating based on the fact that we exist" beyond accounting for the close calls and x-risk events.

That said, I certainly agree with your point 4: that only the observed data need to be accounted for.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2019-01-18T08:21:28.589Z · LW(p) · GW(p)

none of the useful information comes from guessing the size of the universe, of whether we are in a simulation,

The reason I assume those is so that only the "standard" updating remains - I'm deliberately removing the anthropically weird cases.

comment by Chris_Leong · 2019-01-19T14:03:36.370Z · LW(p) · GW(p)

1) "Subjectively distinguishable" needs to be clarified. It can mean either a) that a human receives enough information/experience to distinguish themselves, or b) that a human will remember information/experience in enough detail to distinguish themselves from another person. The latter is more important for real-world anthropics problems and results in significantly more copies.

2) "In most areas, we are fine with ignoring the infinity and just soldiering on in our local area" - sure, but SSA is inherently non-local. It applies over the whole universe, not just the Hubble Volume. If we're going to use an approximation to handle our inability to model infinities, we should be using a large universe, large enough to break your model, rather than a medium sized one.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2019-01-21T13:06:02.071Z · LW(p) · GW(p)

The correct way to handle SSA is to deal with the exact question that it poses. But for most purposes, this approximation suffices.

comment by Dr. Jamchie · 2019-01-20T12:06:35.912Z · LW(p) · GW(p)
And then I might respond by slicing into the definition of observer, creating "half-observers", and moving smoothly between observer and non-observer...

Do you have this written down somewhere in more detail? It seems that for this to work one needs to assume the gradual appearance of consciousness, something like rock<beetle<mouse<ape<human. Will this work if one assumes consciousness to be binary, that it either is or it isn't?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2019-01-21T13:08:12.303Z · LW(p) · GW(p)

Will this work if one assumes consciousness to be binary, that it either is or it isn't?

If it's binary, I point out the binariness is arbitrary, start looking at states of uncertainty about whether there was consciousness or not (or observers or not), talk about video feeds that may or may not be observed, or start looking at disasters that kill the population gradually yet inevitably. It's... not a very fruitful avenue to explore, in my view.

comment by Ofer (ofer) · 2019-01-17T17:31:06.705Z · LW(p) · GW(p)
Therefore A1 would force us to conclude that the safe and the dangerous worlds have exactly the same level of risk!
Similar problems arise if we try and use weaker versions of A1 - maybe our survival is some evidence, just not strong evidence. But Bayes will still hit us, and force us to change our values of terms like P( we survived | dangerous ).

I'm confused by this. The event "we survived" here is actually the event "at least one observer similar to us survived", right? (for some definition of "similar").
If the number of planets on which creatures similar-to-us evolve is sufficiently large, we get:
P(at least one observer similar to us survived) ≈ P(at least one observer similar to us survived | dangerous) ≈ 1

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2019-01-17T18:23:39.356Z · LW(p) · GW(p)

No, the event "we survived" is "we (the actual people now considering the anthropic argument and past xrisks) survived".

Over enough draws, you have .

So we update the lottery odds based on whether we win or not; we update the danger odds based on whether we live. If we die, we alas don't get to do much updating (though note that we can consider hypotheticals with bets that pay out to surviving relatives, or have a chance of reviving the human race, or whatever, to get the updates we think would be correct in the worlds where we don't exist).

Replies from: ofer
comment by Ofer (ofer) · 2019-01-17T21:22:48.426Z · LW(p) · GW(p)

Thank you, I understand this now (I found it useful to imagine code that is being invoked many times and is terminated after a random duration; and reflect on how the agent implemented by the code should update as time goes by).

I guess I should be overall more optimistic now :)