Anthropics over-simplified: it's about priors, not updates

post by Stuart_Armstrong · 2020-03-02T13:45:11.710Z · LW · GW



I've argued that anthropic reasoning isn't magic [LW · GW], applied anthropic reasoning to the Fermi question [LW · GW], claimed that different anthropic probabilities answer different questions [LW · GW], and concluded that anthropics is pretty normal [LW · GW].

But all those posts were long and somewhat technical, and required some familiarity with anthropic reasoning to apply. So here I'll list two rules that people unfamiliar with anthropic reasoning can use to add it simply[1] and easily to their papers/blog posts/discussions:

  1. Anthropics is about priors, not updates; updates function the same way for all anthropic probabilities.
  2. If two theories predict the same population, there is no anthropic effect between them.
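The first rule can be sketched numerically. In this illustration (the numbers and function are my own, not from the post), two anthropic approaches assign different priors, but Bayes' rule multiplies both by the identical likelihood ratio:

```python
def posterior_odds(prior_a, prior_b, lik_a, lik_b):
    """Posterior odds of hypothesis A over B after seeing evidence."""
    return (prior_a * lik_a) / (prior_b * lik_b)

# A population-weighted ("SIA-style") prior: 10:1 in favour of A.
sia_odds = posterior_odds(10, 1, 0.5, 0.01)

# An unweighted ("SSA-style") prior: 1:1.
ssa_odds = posterior_odds(1, 1, 0.5, 0.01)

# Both posteriors were multiplied by the same update factor
# (0.5 / 0.01 = 50); only the priors differ, so the posteriors
# still differ by exactly the prior ratio of 10.
assert abs(sia_odds / ssa_odds - 10) < 1e-9
```

The update step is the same arithmetic in both cases; the anthropic theory only enters through the prior odds.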

Updating on safety

Suppose you go into hiding in a bunker in 1956. You're not sure whether the cold war is intrinsically stable or unstable. Stable predicts a probability p_s of nuclear war; unstable predicts a higher probability p_u.

You emerge much older in 2020, and notice there has not been a nuclear war. Then, whatever anthropic probability theory you use, you update the ratio stable:unstable by the likelihood ratio (1 - p_s) : (1 - p_u), in favour of stability.
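A minimal sketch of the bunker update, with made-up probabilities (the post's original numbers were not preserved here):

```python
# Assumed figures for illustration only: "stable" predicts a 10% chance
# of nuclear war by 2020, "unstable" predicts a 90% chance.
p_war_stable, p_war_unstable = 0.10, 0.90

# Emerging to find no war multiplies the stable:unstable odds by the
# likelihood ratio of survival under each theory.
update_factor = (1 - p_war_stable) / (1 - p_war_unstable)
# Here that's 0.9 / 0.1 = 9: "stable" gains a factor of 9 over
# "unstable" -- and this is the same under any anthropic theory.
```

The anthropic theory chose the 1956 priors; the 2020 observation updates them identically.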

Population balancing

Suppose you have two theories, T1 and T2, that explain the Fermi paradox, and suppose they predict the same total population of observers.

Since the total populations are the same, there is no anthropic update between the two theories[2].
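A sketch of why equal populations give no anthropic boost, even under a population-weighted (SIA-style) prior (the population figures are assumptions for illustration):

```python
# Two hypothetical Fermi-paradox theories with equal total populations,
# e.g. many small civilizations vs. few large ones.
total_pop_t1 = 1e12
total_pop_t2 = 1e12

# Population weighting multiplies each theory's prior by its predicted
# number of observers; with equal populations the weights cancel out
# of the odds ratio, leaving no anthropic update.
anthropic_factor = total_pop_t1 / total_pop_t2
assert anthropic_factor == 1.0
```

Any difference in posterior between the two theories must then come from ordinary, non-anthropic evidence.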

  1. These points are a bit over-simplified, but are suitable for most likely scenarios. ↩︎

  2. If you use a reference class that doesn't include certain entities (maybe you don't include pre-mammals, or beings without central nervous systems), then you only need to compare the populations that fall within your reference class. ↩︎
