Anthropics over-simplified: it's about priors, not updates
post by Stuart_Armstrong · 2020-03-02T13:45:11.710Z
I've argued that anthropic reasoning isn't magic, applied anthropic reasoning to the Fermi question, claimed that different anthropic probabilities answer different questions, and concluded that anthropics is pretty normal.
But all those posts were long and somewhat technical, and required some familiarity with anthropic reasoning to apply. So here I'll list what people unfamiliar with anthropic reasoning can do to add it simply and easily to their papers, blog posts, and discussions:
- Anthropics is about priors, not updates; updates function the same way for all anthropic probabilities.
- If two theories predict the same population, there is no anthropic effect between them.
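The first point can be sketched numerically: different anthropic theories start from different priors (the sketch below contrasts an SIA-style population-weighted prior with an unweighted one), but any given piece of evidence shifts the odds between worlds by the same likelihood ratio under both. The worlds, populations, and likelihoods here are illustrative assumptions, not values from the post.

```python
# Two candidate worlds with different populations; base priors are even.
# All numbers are illustrative assumptions.
worlds = {"small": {"pop": 1, "base_prior": 0.5},
          "big":   {"pop": 1000, "base_prior": 0.5}}

def normalise(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

# Unweighted (SSA-style) prior vs population-weighted (SIA-style) prior.
ssa_prior = normalise({w: v["base_prior"] for w, v in worlds.items()})
sia_prior = normalise({w: v["base_prior"] * v["pop"] for w, v in worlds.items()})

# Some evidence with likelihood 0.9 in "small" and 0.1 in "big" (illustrative).
likelihood = {"small": 0.9, "big": 0.1}

def update(prior):
    # Standard Bayesian update: multiply by likelihoods, renormalise.
    return normalise({w: prior[w] * likelihood[w] for w in prior})

ssa_post = update(ssa_prior)
sia_post = update(sia_prior)

def odds_shift(prior, post):
    # Factor by which the small:big odds moved after the update.
    return (post["small"] / post["big"]) / (prior["small"] / prior["big"])

# The priors (and hence posteriors) differ between the two theories,
# but the odds shift by the same likelihood ratio, 0.9/0.1 = 9, in both.
print(odds_shift(ssa_prior, ssa_post))
print(odds_shift(sia_prior, sia_post))
```

The anthropic choice only changes where the odds start; the evidence moves them identically.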
Updating on safety
Suppose you go into hiding in a bunker in 1956. You're not sure whether the cold war is intrinsically stable or unstable. Stable predicts some chance p of nuclear war; unstable predicts a higher chance q.

You emerge much older in 2020, and notice there has not been a nuclear war. Then, whatever anthropic probability theory you use, you update the odds of stable versus unstable in favour of stable, by the likelihood ratio (1−p):(1−q).
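The bunker update can be written as a one-line Bayesian calculation. The war probabilities below are illustrative assumptions, not figures from the post:

```python
# Bayesian update on emerging from the bunker to find no nuclear war.
# These war probabilities are illustrative assumptions.
p_war_stable = 0.2    # chance of war if the cold war is intrinsically stable
p_war_unstable = 0.8  # chance of war if the cold war is intrinsically unstable

prior_odds = 1.0  # prior odds of stable : unstable (even, for illustration)

# Observing "no war" multiplies the odds by the survival likelihood ratio.
likelihood_ratio = (1 - p_war_stable) / (1 - p_war_unstable)
posterior_odds = prior_odds * likelihood_ratio

print(likelihood_ratio)  # roughly 4: survival is about 4x likelier under stable
print(posterior_odds)
```

Whatever anthropic prior you started from, this multiplication by the likelihood ratio is the same.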
Equal populations, no anthropic effect

Suppose you have two theories to explain the Fermi paradox:
- Theory 1 is that life can only evolve in very rare conditions, so Earth has the only life in the reachable universe.
- Theory 2 is that there is some disaster that regularly obliterates pre-life conditions, so Earth has the only life in the reachable universe.
Since the total population predicted by these two theories is the same, there is no anthropic update between them.
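The Fermi example can be made concrete with a small sketch. It assumes an SIA-style weighting (prior weight times predicted population) purely for illustration; the weights and population figure are made up:

```python
# SIA-style anthropic weighting multiplies each theory's prior weight by its
# predicted population. When two theories predict the same population, the
# population factor cancels and the odds between them are unchanged.
# Weights and the population figure are illustrative assumptions.
theories = {
    "rare_life": {"weight": 3, "pop": 10**9},  # life is rare; Earth alone
    "disasters": {"weight": 2, "pop": 10**9},  # same predicted population
}

def plain_odds(t1, t2):
    return theories[t1]["weight"] / theories[t2]["weight"]

def anthropic_odds(t1, t2):
    a, b = theories[t1], theories[t2]
    return (a["weight"] * a["pop"]) / (b["weight"] * b["pop"])

print(plain_odds("rare_life", "disasters"))      # 1.5
print(anthropic_odds("rare_life", "disasters"))  # 1.5: the populations cancel
```

Any anthropic theory that weights by population behaves this way: equal predicted populations mean no anthropic shift.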
These points are a bit over-simplified, but are suitable for most likely scenarios.

If you use a reference class that doesn't include certain entities - maybe you don't include pre-mammals or beings without central nervous systems - then you only need to compare the population that is in your reference class.