Practical anthropics summary

post by Stuart_Armstrong · 2021-07-08T15:10:44.805Z · LW · GW · 3 comments

This is a simple summary of how to "do anthropics":

First of all, there was the realisation that different theories of anthropic probability correspond to correct answers to different questions [LW · GW] - questions that are equivalent in non-anthropic situations.

We can also directly answer "what actions should we take?", without talking about probability. This anthropic decision theory [? · GW] gave behaviours that seem to correspond to SIA (for total utilitarianism) or SSA (for average utilitarianism).

My personal judgement, however, is that the SIA-questions are more natural than the SSA-questions (ratios of totals rather than averages of ratios), including in the decision-theory situation (total utilitarianism rather than average utilitarianism). Thus, in typical situations, using SIA is generally the way to go.
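
To make the "ratios of totals rather than averages of ratios" contrast concrete, here is a minimal sketch of a standard toy incubator case (a hypothetical example, not taken from the linked posts): a fair coin creates one observer on heads and two on tails, all in the same epistemic situation, and each wonders whether the coin landed heads.

```python
# Toy incubator case (hypothetical example): a fair coin is flipped; heads
# creates one observer, tails creates two, all in identical epistemic
# situations, and each asks "did the coin land heads?".

prior = {"heads": 0.5, "tails": 0.5}
observers = {"heads": 1, "tails": 2}  # observers in your situation, per world

# SIA-style question: ratio of totals -- the expected number of heads-world
# observers divided by the expected number of observers overall.
sia_heads = (prior["heads"] * observers["heads"]) / sum(
    prior[w] * observers[w] for w in prior
)

# SSA-style question: average of ratios -- within each world, the fraction of
# its observers who are in a heads world (1 or 0 here), averaged over the prior.
frac_heads = {w: (observers[w] if w == "heads" else 0) / observers[w] for w in prior}
ssa_heads = sum(prior[w] * frac_heads[w] for w in prior)

print(sia_heads)  # 0.333... -- the SIA-style answer
print(ssa_heads)  # 0.5      -- the SSA-style answer
```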

And if we ignore exact duplicates, Boltzmann brains, and simulation arguments, SIA is simply standard Bayesian updating on our existence [LW · GW]. Anthropic probabilities can be computed exactly the same way as non-anthropic probabilities can [LW · GW].
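
And here is a sketch of that update in the same toy coin case (again an assumed illustration, not the linked post's own example): treat yourself as one of two potential observer slots (heads fills only the first, tails fills both) and condition, by ordinary Bayes, on the evidence that your slot is filled.

```python
# Same toy coin case (assumed illustration): two potential observer slots;
# heads fills only slot 1, tails fills both.  A priori you are equally likely
# to be either slot; update on the evidence "my slot is filled".

prior = {"heads": 0.5, "tails": 0.5}
p_exist = {"heads": 0.5, "tails": 1.0}  # P(my slot is filled | world)

unnormalised = {w: prior[w] * p_exist[w] for w in prior}
total = sum(unnormalised.values())
posterior = {w: p / total for w, p in unnormalised.items()}

print(posterior["heads"])  # 0.333... -- the same answer SIA gives above
```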

And there are fewer problems than you might suspect. This doesn't lead to problems with infinite universes [LW · GW] - at least, no more than standard probability theories do. And anthropic updates tend to increase the probability of larger populations in the universe, but that effect can be surprisingly small [LW · GW], given the data we have.

Finally, note that anthropic effects are generally much weaker [LW · GW] than Fermi observation effects. The fact that we don't see life on so many planets tells us a lot more than the fact that we see life on this one.

3 comments


comment by dadadarren · 2021-07-09T16:17:29.172Z · LW(p) · GW(p)

"If there are no issues of exact copies, or advanced decision theory, and the questions you're asking aren't weird, then use SIA. "

So practically FNC? I understand that FNC and SIA converge when the reference class is so restrictive that it only contains the one observer. But I find counter-arguments like this [LW · GW] quite convincing.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2021-07-09T16:26:31.427Z · LW(p) · GW(p)

If there are no exact duplicates, FNC=SIA whatever the reference class is.