The Anthropic Principle: Five Short Examples
post by Optimization Process · 2017-09-26T22:35:24.102Z · 13 comments

Contents
1. Life and death and love and birth
2. ...and peace and war on the planet Earth
3. Improbability upon improbability
4. Supercritical
5. Evidence kills
Conclusion
(content warning: nuclear war, hypothetical guns, profanity, philosophy, one grammatically-incorrect comma for readability’s sake)
This is a very special time of year, when my whole social bubble starts murmuring about nuclear war, and sometimes, some of those murmurers, urging their listeners to worry more, will state: "Anthropic principle."
To teach people about the anthropic principle and how to use it in an epistemically virtuous way, I wrote a short dialogue featuring five examples.
1. Life and death and love and birth
Avery: But how can you not believe the universe was designed for life? It's in this cosmic Goldilocks zone, where if you tweaked the speed of light or Planck's Constant or the electron mass by just a few percent, you couldn't have atoms, you couldn't have any patterns complex enough to replicate and evolve, the universe would just be this entropic soup!
Brook: I'm not sure I buy the "no complex patterns at all" bit, but even granting it for the sake of argument -- have you heard of the anthropic principle?
Avery: No. Explain?
Brook: We're alive. Life can only exist in a universe where life exists, tautologically. If the cosmos couldn't support life, we wouldn't be having this conversation.
Avery: And so, a universe... created... ngh, I vaguely see what you're getting at. Elaborate?
Brook: And so, a universe created by some kind of deity, and tuned for life, is indistinguishable from a universe that happens to have the right parameters by coincidence. "We exist" can't be evidence for our living in one or the other, because that fact doesn't correlate with design-or-lack-of-design -- unless you think that, from inside a single universe, you can derive sensible priors for the frequency with which all universes, both designed and undesigned, can support life?
Avery: I... oof. That argument feels vaguely like cheating, but... I'll think about it.
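(A quick formal gloss of Brook's argument, in notation the dialogue itself doesn't use: by Bayes' theorem, the posterior odds of design are

P(design | we exist) / P(no design | we exist) = [P(we exist | design) / P(we exist | no design)] × [P(design) / P(no design)]

and Brook's claim is that, from inside a single universe, we can't establish that the likelihood ratio P(we exist | design) / P(we exist | no design) differs from 1 -- any observer observes its own existence with certainty -- so the posterior odds of design just equal the prior odds.)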
2. ...and peace and war on the planet Earth
Avery: In any case, it's interesting that the topic of Earth's hospitality comes up today of all days, given that we so nearly made it inhospitable.
Brook: What do you mean, "today of all days"?
Avery: Oh! You don't know!? September 26 is Petrov Day! It's the anniversary of when some Soviet early-warning system sounded an alarm, and the officer in charge noticed something funny about it and wrote it off -- correctly, as it turned out -- as a false alarm. If he hadn't, Russia might have "retaliated," starting nuclear war and destroying the world.
Brook: Yikes.
Avery: Yeah! And Wikipedia has a list of similar incidents. We need to be extremely careful to never get into a Cold-War-like situation again: it was incredibly lucky that we survived, and if we get there again, we'll almost certainly melt the planet into a radioactive slag heap.
Brook: Hmm. Nuclear war is definitely bad, and we should try hard to prevent it. But I suspect things aren't as bad as they appear, and that those reports of near-disaster have been exaggerated due to people's credulity for "cool" shiver-inducing things. Theories should get punished for assigning low probabilities to true things: if your model claims that the odds of surviving the Cold War were only 1:1000, it takes a thousandfold probability hit. Any model that predicts Cold-War-survival better, is correspondingly more plausible.
Avery: Not so! Anthropic principle, remember? If the world had ended, we wouldn't be standing here to talk about it. Just as the fact that intelligent life exists shouldn't surprise us (because we can only exist in a universe with intelligent life), the fact that the world didn't end in 1983 shouldn't surprise us (because we can only exist in a world that didn't dissolve into flames).
Brook: I... see what you're saying...
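(To make Brook's "thousandfold hit" concrete, here is a minimal sketch in Python; the priors and the "safe" model's survival probability are invented purely for illustration:

    # Two toy models of the Cold War (numbers invented), updated on the
    # observation "we survived".
    prior_risky = 0.5        # prior credence in the "1:1000 survival odds" model
    prior_safe = 0.5         # prior credence in a model where survival was near-certain
    p_survive_risky = 0.001  # the risky model's probability of our surviving
    p_survive_safe = 1.0     # the safe model's probability of our surviving

    # Bayes: posterior odds = prior odds * likelihood ratio.
    posterior_odds = (prior_safe / prior_risky) * (p_survive_safe / p_survive_risky)
    print(posterior_odds)  # 1000.0 -- the risky model takes a thousandfold hit

Avery's anthropic rejoinder, of course, is that survivors were guaranteed to observe survival, so the likelihood of "we survived" can't be read off so naively; whether that rejoinder holds is the open question here.)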
3. Improbability upon improbability
Avery: Oh! And that's not all! According to this article, of the officers who took shifts monitoring that station, Petrov was the only one to have had a civilian education; the others were just taught "to issue and obey orders." It's really lucky that the false alarm went off when it did, instead of twelve hours later when somebody else was at the helm.
Brook: That... rings false to me. I expect there was a miscommunication somewhere between Petrov's lips and your eyes: if there were six officers taking shifts watching the early-warning system, and five of them would've pressed the button, you just declared the probability of surviving this false alarm to be six times smaller: your model takes another 6x hit, just like it would if it also claimed that Petrov rolled a die and decided he'd only ignore the warning if it came up 1.
Avery: Anth--
Brook: Don't you dare.
Avery: *coughthropic principlecough*
Brook: Shut your mouth.
4. Supercritical
Avery: Fine, fine, sorry. Change of subject: I have a friend who works at the DoE, and they gave me a neat little trinket last week. Here, hold onto this. Careful: it's small, but super heavy.
Brook: Oka-- oh, jeez, wow, yeah. What is it?
Avery: A supercritical ball of enriched uranium.
Brook: Gyaah! That's not safe-- wait, supercritical? That can't be, it would detonate in less than a microsecond.
Avery: And kill us, yes. But we're still alive! Therefore, we must be on the unfathomably tiny branch of possible universes where, so far, when an atom of U-235 in this ball has fissioned, the resulting neutrons tended to miss all the other atoms. Thus, the chain reaction hasn't yet occurred, and we survive.
Brook: But Av--
Avery: And you might be tempted to say, "But Avery, that's so improbable I can't even express numbers that small in standard mathematical notation. Clearly this ball is merely platinum or osmium or some other dense metal." But remember! Anthropic principle! You're not allowed to use the fact that you're still alive as evidence! The fact that this ball hasn't detonated is not evidence against its being supercritical uranium!
Brook: I-- um. Okay, that is definitely one hundred percent nonsensical sophistry, I just need to put my finger on--
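(For a sense of scale on "unfathomably tiny," here's a back-of-the-envelope sketch in Python, modeling the chain reaction as a Galton-Watson branching process. The model and every number in it are invented for illustration -- this is not real weapons physics:

    import math

    # Each fission spawns Poisson(k) follow-on fissions; supercritical means k > 1.
    # The chance that ONE neutron's chain fizzles out is the smallest fixed point
    # of the Poisson offspring generating function, q = exp(k * (q - 1)).
    def chain_fizzles(k, iterations=1000):
        q = 0.0
        for _ in range(iterations):
            q = math.exp(k * (q - 1))
        return q

    q = chain_fizzles(k=2.0)
    print(q)             # ~0.203: a single chain dies out about 20% of the time

    # But the ball constantly starts fresh chains (spontaneous fissions, stray
    # neutrons), and surviving requires EVERY one of them to fizzle:
    print(q ** (10**6))  # 0.0 -- underflows; the true value is around 10^(-692000)

That is the size of the branch Avery is asking Brook to believe in.)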
5. Evidence kills
Avery: Sophistry!? I'm insulted. In fact, I'm so insulted that I pulled out a gun and shot you.
Brook: ...what?
Avery: Clearly we're living in the infinitesimally tiny Everett branch where the bullet quantum-tunneled through your body! Amazing! How improbable-seeming! But, you know, anthropic principle and all.
Brook: NO. I have you now, sucker: even in the branches where the bullet tunneled through me, I would have seen you draw the gun, I'd have heard the shot, I'd see the bullet hole in the wall behind me.
Avery: Well, that all assumes that the photons from my arm and the wall reached your eye, which is a purely probabilistic quantum phenomenon.
Brook: Yes, but still: of the universes where the bullet tunneled through me, in ninety-nine-point-so-many-nines percent of those universes, there is a bullet hole in the wall. Even ignoring the universes where I'm dead, the lack of a bullet hole is overwhelming evidence against your having shot me.
Avery: Is it, though? When you looked at the wall just now, you saw no bullet hole, yes?
Brook: Yes...
Avery: But you, Brook-who-saw-no-bullet-hole, can basically only exist in a universe where there's no bullet hole to be seen. If there were a bullet hole, you wouldn't exist -- a Brook-who-did-see-a-bullet-hole would stand in your place. Just as the fact that intelligent life exists shouldn't cause you to update (because you can only exist in a universe with intelligent life), the fact that there's no bullet hole shouldn't cause you to update (because you can only exist in a universe without a bullet hole).
Brook: But seeing a bullet hole doesn't kill me.
Avery: There's nothing fundamentally different about death. You wouldn't exist if the world had been destroyed in 1983, but you also wouldn't exist if there were a bullet hole: I'd be talking to Brook-who-saw-a-bullet-hole instead. And the fact that you exist can't be used as evidence.
Brook: Are you high on something!? Are you fucking with me!? You're asking me to throw out the whole notion of evidence!
Avery: Oh, yeah, I'm totally messing with you. Absolutely. Sorry. When did I start, though?
Brook: ...sometime between describing Petrov Day -- that part was true, right? Good. -- and telling me that that ball-of-some-dense-metal was made of uranium.
Avery: Correct. But can you be more specific?
Brook: ...tungsten?
Avery: Heh. Yeah, good guess. But I meant about--
Brook: --yeah. I'm not really sure when you started messing with me. And I'm not really sure when you stopped applying the anthropic principle correctly.
Avery: Hmm. That's too bad. Neither am I.
Conclusion
I have no idea whether the anthropic principle is legit or how to use it, or even whether it has any valid uses.
[cross-posted to a blog; comments here preferred]
13 comments
comment by SilentCal · 2017-09-27T15:53:46.797Z
"...unless you think that, from inside a single universe, you can derive sensible priors for the frequency with which all universes, both designed and undesigned, can support life?"
The corresponding line in part 2 would be
"...unless you think that, from inside a single Cold War outcome, you can derive sensible priors for the frequency with which all Cold War outcomes can support life?"
Which we kind of can, but imprecisely.
In part 4, it would be
"...unless you think that, from inside a single uranium-ball-outcome, you can derive sensible priors for the frequency with which all uranium-ball-outcomes leave us alive?"
Which we very obviously can.
↑ comment by Stuart_Armstrong · 2017-11-01T09:06:37.210Z
Just read your answer here, after posting mine on somewhat similar lines: https://www.lesserwrong.com/posts/4ZRDXv7nffodjv477/anthropic-reasoning-isn-t-magic
↑ comment by magfrump · 2017-09-27T22:33:25.854Z
Just to be really explicit about all this and make sure I'm understanding correctly:
Our ability to derive stats in the part 4 scenario has two causes.
One is that we can observe obviously related scenarios (other uranium balls).
Two is that we can factor "leave us alive" through the related circumstances of "tons of radiation".
In the part 2 scenario, we can try to make educated guesses, but we have less power.
There have been other cold wars, but not other nuclear cold wars.
We can make guesses about how likely "we are to live" based on how many bombs would get launched, make guesses about that based on how many bombs exist, etc.
In the part 1 scenario, we have no power on either side.
We can't make any observations about realities with alternate physics (though this may become possible with sufficiently advanced computing?).
We don't have a reference class for how likely it is that sapient/sentient life could or would evolve in those alternate scenarios, because while we have many ways of understanding death and can extrapolate to everyone dying, we have only one example of sapient life and can't extrapolate that to other physics.
Now I've gotten to the point where I don't know what work the anthropic principle is doing. Is it telling us that if we don't have any basis for constructing a prior we can't just assume the ignorance prior? I think that's right but I'm not sure.
↑ comment by nostalgebraist2point0 · 2017-09-28T03:28:00.724Z
IMO, the anthropic principle boils down to "notice when you are trying to compute a probability conditional on your own existence, and act accordingly."
A really simple example, where the mistake is obvious(ly wrong), is "isn't it amazing that we live on a planet that's just the right distance from its star (etc.) to support life?" No, this can't be amazing. The question presupposes a "we" who live on some planet, so we're looking for something like P(we live on a habitable planet | we are inhabiting a planet), which is, well, pretty high. The fact that there are many uninhabitable planets doesn't make the number any smaller. When I hear the phrase "anthropic principle," this is the kind of prototype case I think of.
I don't think it's correct to phrase this as "you're not allowed to use the fact that you exist as evidence." Your own existence can be used as evidence for many propositions, like "habitable planets exist" or (more weakly) "being named [your first name] is not a crime punishable by death in your country." The point is, neither of those are conditioned on your existence. (For a case like the above, imagine your first name is punishable by death almost everywhere, and you find yourself marveling at the fact that you were, "of all places," born in the one tiny country where it isn't.)
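A toy version of the habitable-planet calculation, with an invented base rate; the point is that the second number printed is 1.0 no matter how small the first is:

    import random

    random.seed(0)
    N_PLANETS = 10**6
    P_HABITABLE = 1e-4  # invented base rate of habitability

    habitable = [random.random() < P_HABITABLE for _ in range(N_PLANETS)]

    # Unconditionally, habitability is rare:
    print(sum(habitable) / N_PLANETS)  # ~0.0001

    # But observers only arise on habitable planets, so a randomly sampled
    # OBSERVER finds their planet habitable with probability 1:
    observer_planets = [h for h in habitable if h]
    print(sum(observer_planets) / len(observer_planets))  # 1.0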
The cosmic fine-tuning argument is a whole other can of worms, because we may not actually have any sensible choice of probability measure for those supposedly "fine-tuned" constants. (This was my knee-jerk reaction upon first hearing about the issue, and I stick by it; I vaguely remember reading something once that made me question this, but I can't remember what it was.) That is, when we talk about "if the constants were a little bit different..." we are using intuitions from the real world, in which physical quantities we observe are pushed and pulled by a complicated web of causes and usually cannot be counted on to stay within a very tiny range (relative to their magnitude). But if the universe is just what it is, full stop, then there is no "complicated web of causes," so this intuition is mis-applied.
As a purely philosophical issue, this is muddled by the way that fundamental physicists prefer to simplify things as far as possible. There is a legitimate complaint made by physicists that the many arbitrary parameters are "ugly," and a corresponding desire that they be reduced to something with fewer degrees of freedom, as the periodic table did for elements and the quark model did for hadrons. A desire for fewer degrees of freedom is not exactly the same thing as a desire for less fine-tuning, but the desires are psychologically related and thus easy for people to conflate -- both desires would be satisfied by some final theory that feels sufficiently "natural," a few clean elegant equations with no jagged funny bits sticking off of the ends.
↑ comment by Conor Moreton · 2017-09-27T22:01:20.356Z
This is why symbolic logic rules.
comment by nostalgebraist2point0 · 2017-09-27T00:19:08.083Z
This all seems like exploiting ambiguity about what your conditional probabilities are conditional on.
Conditional on "you will be around a supercritical ball of enriched uranium and alive to talk about it," things get weird, because that's such a low-probability event to begin with. I suspect I'd still favor theories that involve some kind of unknown/unspecified physical intervention, rather than "the neutrons all happened to miss," but we should notice that we're conditioning on a very low probability event and things will get weird.
Conditional on "someone telling me I'm around a supercritical ball of enriched uranium and alive to talk about it," they're probably lying or otherwise trolling me.
Conditional on "I live in a universe governed by the standard model and I'm alive to talk about it," the constants are probably tuned to support life.
Conditional on "the Cold War happened, lasted for a number of decades, and I'm alive to talk about it," humanity was probably (certainly?) not wiped out.
Once you think about it this way, any counterintuitive implications for prediction go away. For instance, we don't get to say nuclear cold wars aren't existentially dangerous because they aren't if we condition on humanity surviving them -- that's conditioning on the event whose probability we're trying to calculate! But we also can't discount "we survived the cold war" as (some sort of) evidence that cold wars might be less dangerous than we thought. For prediction (and evaluation of retro-dictions), the right event to condition on is "having a cold war" (but not necessarily surviving it).
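The same asymmetry as a short simulation, with an invented "true" risk:

    import random

    random.seed(0)
    P_EXTINCTION = 0.3  # invented true risk per cold war -- what we want to estimate
    N_WORLDS = 10**5

    wiped_out = [random.random() < P_EXTINCTION for _ in range(N_WORLDS)]

    # Conditioning on "a cold war happened": every world counts, and we
    # recover the true risk.
    print(sum(wiped_out) / N_WORLDS)  # ~0.3

    # Conditioning on "a cold war happened AND someone survived to do the
    # estimate": only surviving worlds count, and the estimate collapses.
    survivors = [w for w in wiped_out if not w]
    print(sum(survivors) / len(survivors))  # 0.0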
↑ comment by ESRogs · 2017-09-27T01:01:27.239Z
On the Cold War thing, I think the lesson to learn depends on whether situations that start with a single nuclear launch reliably (and rapidly) escalate into world-destroying conflicts.
If (nuclear launch) -> (rapid extinction), then it seems like the anthropic principle is relevant, and the close calls really might have involved improbable luck.
If, on the other hand, (nuclear launch) -> (perhaps lots of death, but usually many survivors), then this suggests the stories of how close the close calls were are exaggerated.
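A toy contrast of the two regimes; the launch probability and counts are invented:

    import random

    random.seed(0)
    N_WORLDS, N_CLOSE_CALLS = 10**5, 10
    P_LAUNCH = 0.3  # invented chance that any one close call escalates to a launch

    def surviving_histories(launch_is_fatal):
        histories = []
        for _ in range(N_WORLDS):
            launches = sum(random.random() < P_LAUNCH for _ in range(N_CLOSE_CALLS))
            if launches == 0 or not launch_is_fatal:
                histories.append(launches)
        return histories

    # Regime 1: (nuclear launch) -> (rapid extinction). Survivors' records are
    # launch-free by construction, so a clean record says nothing about P_LAUNCH,
    # and the close calls really might have been that close.
    print(max(surviving_histories(launch_is_fatal=True)))  # 0

    # Regime 2: launches are survivable. A launch-free record now has probability
    # (1 - 0.3)**10 ~ 0.03, so actually observing one is genuine evidence that
    # P_LAUNCH is lower than the close-call stories suggest.
    histories = surviving_histories(launch_is_fatal=False)
    print(sum(1 for h in histories if h == 0) / len(histories))  # ~0.028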
↑ comment by scarcegreengrass · 2017-10-05T22:09:41.050Z
Your 'if' statements made me update. I guess there is also a distinction between the conclusions one can draw from this type of anthropic reasoning.
One (maybe naive?) conclusion is that 'the anthropic principle is protecting us'. If you think the anthropic principle is relevant, then you continue to expect it to allow you to evade extinction.
The other conclusion is that 'the anthropic perspective is relevant to our past but not our future'. You consider anthropics to be a source of distortion on the historical record, but not a guide to what will happen next. Under this interpretation you would anticipate extinction of [humans / you / other reference class] to be more likely in the future than in the past.
I suspect this split depends on whether you weight your future timelines by how many observers are in them, etc.
comment by WSS · 2017-09-28T18:49:11.880Z
6: When will it end?
Brook: You know what, I think it’s far more likely that you’re messing with me than you actually shot me. But I’ll concede that it is possible that you did actually shoot me, and the only reason I’m standing here talking to you is because I am forced to take an Everett branch that allows me to be standing here talking to you.
Avery: Well actually, in most of them you end up bleeding out on the floor while you tell me this.
B: And then I die.
A: From my perspective, yeah, most likely. From yours, there will be some branches where a team of paramedics happens to drive by and save you, and if you are to be conscious at all in the future it will be in those branches.
B: Ok, but in most of those I die and simply stop experiencing reality.
A: Maybe. Or maybe you’re guaranteed to take the conscious path, since there must be some future state which has your present as its past.
B: Are you saying that I can’t die? That’s ludicrous!
A: I’m saying that your conscious experience might not ever end, since there’s always a branch where it won’t. And the ones where it does won’t be around to talk about it.
B: So if I make a bomb that is set to blow me up if I don’t win tomorrow’s Powerball jackpot, the next day I’m guaranteed to have the subjective experience of walking away with several hundred million?
A: Well most likely you end up horribly maimed, disfigured, and concussed, unable to do anything until someone takes pity on you, uploads you into a computer, and you live for eternity in some experience we can't imagine. That's where your subjective Everett branch is going to end up regardless, but it'll be nice to skip the maiming portion.
B: This all seems pretty shaky.
A: Yeah I'm not very confident in that line of reasoning myself. Certainly not better than the 1:292,201,338 Powerball odds.
B: You didn’t shoot me, did you?
A: No way! Do you know how infinitesimally small the wavelength of a bullet is?
Quantum immortality is a natural extension of the anthropic principle, but I’m far less confident about using it to say anything about future states rather than using it to reason about your current one.
comment by Conor Moreton · 2017-09-27T08:25:33.752Z
Between story 1 and story 2.
(I'm not actually able to explain my intuition, and also my intuition is wrong an awful lot, but that's my Gordian-knot answer.)
Also thanks for messing with my head and uncovering a confusion I'd glossed over.
↑ comment by Stuart_Armstrong · 2017-11-01T09:04:47.796Z
Yep: https://www.lesserwrong.com/posts/4ZRDXv7nffodjv477/anthropic-reasoning-isn-t-magic
comment by Ben Pace (Benito) · 2017-09-27T00:32:08.911Z
Added to the frontpage.
↑ comment by Ben Pace (Benito) · 2017-09-27T21:52:46.984Z
Promoted to featured, for solidly communicating a common confusion in the community.