How Many LHC Failures Is Too Many?

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-09-20T21:38:27.000Z · LW · GW · Legacy · 140 comments

Recently the Large Hadron Collider was damaged by a mechanical failure.  This requires the collider to be warmed up, repaired, and then cooled down again, so we're looking at a two-month delay.

Inevitably, many commenters said, "Anthropic principle!  If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"

This remark may be somewhat premature, since I don't think we're yet at the point in time when the LHC would have started producing collisions if not for this malfunction.  However, a few weeks(?) from now, the "Anthropic!" hypothesis will start to make sense, assuming it can make sense at all.  (Does this mean we can foresee executing a future probability update, but can't go ahead and update now?)

As you know, I don't spend much time worrying about the Large Hadron Collider when I've got much larger existential-risk-fish to fry.  However, there's an exercise in probability theory (which I first picked up from E.T. Jaynes) along the lines of, "How many times does a coin have to come up heads before you believe the coin is fixed?"  This tells you how low your prior probability is for the hypothesis.  If a coin comes up heads only twice, that's definitely not a good reason to believe it's fixed, unless you already suspected it from the beginning.  But if it comes up heads 100 times, it's taking you too long to notice.
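
A minimal sketch of that exercise in code, under the simplifying assumption that a "fixed" coin always comes up heads: each head doubles the odds in favor of "fixed", so the number of heads you need to see is roughly the number of doublings it takes to overcome your prior.

```python
import math

def heads_needed(prior_fixed: float) -> int:
    """Consecutive heads needed before 'fixed (always heads)' overtakes 'fair coin'.

    Each head multiplies the odds for 'fixed' by 2, since P(H|fixed)=1 and P(H|fair)=0.5.
    """
    prior_odds = prior_fixed / (1 - prior_fixed)
    return math.ceil(math.log2(1 / prior_odds))

print(heads_needed(0.01))    # ~7 heads: you suspected it from the beginning
print(heads_needed(1e-6))    # ~20 heads
print(heads_needed(1e-30))   # ~100 heads: if it takes this long, your prior was absurdly low
```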

So - taking into account the previous cancellation of the Superconducting Supercollider (SSC) - how many times does the LHC have to fail before you'll start considering an anthropic explanation?  10?  20?  50?

After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?

140 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by frelkins · 2008-09-20T22:25:51.000Z · LW(p) · GW(p)

"But if it comes up heads 100 times, it's taking you too long to notice"

Ros. Heads. (He puts it in his bag. The process is repeated.) Heads. (Again.) Heads. (Again.) Heads. (Again.)
Guil. (Flipping a coin) There is an art to the building of suspense.
Ros. Heads.
Guil. (Flipping another) Though it can be done by luck alone.
Ros. Heads.
Guil. If that's the word I'm after.
Ros. (Raises his head) 76! (Guil gets up but has nowhere to go. He spins the coin over shoulder without looking at it.) Heads.
Guil. A weaker man might be moved to re-examine his faith, if in nothing else at least in the law of probability. (He flips a coin back over his shoulder.)
Ros. Heads.
Guil. (Musing) The law of probability, it has been asserted, is something to do with the proposition that if six monkeys - (He has surprised himself) if six monkeys were. . .
Ros. Game?
Guil. Were they?
Ros. Are you?

-- Rosencrantz & Guildenstern Are Dead, Tom Stoppard, Act I

Replies from: None
comment by [deleted] · 2012-06-04T14:03:11.310Z · LW(p) · GW(p)

Might want to reformat that, looks like markdown did you in.

comment by Greg2 · 2008-09-20T22:39:25.000Z · LW(p) · GW(p)

Perhaps the question could also be asked this way: How many times does the LHC have to inexplicably fail before we take it as scientific confirmation that world-destroying black holes and/or strange particles are indeed produced by LHC-level collisions? Would we treat such a scenario as a successful experimental result for the LHC?

Replies from: AlexanderRM
comment by AlexanderRM · 2015-09-07T19:24:16.860Z · LW(p) · GW(p)

I wouldn't describe a result that eliminated the species conducting the experiment in the majority of world-branches as "successful", although I suppose the use of LHCs could be seen as an effective use of quantum suicide (two species which want the same resources meet, flip a coin, and the loser kills themselves - this might have problems with enforcement) if every species invariably experiments with them before leaving their home planet.

On the post as a whole: I was going to say that since humans in real life don't use the anthropic principle in decision theory, that seems to indicate that applying it isn't optimal (if your goal is to maximize the number of world-branches with good outcomes). But I realized that humans are able to observe other humans and what sorts of things tend to kill them, along with hearing about those things from other humans when we grow up, so we're almost never having close calls with death frequently enough to need to apply the anthropic principle. If a human were exploring an unknown environment with unknown dangers by themselves, and tried to consider the anthropic principle... that would be pretty terrifying.

comment by Pseudonymous3 · 2008-09-20T22:45:25.000Z · LW(p) · GW(p)

John Cramer wrote a novel with an anthropic explanation for the cancellation of the SSC:

http://www.amazon.com/Einsteins-Bridge-John-Cramer/dp/0380788314

comment by Peter3 · 2008-09-20T22:47:24.000Z · LW(p) · GW(p)

Just to make sure I'm getting this right... this is sort of along the same lines of reasoning as quantum suicide?

It depends on the type of "fail" - quenches are not uncommon. And also their timing - the LHC is so big, and it's the first time it's been operated. Expect malfunctions.

But if it were tested for a few months before, to make sure the mechanics were all engineered right, etc., I guess it would only take a few (less than 10) instances of the LHC failing shortly before it was about to go big for me to seriously consider an anthropic explanation. If it's mechanically sound and still miraculously failing every time the dials get turned up high, it's likely enough to consider.

"After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?"

Not sure what is meant by that.

comment by Greg2 · 2008-09-20T22:49:22.000Z · LW(p) · GW(p)

Another thought. Suppose a functioning LHC does in fact produce world-destroying scenarios. Would we see: A) an LHC with mechanical failures? or B) an LHC where all collisions happen except world-destroying ones? If B, would the LHC be giving us biased experimental results?

comment by Michael_Blume · 2008-09-20T22:53:00.000Z · LW(p) · GW(p)

I'm confused by your last comment - what use would the LHC be in a global economic crisis or nuclear war? I don't suppose you mean something like "rig the LHC to activate if the market does not recover by date X according to measure Y, and then we will only be able to observe the scenario in which the market does recover" or something like that, do you?

Replies from: drethelin, amaury-lorin
comment by drethelin · 2015-09-08T00:50:58.888Z · LW(p) · GW(p)

I think the idea is you only run it if you're already indifferent to the world being destroyed?

comment by momom2 (amaury-lorin) · 2023-01-15T00:41:25.952Z · LW(p) · GW(p)

By precommitting to firing up the LHC in difficult moments, assuming firing up the LHC destroys the world, you end up observing only universes where difficult moments don't happen (at a cost I would describe as "at best ambiguous").

comment by steven · 2008-09-20T22:57:03.000Z · LW(p) · GW(p)

IMHO if anthropics worked that way and if the LHC really were a world-killer, you'd find yourself in a world where we had the propensity not to build the LHC, not one where we happened not to build one due to a string of improbable coincidences.

Replies from: wafflepudding
comment by wafflepudding · 2016-09-29T02:28:39.371Z · LW(p) · GW(p)

I'd agree that certain worlds would have the building of the LHC pushed back or moved forward, but I doubt there would be many where the LHC was just never built. Unless human psychology is expected to be that different from world to world?

Replies from: hairyfigment
comment by hairyfigment · 2016-09-29T22:15:19.208Z · LW(p) · GW(p)

...As I pointed out recently in another context, humans have existed for tens of thousands of years or more. Even civilization existed for millennia before obvious freak Isaac Newton started modern science. Your position is a contender for the nuttiest I've read today.

Possibly it could be made better by dropping this talk of worlds and focusing on possible observers, given the rise in population. But that just reminds me that we likely don't understand anthropics well enough to make any definite pronouncements.

Replies from: wafflepudding
comment by wafflepudding · 2016-10-02T01:04:03.440Z · LW(p) · GW(p)

Are you responding to "Unless human psychology is expected to be that different from world to world?"? Because that's not my position, I'd think that most things recognizable as human will be similar enough to us that they'd build an LHC eventually. I guess I'm not exactly sure what you're getting at.

Replies from: hairyfigment
comment by hairyfigment · 2016-10-02T01:33:56.699Z · LW(p) · GW(p)

I am strongly disagreeing with you. The cultures that existed on Earth for tens of millennia or more were recognizably human; one of them built an LHC "eventually", but any number of chance factors could have prevented this. Like I just said, modern science started with an extreme outlier.

Replies from: wafflepudding, ChristianKl
comment by wafflepudding · 2016-10-02T09:04:04.333Z · LW(p) · GW(p)

Gotcha. So, assuming that the actual Isaac Newton didn't rise to prominence*, are you thinking that human life would usually end before his equivalent came around and the ball got rolling? Most of our existential risks are manmade AFAICT. Or you think that we'd tend to die in between him and when someone in a position to build the LHC had the idea to build the LHC? Granted, him being "in a position to build the LHC" is conditional on things like a supportive surrounding population, an accepting government, etcetera; but these things are ephemeral on the scale of centuries.

To summarize, yes, some chance factor would def prevent us from building the LHC at the exact time we did, but with a lot of time to spare, some other chance factor would prime us to build it somewhen else. Building the LHC just seems to me like the kind of thing we do. (And if we die from some other existential risk before Hadron Colliding (Largely), that's outside the bounds of what I was originally responding to, because no one who died would find himself in a universe at all.)

*Not that I'm condoning this idea that Newton started science.

Replies from: hairyfigment
comment by hairyfigment · 2016-10-08T06:10:59.209Z · LW(p) · GW(p)

but these things are ephemeral on the scale of centuries.

That's what I just said. You seem to have an alarming confidence in our ability to bounce back from ephemeral shifts. If there were actually some selection pressure against a completed LHC, then it would take a lot less than a repetition of this to keep us shifted away from building one.

comment by ChristianKl · 2016-10-02T19:17:02.301Z · LW(p) · GW(p)

Like I just said, modern science started with an extreme outlier.

There's a lot of history of science, and it generally doesn't find that it all hinges on one figure like Newton.

Replies from: hairyfigment
comment by hairyfigment · 2016-10-08T05:58:56.961Z · LW(p) · GW(p)

We're not talking about all of science. (Though I stand by my claim that he started it, unless you can point to someone else writing down a workable scientific method beforehand.) We're talking about whether or not anthropic reasoning tells us to expect to see people building the LHC, at a cost of $1 billion per year.

Thatcher apparently rejected the idea as presented, and rightly too if the Internet accurately reported the pitch they made to her. (In this popular account, the Higgs mechanism doesn't "explain mass," it replaces one arbitrary number with another! I still don't know the actual reasons for believing in it!) So we don't need to imagine humanity dying out, and we don't need to assume that civilization collapses after using up irreplaceable fossil fuels. (Though that one seems somewhat plausible.) I don't think we even need to assume religious tyranny crushes respect for science. Slightly less radical changes to the culture of a small fraction of the world seem sufficient to prevent the LHC expenditure for the foreseeable future. Add in uncertainty about various risks that fall short of total annihilation, and this certainty starts to look ridiculous.

Now as I said, one could make a different anthropic argument based on population in various 'worlds'. But as I also said, I don't think we know enough to get a high probability from that either.

Replies from: ChristianKl
comment by ChristianKl · 2016-10-08T15:56:38.589Z · LW(p) · GW(p)

Though I stand by my claim that he started it, unless you can point to someone else writing down a workable scientific method beforehand

Hakob Barseghyan teaches in his History and Philosophy of Science course that Descartes started it. The hypothetico-deductive method (what's commonly called the scientific method) is a result of the philosophic commitments of Descartes' thought.

Replies from: hairyfigment
comment by hairyfigment · 2016-10-09T00:12:59.100Z · LW(p) · GW(p)

The video is somewhat odd in that he claims Descartes had no problem with experiments, but I recall the philosopher proposing rules which contradicted experiments and hand-waving this by appealing to the impossibility of observing bodies in isolation.

In any case, Hakob does make clear that Descartes used a more Aristotelian method as a rhetorical device to persuade Aristotelians. (In effect, he proved the method of intuitive truth unreliable by producing a contradiction.) I don't believe his work includes any workable method you could use to do science, while Newton's rules for natural philosophy seem like an OK approximation.

Replies from: ChristianKl
comment by ChristianKl · 2016-10-09T10:20:18.285Z · LW(p) · GW(p)

The main point is that if you buy the philosophic commitments of Descartes, the hypothetico-deductive method is a straightforward conclusion. Newton might have expressed the method more clearly, but various people moved in that direction once Descartes successfully argued against the old way.

Replies from: hairyfigment
comment by hairyfigment · 2016-10-10T01:39:19.230Z · LW(p) · GW(p)

Possibly, but I wouldn't say the popes started science by being terrible rulers, thereby creating a clearer distinction between religious and secular.

Replies from: ChristianKl, ChristianKl
comment by ChristianKl · 2016-10-10T09:17:40.818Z · LW(p) · GW(p)

thereby creating a clearer distinction between religious and secular.

Given that Newton was a person who cared about religion, that would be a bad example. He spent a lot of time on biblical chronology.

You claimed that science wouldn't have been invented at the time without Newton. It's historically no accident that Leibniz discovered calculus independently from Newton. The interest in numerical reasoning was already there.

To get back to the claim, following the scientific method and explicitly writing it down are two different activities. It takes time to move from the implicit to the explicit.

Replies from: hairyfigment
comment by hairyfigment · 2016-10-11T01:51:03.857Z · LW(p) · GW(p)

But Newton didn't propose a religious method for science, which is my point. Did you think I meant that the popes turned Dante atheist? What they did was give him a desire for a secular ruler and an "almost messianic sense of the imperial role".

That sort of thinking may have given rise to Descartes' science fiction, so to speak - secular aspirations which go beyond even a New Order of the Ages. So there are a few possible prerequisites for a scientific method. As for someone else writing one down, maybe; what we observe is that the best early formulation came from a brilliant freak.

Replies from: ChristianKl
comment by ChristianKl · 2016-10-11T13:35:37.632Z · LW(p) · GW(p)

Why do you think that Newton's proposal of his method of science had something to do with a desire for a secular ruler?

Replies from: hairyfigment
comment by hairyfigment · 2016-10-11T22:40:34.384Z · LW(p) · GW(p)

Why do you think Newton's focus on new observations/experiments came from Cartesian ontology, when Newton doesn't wholly buy that ontology?

I'm saying the popes inadvertently created a separate concept of secular aspirations - often opposed to religious authorities, though not to God if he turns out to exist. This "imperial role" business is arguably a rival form of the idea, though Newton did in fact work for the Crown.

Replies from: ChristianKl
comment by ChristianKl · 2016-10-12T10:30:56.841Z · LW(p) · GW(p)

My main source is the lecture series towards which I linked above. The Newtonian worldview is presented in the lecture that follows the one I linked.

This "imperial role" business is arguably a rival form of the idea, though Newton did in fact work for the Crown.

At the time the Crown was the head of the church in England.

comment by ChristianKl · 2016-10-10T13:55:57.956Z · LW(p) · GW(p)

Asking on StackExchange gives a variety of people before Newton: http://hsm.stackexchange.com/questions/5275/was-isacc-newton-the-first-person-to-articulate-the-scientific-method-in-europe/5277#5277

Replies from: hairyfigment
comment by hairyfigment · 2016-10-11T02:03:56.683Z · LW(p) · GW(p)

Even there, someone points out that Bacon wasn't big on math. I'll grant you I should give him more credit for a sensible conclusion on heat, and for encouraging experiments.

comment by steven · 2008-09-20T23:02:08.000Z · LW(p) · GW(p)

Sorry, make that "happened not to build one that worked".

comment by Z._M._Davis · 2008-09-20T23:14:31.000Z · LW(p) · GW(p)

Say our prior odds for the LHC being a destroyer of worlds are a billion to one against. Then this hypothesis is at negative ninety decibels. Conditioned on the hypothesis being true, the probability of observing failure is near unity, because in the modal worlds where the world really is destroyed, we don't get to make an observation--or we won't get to remember it very long. Say that conditioned on the hypothesis being false, the probability of observing failure is one-fifth--this is very delicate equipment, yes? So each observation of failure gives us 10log(1/0.2), or about seven decibels of evidence for the hypothesis. We need ninety decibels of evidence to bring us to even odds; ninety divided by seven is about 12.86. So under these assumptions it takes thirteen failures before we believe that the LHC is a planet-killer.
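
A quick check of that arithmetic, plugging in the same assumed numbers (a one-in-a-billion prior, a one-in-five chance of innocent failure):

```python
import math

prior_odds = 1e-9            # prior odds that the LHC is a planet-killer: a billion to one against
p_fail_if_killer = 1.0       # given survival, a planet-killing LHC is always observed to fail
p_fail_if_safe = 0.2         # delicate equipment fails for innocent reasons one time in five

prior_db = 10 * math.log10(prior_odds)                                # -90 decibels
db_per_failure = 10 * math.log10(p_fail_if_killer / p_fail_if_safe)   # ~7 decibels of evidence each

print(prior_db, db_per_failure, math.ceil(-prior_db / db_per_failure))
# -90.0  ~6.99  13  -> thirteen failures to reach even odds
```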

comment by Allan_Crossman · 2008-09-20T23:38:27.000Z · LW(p) · GW(p)

First collisions aren't scheduled to have happened yet, are they? In which case, the failure can't be seen as anthropic evidence yet, since we might as well be in a world where it hasn't failed, since such a world wouldn't have been destroyed yet in any case.

But if I'm not mistaken, even old failures will become evidence retrospectively once first collisions are overdue, since (assuming the unlikely case of the LHC actually being dangerous) all observers still alive would be in a world where the LHC failed; when it failed being irrelevant.

As much as the AP fascinates me, it does my head in. :)

comment by Hopefully_Anonymous · 2008-09-21T00:06:47.000Z · LW(p) · GW(p)

Eliezer, it's a good question and a good thought experiment, except for the last sentence, which assumes a conservation of us as subjective conscious entities that the anthropic principle doesn't seem to me to endorse.

You can also add into your anthropic principle mix the odds that increasing numbers of experts think we can solve biological aging within our lifetime, or perhaps that should be called the solipsistic principle, which may be more relevant for us as persisting observers.

comment by komponisto2 · 2008-09-21T00:29:13.000Z · LW(p) · GW(p)

At the risk of asking the obvious:

Does the fact that no one has yet succeeded in constructing transhuman AI imply that doing so would necessarily wipe out humanity?

Replies from: CarlShulman
comment by CarlShulman · 2013-05-02T18:59:27.504Z · LW(p) · GW(p)

No.

Replies from: shminux
comment by shminux · 2013-05-02T19:06:36.950Z · LW(p) · GW(p)

But does it increase the probability of it, and if so, by how much?

comment by Yvain2 · 2008-09-21T00:49:53.000Z · LW(p) · GW(p)

Originally I was going to say yes to the last question, but after thinking over why a failure of the LHC now (before it would destroy Earth) doesn't let me conclude anything by the anthropic principle, I'm going to say no.

Imagine a world in which CERN promises to fire the Large Hadron Collider one week after a major terrorist attack. Consider ten representative Everett branches. All those branches will be terrorist-free for the next few years except number 10, which is destined to suffer a major terrorist attack on January 1, 2009.

On December 31, 2008, Yvains 1 through 10 are perfectly happy, because they live in a world without terrorist attacks.

On January 2, 2009, Yvains 1 through 9 are perfectly happy, because they still live in worlds without terrorist attacks. Yvain 10 is terrified and distraught, both because he just barely escaped a terrorist attack the day before, and because he's going to die in a few days when they fire the LHC.

On January 8, 2009, CERN fires the LHC, killing everyone in Everett branch 10.

Yvains 1 through 9 aren't any better off than they would've been otherwise. Their universe was never destined to have a terrorist attack, and it still hasn't had a terrorist attack. Nothing has changed.

Yvain 10 is worse off than he would have been otherwise. If not for the LHC, he would be recovering from a terrorist attack, which is bad but not apocalyptically so. Now he's dead. There's no sense in which his spirit has been averaged out over Yvains 1 through 9. He's just plain dead. That can hardly be considered an improvement.

Since it doesn't help anyone and it does kill a large number of people, I'd advise CERN against using LHC-powered anthropic tricks to "prevent" terrorism.

comment by Michael_Blume · 2008-09-21T01:18:09.000Z · LW(p) · GW(p)

Unless you just consider it a Mouse That Roared scenario in which no one dares commit a terrorist attack under threat of global annihilation.

(just read the book, it's well worth it)

comment by Vladimir_Nesov · 2008-09-21T01:21:21.000Z · LW(p) · GW(p)

Blowing up the world in response to terrorist attack is like shooting yourself in the head when someone steps on your foot, to make subjective probability of your feet being stepped on lower.

comment by Yvain2 · 2008-09-21T01:31:35.000Z · LW(p) · GW(p)

Just realized that several sentences in my previous post make no sense because they assume Everett branches were separate before they actually split, but think the general point still holds.

Replies from: AlexanderRM
comment by AlexanderRM · 2015-09-07T19:30:17.366Z · LW(p) · GW(p)

Some of the factors leading to a terrorist attack succeeding or failing would be past the level of quantum uncertainty before the actual attack happens, so unless the terrorists are using bombs set up on the same principle as the trigger in Schrödinger's Cat, the branches would have split already before the attack happened.

comment by steven · 2008-09-21T01:41:28.000Z · LW(p) · GW(p)

This remark may be somewhat premature, since I don't think we're yet at the point in time when the LHC would have started producing collisions if not for this malfunction. However, a few weeks(?) from now, the "Anthropic!" hypothesis will start to make sense, assuming it can make sense at all. (Does this mean we can foresee executing a future probability update, but can't go ahead and update now?)

I can only see this statement making any sense if you think we should behave as if nature first randomly picked a value of a global cross-world time parameter, then randomly picked an observer (in any world) alive at that time, and that observer is you. (Actually I can't see it making any sense even then.) But that's not thinking 4D! Choosing a random observer in all of spacetime makes much more sense.

comment by Mitchell_Porter · 2008-09-21T03:54:01.000Z · LW(p) · GW(p)

Inevitably, many commenters said, "Anthropic principle! If the LHC had worked, it would have produced a black hole or strangelet or vacuum failure, and we wouldn't be here!"

This remark may be somewhat premature

Uh, isn't it actually nonsense? The anthropic principle is supposed to explain how you got lucky enough to exist at all, not how you got lucky enough to keep existing.

comment by Nominull3 · 2008-09-21T04:07:46.000Z · LW(p) · GW(p)

The anthropic principle strikes me as being largely too clever for its own good, at least, the people who think you can sort a list in linear time by randomizing the list, checking if it's sorted, and if it's not, destroying the world.

Replies from: datadataeverywhere, wedrifid, wedrifid
comment by datadataeverywhere · 2010-09-29T19:50:31.001Z · LW(p) · GW(p)

Strictly speaking, how does one randomize a list in linear time?

Even picking a uniformly-randomized list from all possible sequences is out of reach for us under most scenarios with reasonably long lists.

Replies from: wedrifid
comment by wedrifid · 2010-09-30T01:43:13.094Z · LW(p) · GW(p)

A uniform randomization may not be possible but you can get an arbitrarily well randomized list in linear time. That is all that is needed for the purposes of the sorting. (You would just end up destroying the world 1 + (1 / arbitrarily large) as many times as with a uniform distribution.)

Replies from: datadataeverywhere
comment by datadataeverywhere · 2010-09-30T02:28:33.534Z · LW(p) · GW(p)

Algorithms like a modified Fisher-Yates shuffle run in linear time if you're just measuring reads and writes, but O(lg(n!)) > O(n) bits are required to specify which permutation is being chosen, so unless generating random numbers is free, shuffling is always O(n log n).

In real life, we don't use PRNGs with sufficiently long cycle times, so we usually get linear-time shuffles by discarding the vast majority of the potential orderings.
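
For reference, a minimal sketch of the Fisher-Yates shuffle under discussion; it is linear in array reads and writes, with each step consuming the O(log n) random bits counted above:

```python
import random

def fisher_yates_shuffle(items: list) -> None:
    """Shuffle in place: n-1 swaps, but each randrange call needs O(log n) random bits."""
    for i in range(len(items) - 1, 0, -1):
        j = random.randrange(i + 1)          # uniform index in [0, i]
        items[i], items[j] = items[j], items[i]

deck = list(range(10))
fisher_yates_shuffle(deck)
print(deck)                                  # one of the 10! possible orderings
```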

comment by wedrifid · 2010-09-30T01:45:07.054Z · LW(p) · GW(p)

The anthropic principle strikes me as being largely too clever for its own good, at least, the people who think you can sort a list in linear time by randomizing the list, checking if it's sorted, and if it's not, destroying the world.

That seems to be a rational decision for people with certain value systems. Specifically, those that don't care about their quantum measure. (Yes, that value system is at least as insane as Clippy's.)

"Quantum Sour Grapes" seems like a suitable label for the strategy. ;)

comment by wedrifid · 2010-09-30T02:00:15.837Z · LW(p) · GW(p)

It just occurred to me that you would want to be REALLY careful that there wasn't a bug in either your shuffling or list checking code.

If you started using quantum suicide for all your problems eventually you'd make a mistake. :)

Replies from: TheOtherDave
comment by TheOtherDave · 2010-11-14T16:44:30.733Z · LW(p) · GW(p)

If I'm following the reasoning (if "reasoning" is in fact the right word, which I'm unconvinced of), you wouldn't make any world-destroying mistakes that it's possible for you not to make, since only the version of you that (by chance) made no such mistakes would survive.

And, obviously, there's no point in even trying to avoid world-destroying mistakes that it's not possible for you not to make.

comment by db2 · 2008-09-21T07:54:13.000Z · LW(p) · GW(p)

The anthropic principle strikes me as being largely too clever for its own good, at least, the people who think you can sort a list in linear time by randomizing the list, checking if it's sorted, and if it's not, destroying the world.

Maybe it's stupid and evil, but what stops it from actually working?

comment by Brian_Jaress2 · 2008-09-21T07:54:25.000Z · LW(p) · GW(p)
"How many times does a coin have to come up heads before you believe the coin is fixed?"

I think your LHC question is closer to, "How many times does a coin have to come up heads before you believe a tails would destroy the world?" Which, in my opinion, makes no sense.

comment by Lightwave2 · 2008-09-21T08:08:24.000Z · LW(p) · GW(p)

I bet the terrorists would target the LHC itself, so after the terrorist attack there's nothing left to turn on.

comment by Allan_Crossman · 2008-09-21T08:54:01.000Z · LW(p) · GW(p)

Oh God I need to read Eliezer's posts more carefully, since my last comment was totally redundant.

comment by RobinHanson · 2008-09-21T09:47:54.000Z · LW(p) · GW(p)

As others have noted, it seems straightforward to use Bayes' rule to decide how much to believe that LHC malfunctions were selection effects - the key question is the prior. As to the last question, even if I was confident I lived in an infinite universe and so there was always some version of me that lived somewhere, I still wouldn't want to kill off most versions of me. So all else equal I'd never want to fire the LHC if I believed doing so killed that version of me.

comment by Ben_Jones · 2008-09-21T11:59:42.000Z · LW(p) · GW(p)

Brilliant post.

I almost want it to fail a few more times so that the press latch on to this idea. Imagine journalists trying to a) understand and b) articulate the anthropic principle across many worlds. Would be hilarious.

comment by simon2 · 2008-09-21T14:56:55.000Z · LW(p) · GW(p)

Actually, failures of the LHC should never have any effect at all on our estimate of the probability that if it did not fail it would destroy Earth.

This is because the ex ante probability of failure of the LHC is independent of whether or not, if it turned on, it would destroy Earth.  A simple application of Bayes' rule.

Now, the reason you come to a wrong conclusion is not because you wrongly applied the anthropic principle, but because you failed to apply it (or applied it selectively). You realized that the probability of failure given survival is higher under the hypothesis that the LHC would destroy the Earth if it did not fail, but you didn't take into account the fact that the probability of survival is itself lower under that hypothesis (i.e. the anthropic principle).

comment by simon2 · 2008-09-21T15:15:20.000Z · LW(p) · GW(p)

To clarify, I mean failures should not lead to a change of probability away from the prior probability; of course they do result in a different probability estimate than if the LHC succeeded and we survived.

comment by James_D._Miller · 2008-09-21T15:18:51.000Z · LW(p) · GW(p)

If: (The probability that the LHC's design is flawed and because of this flaw the LHC will never work) is much, much greater than (the probability that the LHC would destroy us if it were to function properly), then regardless of how many times the LHC failed it would never be the case that we should give any significant weight to the anthropic explanation.

Similarly, if the probability that someone is deliberately sabotaging the LHC is relatively high then we should also ignore the anthropic explanation.

comment by Alejandro · 2008-09-21T15:19:23.000Z · LW(p) · GW(p)

My prior probability for the existence of a secret and powerful crackpot group willing to sabotage the LHC to prevent it from "destroying the world" is larger than my prior probability for the LHC-actually-destroying-the-world scenarios being true, so after many mechanical failures I would rather believe the first hypothesis than the second one.

comment by Allan_Crossman · 2008-09-21T17:21:51.000Z · LW(p) · GW(p)

Simon: the ex ante probability of failure of the LHC is independent of whether or not if it turned on it would destroy Earth.

But - if the LHC was Earth-fatal - the probability of observing a world in which the LHC was brought fully online would be zero.

(Applying anthropic reasoning here probably makes more sense if you assume MWI, though I suspect there are other big-world cosmologies where the logic could also work.)

comment by simon2 · 2008-09-21T17:41:05.000Z · LW(p) · GW(p)

Allan, I am of course aware of that (actually, it would probably take time, but even if the annihilation were instantaneous the argument would not be affected).

There are 4 possibilities:

  1. The LHC would destroy Earth, but it fails to operate
  2. The LHC destroys Earth
  3. The LHC would not destroy Earth, but it fails anyway
  4. The LHC works and does not destroy Earth

The fact that conditional on survival possibility 2 must not have happened has no effect on the relative probabilities of possibility 1 and possibility 3.

comment by Alexei_Turchin · 2008-09-21T17:45:38.000Z · LW(p) · GW(p)

But the destruction of the Earth in the case of the creation of a black hole or a strangelet will not be instantaneous, as in the YouTube movie.

The BH will grow slowly, but exponentially. By some assumptions it could take 27 years to eat the Earth. So we will have time to understand our mistake and to suffer from it. The main harmful effect of the BH will be its energy release, and if the BH is in the centre of the Earth, this energy will come out as violent volcanic eruptions.

Because of the exponential growth of the BH, the biggest part of the energy will be released in the last years of its existence.

It means that in the first years we might not even notice that a BH has been created.

It means that probably a BH has already been created in the previous collider, RHIC, but we still do not see its manifestations.

So this anthropic principle would work only in the case of a vacuum transition.

But we should not be afraid of it, because in the Multiverse we will always survive in some worlds.

And continued failures of the LHC will prove that this way of immortality is valid and that we should not worry about existential risks at all.

comment by prase · 2008-09-21T17:51:44.000Z · LW(p) · GW(p)

"After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?"

After observing 100 failures in a row I would expect that a failure would occur after the next attempt to switch it on too. So it doesn't seem like a reliable means to prevent terrorism or an economic crash, even if the anthropic multi-world "ideology" were true.

On the other hand, if somebody were able to show that the amplitude of LHC's unexpected failure for technical reasons was significantly lower than the amplitude of terrorist-free future...

comment by Caledonian2 · 2008-09-21T18:05:37.000Z · LW(p) · GW(p)

IMHO if anthropics worked that way and if the LHC really were a world-killer, you'd find yourself in a world where we had the propensity not to build the LHC, not one where we happened not to build one due to a string of improbable coincidences.
Incorrect reasoning; every branching compatible with sentient organisms contains sentient organisms monitoring its conditions.

The organisms that are in branchings in which LHC facilities were built perceive themselves to be in such a world, no matter how improbable it is. It doesn't matter if it's quite unlikely for you to win a lottery -- if you do win a lottery, you'll eventually accumulate enough data to conclude that's precisely what's happened.

comment by prase · 2008-09-21T18:08:24.000Z · LW(p) · GW(p)

BH will grow slowly, but exponentialy. By some assumptions it could take 27 years to eat the earth. So we will have time to understand our mistake and to suffer from it.

I am curious about these assumptions. A BH with the mass of the whole Earth has a Schwarzschild radius of about 1 cm. At the start the BH would be much lighter, so it's not clear to me how this BH, sitting in the centre of the Earth, could eat anything.

comment by Robinson · 2008-09-21T18:24:09.000Z · LW(p) · GW(p)

simon,

Actually, I think it might (though I'm obviously open to correction) if you take the anthropic principle as a given (which I do not).

One thing you're missing is that there are two events here, call them A and B:

A. LHC would destroy earth
B. LHC works

So the events, which are NOT independent, should look more like:

  1. The LHC would destroy earth, and it fails to operate
  2. The LHC would destroy earth, and it works
  3. The LHC would not destroy Earth, and it fails to operate
  4. The LHC would not destroy Earth, and it works

Outcome 2 is "closer" to outcome 1. More precisely, evidence that 2 occured would increase our probability of both A and B, which would therefore decrease the probability of event 3 relative to event 1.

The fact that 2 is invisible means that we can't tell when it has happened. But there is a chance that it is happening that would increase with each subsequent failure, as Eliezer noted.

This is far from formal but I hope I'm getting the gist across.

comment by simon2 · 2008-09-21T19:21:44.000Z · LW(p) · GW(p)

Robinson, I could try to nitpick all the things wrong with your post, but it's probably better to try to guess at what is leading your intuition (and the intuition of others) astray.

Here's what I think you think:

  1. Either the laws of physics are such that the LHC would destroy the world, or not.
  2. Given our survival, it is guaranteed that the LHC failed if the universe is such that it would destroy the world, whereas if the universe is not like that, failure of the LHC is not any more likely than one would expect normally.
  3. Thus, failure of the LHC is evidence for the laws of physics being such that the LHC would destroy the world.

This line of argument fails because when you condition on survival, you need to take into account the different probabilities of survival given the different possibilities for the laws of the universe. As an analogy, imagine a quantum suicide apparatus. The apparatus has a 1/2 chance of killing you each time you run it and you run it 1000 times. But, while the apparatus is very reliable, it has a one in a googol chance of being broken in such a way that every time it will be guaranteed not to kill you, but appear to have operated successfully and by chance not killed you. Then, if you survive running it 1000 times, the chance of it being broken in that way is over a googol squared times more likely than the chance of it having operated successfully.
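
A quick check of the arithmetic in that analogy, using the stated numbers (a 1/2 kill chance per run, 1000 runs, and a one-in-a-googol prior on the apparatus being broken):

```python
from math import log10

p_broken = 1e-100                    # prior probability the apparatus is broken
p_survive_if_working = 0.5 ** 1000   # chance a working apparatus spares you 1000 times
p_survive_if_broken = 1.0            # a broken apparatus never kills

# Posterior odds (broken : working) after surviving all 1000 runs, in orders of magnitude
log10_odds = (log10(p_broken) + log10(p_survive_if_broken)) \
             - (log10(1 - p_broken) + log10(p_survive_if_working))
print(round(log10_odds))   # ~201: broken is ~10^201 times more likely, over a googol squared
```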

Here's what that means for improving intuition: one should feel surprised at surviving a quantum suicide experiment, instead of thinking "well, of course I would experience survival".

Finally a note about the anthropic principle: it is simply the application of normal probability theory to situations where there are observer selection effects, not a special separate rule.

comment by Jay_Levitt · 2008-09-21T19:37:46.000Z · LW(p) · GW(p)

I'm with Brian Jaress, who said, 'I think your LHC question is closer to, "How many times does a coin have to come up heads before you believe a tails would destroy the world?"' OTOH, I have a very poor head for probabilities, Bayesian or otherwise, and in fact the Monty Hall thing still makes my brain hurt. So really, I make a lousy "me too" here.

That said: Could someone explain why repeated mechanical failures of the LHC should in any way imply the likelihood of it destroying the world, thus invoking the anthropic principle? Given the crowd, I'm assuming there's more to it than "OMG technology is scary and it doesn't even work right!", but I'm not seeing it.

comment by Benya_Fallenstein (Benja_Fallenstein) · 2008-09-21T21:11:10.000Z · LW(p) · GW(p)

Okay, it scares me when I realize that I've been getting probability theory wrong, even though I seemed to be on perfectly firm ground. But I'm finding that it's even more scary that even our hosts and most commenters here seem to be getting it backwards -- at least Robin; given that the last question in the post seems so obviously wrong for the reasons pointed out already, I'm starting to wonder whether the post is meant as a test of reasoning about probabilities, leading up to a post about how Nature Does Not Grade You On A Curve (grumble :)). Thanks to simon for pointing out the flaw -- I didn't see it myself.

Since simon's explanation is apparently failing to convince most other people here, let me try my own:

As Robinson points out, there are two underlying events. (A): The laws of physics either mean that a working LHC would destroy the world, or that it wouldn't; let p_destroyer denote our subjective prior probability that it would destroy the world. (B): Either something random happens that prevents the LHC from working, or it doesn't. There is an objective Born probability here that a randomly chosen Everett branch of future Earth at date X will have had a string of failures that kept the LHC from working. We should really consider a subjective probability distribution over these objective probabilities, but let us just consider the resulting subjective probability that a randomly chosen Everett branch will not have had a string of failures preventing LHC from working -- call it p_works.

Now, at date X, in a randomly chosen Everett branch, there are four possibilities:

  1. The LHC would destroy Earth, and it fails to operate; p = p_destroyer * (1 - p_works).
  2. The LHC would destroy Earth, and it works. p = p_destroyer * p_works.
  3. The LHC would not destroy Earth, and it fails to operate. p = (1 - p_destroyer) * (1 - p_works).
  4. The LHC would not destroy Earth, and it works. p = (1 - p_destroyer) * p_works.

Now, we cannot directly observe whether the LHC would destroy Earth if turned on; what we actually can "observe" in a randomly chosen Everett branch at date X is which of the following three events is true:

i. The LHC is turned on and working fine. (Aka "case 4")
ii. The LHC is not turned on, because there has been a string of random failures. (Aka "case 1 OR case 3")
iii. Earth is gone. (Aka "case 2")

Of course, in case iii aka 2, we are not actually around to observe -- thus the scare quotes around "observe."

simon's argument is that if we observe case ii aka "1 OR 3" aka "a string of random failures has prevented the LHC from working up to date X", then our posterior probability of "The LHC would destroy Earth if turned on" is equal to our prior probability of that proposition (i.e., to p_destroyer):

p(case 1 OR case 3) = p(case 1) + p(case 3) = p_destroyer * (1 - p_works) + (1 - p_destroyer) * (1 - p_works) = 1 - p_works

p(case 1 | the LHC would destroy Earth) = p(the LHC would destroy Earth AND it fails to operate | the LHC would destroy Earth) = 1 - p_works

p(case 3 | the LHC would destroy Earth) = p(the LHC would NOT destroy Earth AND it fails to operate | the LHC WOULD destroy Earth) = 0

p(case 1 OR case 3 | the LHC would destroy Earth) = p(case 1 | the LHC would destroy Earth) + p(case 3 | the LHC would destroy Earth) = (1 - p_works + 0) = 1 - p_works

p(the LHC would destroy Earth | case 1 OR case 3) = p(case 1 OR case 3 | the LHC would destroy Earth) * p(the LHC would destroy Earth) / p(case 1 OR case 3) = (1 - p_works) * p_destroyer / (1 - p_works) = p_destroyer

  • Benja
comment by Benya_Fallenstein (Benja_Fallenstein) · 2008-09-21T21:28:47.000Z · LW(p) · GW(p)

The intuition behind the math: If the LHC would not destroy the world, then on date X, a very small number of Everett branches of Earth have the LHC non-working due to a string of random failures, and most Everett branches have the LHC happily chugging ahead. If the LHC would destroy the world, a very small number of Everett branches of Earth have the LHC non-working due to a string of random failures -- and most Everett branches have Earth munched up into a black hole.

The very small number of Everett branches that have the LHC non-working due to a string of random failures is the same in both cases.

Thus, if all you know is that you are in an Everett branch in which the LHC is non-working due to a string of random failures, you have no information about whether the other Everett branches have the LHC happily chugging ahead, or dead.
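
A small numeric check of the argument in the two comments above, with made-up values for p_destroyer and p_works (the conclusion doesn't depend on the particular numbers chosen):

```python
p_destroyer = 1e-9      # hypothetical prior that a working LHC destroys the world
p_works = 0.99          # hypothetical chance a given branch has no disabling string of failures

p_case1 = p_destroyer * (1 - p_works)          # would destroy Earth, but failed to operate
p_case3 = (1 - p_destroyer) * (1 - p_works)    # would not destroy Earth, but failed to operate

# Posterior that the LHC is a world-destroyer, given only that we see a string of failures
posterior = p_case1 / (p_case1 + p_case3)
print(posterior, p_destroyer)                  # equal: the failures alone carry no information
```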

comment by simon2 · 2008-09-21T22:05:39.000Z · LW(p) · GW(p)

I'm going to try another explanation that I hope isn't too redundant with Benja's.

Consider the events

W = The LHC would destroy Earth
F = the LHC fails to operate
S = we survive (= F OR not W)

We want to know P(W|F) or P(W|F,S), so let's apply Bayes.

First thing to note is that since F => S, we have P(W|F) = P(W|F,S), so we can just work out P(W|F)

Bayes:

P(W|F) = P(F|W)P(W)/P(F)

Note that none of these probabilities are conditional on survival. So unless in the absence of any selection effects the probability of failure still depends on whether the LHC would destroy Earth, P(F|W) = P(F), and thus P(W|F) = P(W).

(I suppose one could argue that a failure could be caused by a new law of physics that would also lead the LHC to destroy the Earth, but that isn't what is being argued here - at least so I think; my apologies to anyone who is arguing that)

In effect what Eliezer and many commenters are doing is substituting P(F|W,S) for P(F|W). These probabilities are not the same and so this substitution is illegitimate.

Benja, I also think of it that way intuitively. I would like to add though that it doesn't really matter whether you have branches or just a single nondeterministic world - Bayes' theorem applies the same either way.

comment by Nick_Tarleton · 2008-09-21T22:09:05.000Z · LW(p) · GW(p)

Benja: Good explanation! Intuitively, it seems to me that your argument holds if there are Tegmark IV branches with different physical laws, but not if whether the LHC would destroy Earth is fixed across the entire multiverse. (Only in the latter case, if it would destroy the Earth, the objective frequency of observations of failure - among observations, period - would be 1.)

comment by Allan_Crossman · 2008-09-21T22:44:56.000Z · LW(p) · GW(p)

Benja, I'm not really smart enough to parse the maths, but I can comment on the intuition:

The very small number of Everett branches that have the LHC non-working due to a string of random failures is the same in both cases [of LHC dangerous vs. LHC safe]

I see that, but if the LHC is dangerous then you can only find yourself in the world where lots of failures have occurred, but if the LHC is safe, it's extremely unlikely that you'll find yourself in such a world.

Thus, if all you know is that you are in an Everett branch in which the LHC is non-working due to a string of random failures, you have no information about whether the other Everett branches have the LHC happily chugging ahead, or dead.

The intuition on my side is that, if you consider yourself a random observer, it's amazing that you should find yourself in one of the extremely few worlds where the LHC keeps failing, unless the LHC is dangerous, in which case all observers are in such a world.

(I would like to stress for posterity that I don't believe the LHC is dangerous.)

comment by Richard_Hollerith2 · 2008-09-21T22:58:12.000Z · LW(p) · GW(p)

Simon's last comment is well said, and I agree with everything in it. Good job, Simon and Benja.

Although the trickiest question was answered by Simon and Benja, Eliezer asked a couple of other questions, and Yvain gave a correct and very clear answer to the final question.

Or so it seems to me.

comment by Caledonian2 · 2008-09-21T23:05:29.000Z · LW(p) · GW(p)

Here's what that means for improving intuition: one should feel surprised at surviving a quantum suicide experiment, instead of thinking "well, of course I would experience survival".
You can (and should) be surprised that the device failed. You should not be surprised that you survived -- it's the only way you can feel anything at all.

You always survive.

comment by Allan_Crossman · 2008-09-21T23:25:49.000Z · LW(p) · GW(p)

Simon: As I say above, I'm out of my league when it comes to actual probabilities and maths, but:

P(W|F) = P(F|W)P(W)/P(F)

Note that none of these probabilities are conditional on survival.

Is that correct? If the LHC is dangerous and MWI is true, then the probability of observing failure is 1, since that's the only thing that gets observed.

An analogy I would give is:

You're created by God, who tells you that he has just created 10 people who are each in a red room, and depending on a coin flip God made, either 0 or 10,000,000 people who are each in a blue room. You are one of these people. You turn the lights on and see that you're one of the 10 people in a red room. Don't you immediately conclude that there are almost certainly only 10 people, with nobody in a blue room?

The red rooms represent Everett worlds where the LHC miraculously and repeatedly fails. The blue rooms represent Everett worlds where the LHC works. God's coin flip is whether or not the LHC is dangerous.

i.e. You conclude that there are no people in worlds where the LHC works (blue rooms), because they're all dead. The reasoning still works even if the coin is biased, as long as it's not too biased.
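
A quick check of the analogy's arithmetic, assuming a fair coin and that you treat yourself as a random sample from everyone God created:

```python
n_red = 10
n_blue = 10_000_000

p_red_if_no_blue = 1.0                      # heads: only the 10 red-room people exist
p_red_if_blue = n_red / (n_red + n_blue)    # tails: you could have been any of 10,000,010 people

prior = 0.5                                 # fair coin
posterior_no_blue = (prior * p_red_if_no_blue) / (
    prior * p_red_if_no_blue + prior * p_red_if_blue)
print(posterior_no_blue)                    # ~0.999999: almost certainly nobody is in a blue room
```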

comment by Gordon_Rae · 2008-09-21T23:31:41.000Z · LW(p) · GW(p)

If you're conducting an experiment to test a hypothesis, the first thing you have to do is set up the apparatus. If you don't set up the apparatus so it produces data, you haven't tested anything. Just like if you try to take a urine sample, and the subject can't pee. The experiment has failed to produce data, which is not the same as the data failing to prove the hypothesis.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-09-21T23:51:28.000Z · LW(p) · GW(p)
First thing to note is that since F => S, we have P(W|F) = P(W|F,S), so we can just work out P(W|F)

With respect for your diligent effort and argument, nonetheless: Fail.

F => S -!-> P(X|F) = P(X|F,S)

In effect what Eliezer and many commenters are doing is substituting P(F|W,S) for P(F|W). These probabilities are not the same and so this substitution is illegitimate.

(Had your argument above been correct, the probabilities would have been the same.)

Conditioning on survival, or more precisely, the (continued?) existence of "observers", is just what anthropic reasoning is all about. Hence the controversy about anthropic reasoning.

To understand the final question in the post, suppose that you hooked yourself up to a machine that would instantly and painlessly kill you if a quantum coin came up tails. After one hundred heads, wouldn't you start to believe in the Quantum Theory of Immortality? But if so, wouldn't you be tempted to use it to win the lottery? ...that's where the question comes from, anyway - never mind the question of what exactly is believed.

See also: Outcome Pump

comment by Richard_Hollerith2 · 2008-09-21T23:59:09.000Z · LW(p) · GW(p)

I retract my endorsement of Simon's last comment. Simon writes that S == (F or not W). False: S ==> (F or not W), but the converse does not hold (because even if F or not W, we could all be killed by, e.g., a giant comet). Moreover, Simon writes that F ==> S. False (for the same reason). Finally, Simon writes, "Note that none of these probabilities are conditional on survival," and concludes from that that there are no selection effects. But the fact that a true equation does not contain any explicit reference to S does not mean that any of the propositions mentioned in the equation are independent or conditionally independent of S. In other words, we have established neither P(W|F) == P(W|F,S) nor P(F|W) == P(F|W,S) nor P(W) == P(W|S) nor P(F) == P(F|S), which makes me wonder how we can conclude the absence of an observational selection effect.

comment by Benya_Fallenstein (Benja_Fallenstein) · 2008-09-22T00:13:05.000Z · LW(p) · GW(p)

simon, that's right, of course. The reason I'm dragging branches into it is that for the (strong) anthropic principle to apply, we would need some kind of branching -- but in this case, the principle doesn't apply [unless you and I are both wrong], and the math works the same with or without branching.

Eliezer, huh? Surely if F => S, then F is the same event as (F /\ S). So P(X | F) = P(X | F, S). Unless P(X | F, S) means something different from P(X | F and S)?

Allan, you are right that if the LHC would destroy the world, and you're a surviving observer, you will find yourself in a branch where LHC has failed, and that if the LHC would not destroy the world and you're a surviving observer, this is much less likely. But contrary to mostly everybody's naive intuition, it doesn't follow that if you're a suriving observer, LHC has probably failed.

Suppose that out of 1000 women who participate in routine screening, 10 have breast cancer. Suppose that out of 10 women who have breast cancer, 9 have positive mammographies. Suppose that out of 990 women who do not have breast cancer, 81 have a positive mammography.

If you do have breast cancer, getting a positive mammography isn't very surprising (90% probability). If you do not have breast cancer, getting a positive mammography is quite surprising (less than 10% probability).

But suppose that all you know is that you've got a positive mammography. Should you assume that you have breast cancer? Well, out of 90 women who get a positive mammography, 9 have breast cancer (10%). 81 do not have breast cancer (90%). So after getting a positive mammography, the probability that you have breast cancer is 10%...

...which is the same as before taking the test.

comment by simon2 · 2008-09-22T00:19:00.000Z · LW(p) · GW(p)

While I'm happy to have had the confidence of Richard, I thought my last comment could use a little improvement.

What we want to know is P(W|F,S)

As I pointed out, F => S, so P(W|F,S) = P(W|F)

We can legitimately calculate P(W|F,S) in at least two ways:

1. P(W|F,S) = P(W|F) = P(F|W)P(W)/P(F) <- the easy way

2. P(W|F,S) = P(F|W,S)P(W|S)/P(F|S) <- harder, but still works

there are also ways you can get it wrong, such as:

3. P(W|F,S) != P(F|W,S)P(W)/P(F) <- what I said other people were doing last post

4. P(W|F,S) != P(F|W,S)P(W)/P(F|S) <- what other people are probably actually doing

In my first comment in this thread, I said it was a simple application of Bayes' rule (method 1) but then said that Eliezer's failure was not to apply the anthropic principle enough (ie I told him to update from method 4 to method 2). Sorry if anyone was confused by that or by subsequent posts where I did not make that clear.

Allan: your intuition is wrong here too. Notice that if Zeus were to have independently created a zillion people in a green room, it would change your estimate of the probability, despite being completely unrelated.

Eliezer: F => S -!-> P(X|F) = P(X|F,S)

All right, give me an example.

And yeah, anthropic reasoning is all about conditioning on survival, but you have to do it consistently. Conditioning on survival in some terms but not others = fail.

Richard: your first criticism has too low an effect on the probability to be significant. I was of course aware that humanity could be wiped out in other ways but incorrectly assumed that commenters here would be smart enough to understand that it was a justifiable simplification. The second is wrong: the probabilities without conditioning on S are "God's eye view" probabilities, and really are independent of selection effects.

comment by Benya_Fallenstein (Benja_Fallenstein) · 2008-09-22T00:27:00.000Z · LW(p) · GW(p)

Allan, oh **, the elementary math in my previous comment is completely wrong. (In the scenario I gave, the probability that you have breast cancer is 1%, not 10%, before taking the test.) My argument doesn't even approximately work as given: if having breast cancer makes it more likely that you get a positive mammography, then indeed getting a positive mammography must make it more likely that you have breast cancer. Sorry!

(I'm still convinced that my argument re the LHC is correct, but I realize that I'm just looking stupid right now, so I'll just shut up for now :-))

comment by simon2 · 2008-09-22T00:39:00.000Z · LW(p) · GW(p)

Sorry Richard, well of course they aren't necessarily independent. I wasn't quite sure what you were criticising. But I pointed out already that, for example, a new physical law might in principle both cause the LHC to fail and cause it to destroy the world if it did not fail. But I pointed out that this was not what people were arguing, and assuming that such a relation is not the case then the failure of the LHC provides no information about the chance that a success would destroy the world. (And a small relation would lead to a small amount of information, etc.)

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-09-22T00:45:00.000Z · LW(p) · GW(p)

Oops, I fail! I thought F >= S meant "F is larger than S". But looking at the definitions of terms, Fail >= Survival must mean "Fail subset_of Survival". (I do protest that this is an odd symbol to use.)

Okay, looking back at the original argument, and going back to definitions...

If you've got two sets of universes side-by-side, one where the LHC destroys the world, and one where it doesn't, then indeed observing a long string of failures doesn't help tell you which universe you're in. However, after a while, nearly all the observers will be concentrated into the non-dangerous universe. In other words, if you're going to start running the LHC, then, conditioning on your own survival, you are nearly certain to be in the non-dangerous universe. Then further conditioning on the long string of failures, you are equally likely to be in either universe. If you start out by conditioning on the long string of failures, then conditioning on your own survival indeed doesn't tell you anything more.

But under anthropic reasoning, the argument doesn't play out like this; the way anthropic reasoning works, particularly under the Quantum Suicide or Quantum Immortality versions, is something along the lines of, "You are never surprised by your own survival".

From the above, we can see that we need something like:

Initial probability of Danger: 50%
Initial probability of subjective Survival: 100%
Probability of Failure given Danger and Survival: 100%
Probability of Failure given ~Danger and Survival: 1%
Probability of Danger given Survival and Failure: ~99%
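
A minimal sketch of the update these numbers imply, under the quantum-immortality reading on which subjective survival is certain and therefore leaves P(Danger) at its prior:

```python
# Numbers from the table above.
p_danger         = 0.5    # prior P(Danger)
p_fail_if_danger = 1.0    # P(Failure | Danger, Survival)
p_fail_if_safe   = 0.01   # P(Failure | ~Danger, Survival)

# Survival is subjectively certain, so it carries no information:
p_danger_given_s = p_danger

p_fail_given_s = (p_fail_if_danger * p_danger_given_s
                  + p_fail_if_safe * (1 - p_danger_given_s))
p_danger_given_s_and_f = p_fail_if_danger * p_danger_given_s / p_fail_given_s

print(round(p_danger_given_s_and_f, 3))   # ~0.99
```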


So to comment through Simon's logic vs. anthropic logic step by step:

First thing to note is that since F => S, we have P(W|F) = P(W|F,S), so we can just work out P(W|F)

still holds technically true

Bayes:

P(W|F) = P(F|W)P(W)/P(F)

Still technically true; but once you condition on survival, as anthropics does in effect require, then P(Fail|Danger) is very high.

Note that none of these probabilities are conditional on survival. So unless in the absence of any selection effects the probability of failure still depends on whether the LHC would destroy Earth, P(F|W) = P(F), and thus P(W|F) = P(W).

Here we depart from anthropic reasoning. As you might expect, quantum suicide says that P(Fail|Danger) != P(Fail). That's the whole point of raising the possibility of, "given that the LHC might destroy the world, how unusual that it seems to have failed 50 times in a row"

In effect what Eliezer and many commenters are doing is substituting P(F|W,S) for P(F|W). These probabilities are not the same and so this substitution is illegitimate.

...but as stated originally, conditioning on the existence of "observers" is what anthropics is all about. It's not that we're substituting, but just that all our calculations were conditioned on survival in the first place.

comment by simon2 · 2008-09-22T00:55:00.000Z · LW(p) · GW(p)

Eliezer, I used "=>" (intending logical implication), not ">=".

I would suggest you read my post above on this second page, and see if that changes your mind.

Also, in a previous post in this thread I argued that one should be surprised by externally improbable survival, at least in the sense that it should make one increase the probability assigned to alternative explanations of the world that do not make survival so unlikely.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-09-22T01:14:00.000Z · LW(p) · GW(p)
Eliezer, I used "=>" (intending logical implication), not ">=".

Zis would seem to explain it.

(I use -> to indicate logical implication and => to indicate a step in a proof, or otherwise implication outside the formal system - I do understand this to be conventional.)

I would suggest you read my post above on this second page, and see if that changes your mind.

Not particularly. I use 4 but with P(W|S) = P(W) which renders it valid. (We're not talking about two side-by-side universes, but about prior probabilities on physical law plus a presumption of survival.)

Also, in a previous post in this thread I argued that one should be surprised by externally improbable survival, at least in the sense that it should make one increase the probability assigned to alternative explanations of the world that do not make survival so unlikely.

This could only reflect uncertainty that anthropic reasoning was valid. If you were certain anthropic reasoning were valid (I'm sure not!) then you would make no such update. In practice, after surviving a few hundred rounds of quantum suicide, would further survivals really seem to call for alternative explanations?

comment by Nominull2 · 2008-09-22T02:18:00.000Z · LW(p) · GW(p)

After surviving a few hundred rounds of quantum suicide the next round will probably kill you.

Are you familiar with the story of the man who got the winning horse race picks in the mail the day before the race was run? Six times in a row his mysterious benefactor was right, even correctly calling a victory for a horse with forty-to-one odds. Now he gets an envelope in the mail from the same mysterious benefactor asking for $1,000 in exchange for the next week's picks. Are you saying he should take the deal and clean up?

comment by simon2 · 2008-09-22T02:32:00.000Z · LW(p) · GW(p)

Not particularly. I use 4 but with P(W|S) = P(W) which renders it valid. (We're not talking about two side-by-side universes, but about prior probabilities on physical law plus a presumption of survival.)

You mean you use method 2. Except you don't, or you would come to the same conclusion that I do. Are you claiming that P(W|S)= P(W)? Ok, I suspect you may be applying Nick Bostrom's version of observer selection: hold the probability of each possible version of the universe fixed independent of the number of observers, then divide that probability equally amongst the observers. Well, that approach is BS whenever the number of observers differs between possible universes, since if you imagine aliens in the universe but causally separate, the probabilities would depend on their existence.

Also, does it really make sense to you, intuitively, that you should get a different result given two actually existing universes compared to two possible universes?

This could only reflect uncertainty that anthropic reasoning was valid. If you were certain anthropic reasoning were valid (I'm sure not!) then you would make no such update. In practice, after surviving a few hundred rounds of quantum suicide, would further survivals really seem to call for alternative explanations?

As I pointed out earlier, if there was even a tiny chance of the machine being broken in such a way as to appear to be working, that probability would dominate sooner or later.

One last thing: if you really believe that annihilational events are irrelevant, please do not produce any GAIs until you come to your senses.

comment by simon2 · 2008-09-22T02:42:00.000Z · LW(p) · GW(p)

Whoops, I didn't notice that you did specifically claim that P(W|S)=P(W).

Do you arrive at this incorrect claim via Bostrom's approach, or another one?

comment by RobinHanson · 2008-09-22T02:45:00.000Z · LW(p) · GW(p)

This is a subject I've long been meaning to give some thought to, but at the moment I'm pretty swamped - hope to get back to it when I have more time.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-09-22T02:55:00.000Z · LW(p) · GW(p)

Simon, pretty much Bostrom's approach. Self-Sampling without Self-Indication. I know it's wrong but I don't have any better approach to take.

comment by simon2 · 2008-09-22T03:15:00.000Z · LW(p) · GW(p)

Why do you reject self-indication? As far as I can recall the only argument Bostrom gave against it was that he found it unintuitive that universes with many observers should be more likely, with absolutely no justification as to why one would expect that intuition to reflect reality. That's a very poor argument considering the severe problems you get without it.

I suppose you might be worried about universes with many unmangled worlds being made more likely, but I don't see what makes that bullet so hard to bite either.

comment by Nominull2 · 2008-09-22T03:23:00.000Z · LW(p) · GW(p)

Wasn't one of the conclusions we arrived at in the quantum mechanics sequence that "observer" was a nonsense, mystical word?

comment by simon2 · 2008-09-22T03:47:00.000Z · LW(p) · GW(p)

I might add, for the benefit of others, that self-sampling forbids playing favourites among which observers to believe that you are in a single universe (beyond what is actually justified by the evidence available), and self-indication forbids the same across possible universes.

Nominull: It's a bad habit of some people to say that reality depends on, or is relative to observers in some way. But even though observers are not a special part of reality, we are observers and the data about the universe that we have is the experience of observers, not an outside view of the universe. So long as each universe has no more than one observer with your experience, you can take your experience as objective evidence that you live in a universe with one such observer instead of zero (and with this evidence to work with, you don't need to talk about observers). But it's difficult to avoid talking about observers when a universe might have multiple observers with the same subjective experience.

comment by Richard_Hollerith · 2008-09-22T04:49:00.000Z · LW(p) · GW(p)
in a previous [comment] in this thread I argued that one should be surprised by externally improbable survival, at least in the sense that it should make one increase the probability assigned to alternative explanations of the world that do not make survival so unlikely.

Simon, I think that the previous comment you refer to was the smartest thing anyone has said in this comment section. Instead of continuing to point out the things you got right, I hope you do not mind if I point out something you got wrong, namely,

Richard: your first criticism has too low an effect on the probability to be significant. I was of course aware that humanity could be wiped out in other ways but incorrectly assumed that commenters here would be smart enough to understand that it was a justifiable simplification.

It is not a justifiable simplification. A satisfactory answer to the question you were trying to answer should remain satisfactory even if other existential risks (e.g., a giant comet) are high. If other existential risks were high, would you just throw up your hands and say that the question you were trying to answer is unanswerable?

Again, I think your contributions to this comment thread were better than anyone else's. I hope you continue to contribute here.

comment by Allan_Crossman · 2008-09-22T05:13:00.000Z · LW(p) · GW(p)

Allan: your intuition is wrong here too. Notice that if Zeus were to have independently created a zillion people in a green room, it would change your estimate of the probability, despite being completely unrelated.

I don't see how, unless you're told you could also be one of those people.

comment by Allan_Crossman · 2008-09-22T05:27:00.000Z · LW(p) · GW(p)

Benja: Allan, you are right that if the LHC would destroy the world, and you're a surviving observer, you will find yourself in a branch where LHC has failed, and that if the LHC would not destroy the world and you're a surviving observer, this is much less likely. But contrary to mostly everybody's naive intuition, it doesn't follow that if you're a surviving observer, LHC has probably failed.

I don't believe that's what I've been saying; the question is whether the LHC failing is evidence for the LHC being dangerous, not whether surviving is evidence for the LHC having failed.

comment by simon2 · 2008-09-22T06:34:00.000Z · LW(p) · GW(p)

Richard, obviously if F does not imply S due to other dangers, then one must use method 2:

P(W|F,S) = P(F|W,S)P(W|S)/P(F|S)

Let's do the math.

A comet is going to annihilate us with a probability of (1-x) (outside view) if the LHC would not destroy the Earth, but if the LHC would destroy the Earth, the probability is (1-y). (I put this change in so that it would actually have an effect on the final probability.)
The LHC has an outside-view probability of failure of z, whether or not W is true.
The universe has a prior probability w of being such that the LHC, if it does not fail, will annihilate us.

Then:
P(F|W,S) = 1
P(F|S) = (ywz+x(1-w)z)/(ywz+x(1-w)z+x(1-w)(1-z))
P(W|S) = (ywz)/(ywz+x(1-w)z+x(1-w)(1-z))

so, P(W|F,S) = ywz/(ywz+x(1-w)z) = yw/(yw+x(1-w))

I leave it as an exercise to the reader to show that there is no change in P(W|F,S) if the chance of the comet hitting depends on whether or not the LHC fails (only the relative probability of outcomes given failure matters).
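
For anyone who wants to check the algebra, here is a sketch that builds that table of elementary outcomes over (W, LHC failure, comet survival) with illustrative values for w, x, y, z, and compares the result with the closed form above:

```python
from itertools import product

w, x, y, z = 0.3, 0.8, 0.6, 0.1   # illustrative values only

joint = {}
for W, F, comet_ok in product([True, False], repeat=3):
    p = (w if W else 1 - w)
    p *= (z if F else 1 - z)
    p_survive_comet = y if W else x
    p *= p_survive_comet if comet_ok else (1 - p_survive_comet)
    joint[(W, F, comet_ok)] = p

# Survival requires surviving the comet AND (LHC failed or LHC harmless).
def S(W, F, comet_ok):
    return comet_ok and (F or not W)

def prob(pred):
    return sum(p for o, p in joint.items() if pred(*o))

p_W_given_FS = (prob(lambda W, F, c: W and F and S(W, F, c))
                / prob(lambda W, F, c: F and S(W, F, c)))
closed_form = y * w / (y * w + x * (1 - w))

print(p_W_given_FS, closed_form)   # both ~0.243; equal to w whenever x == y
```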

Really though Richard, you should not have assumed in the first place that I was not capable of doing the math. In the future, don't expect me to bother with a demonstration.

Allan: you're right, I should have thought that through more carefully. It doesn't make your interpretation correct though...

I have really already spent much more time here today than I should have...

comment by simon2 · 2008-09-22T06:39:00.000Z · LW(p) · GW(p)

Err... I actually did the math a silly way, by writing out a table of elementary outcomes... not that that's silly itself, but it's silly to get input from the table to apply to Bayes' theorem instead of just reading off the answer. Not that it's incorrect of course.

comment by simon2 · 2008-09-22T06:46:00.000Z · LW(p) · GW(p)

And by elementary I mean the 8 different ways W, F, and the comet hit/non hit can turn out.

comment by Benya_Fallenstein (Benja_Fallenstein) · 2008-09-22T12:59:00.000Z · LW(p) · GW(p)

Allan: I don't believe that's what I've been saying; the question is whether the LHC failing is evidence for the LHC being dangerous, not whether surviving is evidence for the LHC having failed.

I was trying to restate in different terms the following argument for failure to be considered evidence:

The intuition on my side is that, if you consider yourself a random observer, it's amazing that you should find yourself in one of the extremely few worlds where the LHC keeps failing, unless the LHC is dangerous, in which case all observers are in such a world.

For "observer" I substituted "surviving observer," because when doing the math I find it more helpful to consider all potential observers and then say that some of them are dead and thus can't observe anything. So my "surviving observer" is the same as your "observer," right?

So I read your argument as: If the LHC is benign, and you're a random (surviving) observer, then it's amazing if (i.e., there is a low probability that) you find yourself in one of the few worlds where the LHC keeps failing. If the LHC is dangerous, and you're a random observer, then it's non-amazing if (i.e., there is a high probability that) you find yourself in a world where the LHC keeps failing. Therefore, if you're a random observer, and you find yourself in a world where the LHC keeps failing, then the LHC is probably dangerous (because then, we don't need to assume something amazing going on). Am I misunderstanding something?

If I understand you right, what I'm saying is that both the if's are clearly correct, but I believe that the 'therefore' doesn't follow.

To me, the problem is essentially the same as the following: You are one of 10,000 people who have been taken to a prison. Nobody has explained why. Every morning, the guards randomly select 9/10 of the remaining prisoners and take them away, without explanation. Among the prisoners, there are two theories: one faction thinks that the people taken away are set free. The other faction thinks that they are getting executed.

It is the fourth morning. You're still in prison. The nine other people who remained have just been taken away. Now, if the other people have been executed, then you are the only remaining observer, so if you're a random observer, it's not surprising that you should find yourself in prison. But if the other people have been set free, then they're still alive, so if you're a random observer, there is only a 1/10,000 chance that you are still in prison. Both of these statements are correct if you are a random (surviving) observer. But it doesn't follow that you should conclude that the other people are getting shot, does it? (Clearly you learned nothing about that, because whether or not they get shot does not affect anything you're able to observe.)

Now, I get that you probably think something makes this line of reasoning not apply when we consider the anthropic principle (although I do think that you're wrong then :)). But my point is that, unless I'm missing something, the probabilistic reasoning is the same as in my restatement of your argument, so if the laws of probability don't make the conclusion follow in this scenario, they don't make the conclusion follow in your argument, either.

I should say that I don't reject "the" anthropic principle. I wholeheartedly embrace the version of it that I can derive from the kind of reasoning as above. For example: If our theory of evolution seems to suggest that there is one very improbable step in the evolution of intelligent life -- so improbable that it's not likely to have happened even a single time in the history of the universe -- should we then take that as a reason to conclude that something is wrong with our theory? If we are pretty sure that there is only a single universe, yes. If we have independent evidence that all possible Everett branches exist, no. (If something like mangled worlds is true, maybe -- but let's not get into that now...)

Why should we reject our theory in a single universe, but not if all Everett branches exist? Consider again the prison analogy. You observed how the guards chose the prisoners to take away, and it sure looked random. But now you are the only surviving prisoner. Should you conclude that the guards' selection process wasn't really random? There's no reason to: If the guards used a random process, one prisoner had to remain on the fourth day, and this may just as well have been you -- nothing surprising going on. This corresponds to the scenario where all possible Everett branches exist.

But suppose that you were the only prisoner to begin with (and you know this), and every morning the guards threw a ten-sided die which is marked "keep in prison" on one side and "take away" on the nine others -- and it came up "keep in prison" every morning. In this case, it seems to me that you do have a reason to start suspecting that the die is fixed (i.e., that your original theory, that the "keep in prison" outcome had only a 10% chance of happening, was wrong). This corresponds to the scenario where there is only a single universe.
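
One way to put numbers on the contrast between the two scenarios (a sketch that computes the likelihoods for a "you" fixed in advance; the disputed step is precisely whether the random-surviving-observer framing should change the first calculation):

```python
# Scenario 1: 10,000 prisoners; each morning 9/10 of those remaining are removed.
# Hypotheses (prior 1/2 each): the removed prisoners are RELEASED vs EXECUTED.
# The selection mechanism is identical under both hypotheses, so the chance that
# you, a prisoner designated in advance, remain after four mornings is the same:
p_remain = (1 / 10) ** 4
posterior_executed = (p_remain * 0.5) / (p_remain * 0.5 + p_remain * 0.5)
print(posterior_executed)            # 0.5 -- the likelihood ratio is 1, no update

# Scenario 2: a single prisoner and a ten-sided die.
# Hypotheses (prior 1/2 each): FAIR die (P(keep) = 0.1 per morning) vs FIXED die.
like_fair, like_fixed = 0.1 ** 4, 1.0 ** 4
posterior_fixed = (like_fixed * 0.5) / (like_fixed * 0.5 + like_fair * 0.5)
print(round(posterior_fixed, 5))     # ~0.9999 -- strong update toward "fixed"
```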

This is how I always understood the anthropic principle when reading about it, and this version of it I embrace. The other version I'm pretty sure is wrong.

That said, if you have the energy to do so, please do keep arguing with me! :-) I don't really understand this "other anthropic principle," and I'm rejecting it simply because it disagrees with my calculations and I'm really pretty sure that I'm applying my probability theory right here. If I'm wrong, that will be humbling, but I would still rather know than not know, please :-)

comment by Zubon · 2008-09-22T13:07:00.000Z · LW(p) · GW(p)

My prior probability for the existence of a secret and powerful crackpot group willing to sabotage the LHC to prevent it from "destroying the world" is larger than my prior probability for the LHC-actually-destroying-the-world scenarios being true

Alejandro has a good point.

comment by Allan_Crossman · 2008-09-22T15:33:00.000Z · LW(p) · GW(p)

Benja: But it doesn't follow that you should conclude that the other people are getting shot, does it?

I'm honestly not sure. It's not obvious to me that you shouldn't draw this conclusion if you already believe in MWI.

(Clearly you learned nothing about that, because whether or not they get shot does not affect anything you're able to observe.)

It seems like it does. If people are getting shot then you're not able to observe any decision by the guards that results in you getting taken away. (Or at least, you don't get to observe it for long - I don't think the slight time lag matters much to the argument.)

comment by Anders_Sandberg · 2008-09-22T17:19:00.000Z · LW(p) · GW(p)

I did a calculation here:
http://tinyurl.com/3rgjrl
and concluded that I would start to believe there was something to the universe-destroying scenario after about 30 clear, uncorrelated mishaps (even when taking a certain probability of foul play into account).
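
The linked numbers aren't reproduced here, but the general shape of such a calculation is easy to sketch; every figure below is an assumption chosen purely for illustration, not Anders's:

```python
def failures_needed(prior_danger, p_fail_benign, threshold=0.9):
    """Clean, uncorrelated failures needed before P(dangerous) exceeds threshold,
    assuming (anthropically) a dangerous LHC fails with probability ~1 in every
    surviving branch, while a benign one fails with p_fail_benign per attempt."""
    odds = prior_danger / (1 - prior_danger)
    target = threshold / (1 - threshold)
    n = 0
    while odds < target:
        odds /= p_fail_benign   # each observed failure multiplies the odds by 1/p
        n += 1
    return n

# E.g., a 1-in-10^30 prior and a 10% per-attempt mishap rate:
print(failures_needed(1e-30, 0.1))   # ~30 failures before "dangerous" dominates
```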

comment by Benya_Fallenstein (Benja_Fallenstein) · 2008-09-23T17:33:00.000Z · LW(p) · GW(p)

...Allan, sorry for the delay in replying. Hopefully tomorrow. (In my defense, I've spent the whole day seriously thinking about the problem ;-))

comment by RobinHanson · 2008-09-24T21:38:00.000Z · LW(p) · GW(p)

OK, I've finally had a little time to go over these comments and I am now persuaded to take the position of simon and Benja Fallenstein. I'd already decided to be a Presumptuous Philosopher and accept self-indication, and this just supports that further.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-09-24T21:58:00.000Z · LW(p) · GW(p)
To me, the problem is essentially the same as the following: You are one of 10,000 people who have been taken to a prison. Nobody has explained why. Every morning, the guards randomly select 9/10 of the remaining prisoners and take them away, without explanation. Among the prisoners, there are two theories: one faction thinks that the people taken away are set free. The other faction thinks that they are getting executed.

It is the fourth morning. You're still in prison. The nine other people who remained have just been taken away. Now, if the other people have been executed, then you are the only remaining observer, so if you're a random observer, it's not surprising that you should find yourself in prison. But if the other people have been set free, then they're still alive, so if you're a random observer, there is only a 1/10,000 chance that you are still in prison. Both of these statements are correct if you are a random (surviving) observer. But it doesn't follow that you should conclude that the other people are getting shot, does it? (Clearly you learned nothing about that, because whether or not they get shot does not affect anything you're able to observe.)

An excellently clear way of putting it!

bites bullet

comment by steven · 2008-09-24T23:37:00.000Z · LW(p) · GW(p)

I suspect that anthropics is easy to solve if you think in terms of cognitive decision theory.

comment by Benya_Fallenstein (Benja_Fallenstein) · 2008-09-25T20:41:00.000Z · LW(p) · GW(p)

Okay, after reading several of Nick Bostrom's papers and mulling about the problem for a while, I think I may have sorted out my position enough to say something interesting about it. But now I'm finding myself suffering from a case of writer's block in explaining it, so I'll try to pull a small-scale Eliezer and say it in a couple of hiccups, rather than one fell swoop :-)

I have been significantly wrong at least twice in this thread, the first time when I thought everybody was reasoning from the same definitions as me, but getting their math wrong, and the second time when I said I held my view because I was "pretty sure I [was] applying my probability theory right". I had an intuition and a formal argument, but then I found that the two disagree in some edge cases, and I decided to retain the intuition, so my formal argument was not the solid rock I thought it was. All of which is a long-winded way of saying, it's about time that I concede that I may still be wrong about this, and if so, please do help me figure it out...

We all seem to agree that the issue depends on whether we accept self-indication, and that self-indication is equivalent to being a thirder in the Sleeping Beauty problem. When I first learned about this problem from Robin's post, I was very convinced that the halfer view was right -- to the tune of having been willing to bet money on it -- for about fifteen minutes. Then I thought about something like the following variation of it:

Beauty is put to sleep on Sunday, and a fair coin is tossed. Beauty is awakened twice, once on Monday and once on Tuesday; in between, she is given an amnesia-inducing drug, so that when she wakes up, she cannot tell whether she has been woken up before. One minute after Beauty wakes up, a light flashes. If it is Tuesday, and the coin came up heads, the light is red; otherwise, it is blue.

When Beauty wakes up, before the light flashes, what is her subjective probability that (h1) the coin came up heads, and it's Monday; (h2) heads, Tuesday; (t1) tails, Monday; (t2) tails, Tuesday?

I cannot conceive of a reason not to assign the probability 1/4 to each of these propositions, and in my opinion, when Beauty sees the light flash red, she must update her subjective probability in the obvious way (or the notion of subjective probability no longer makes much sense to me). Then, of course, after seeing the light flash blue, Beauty's probability that the coin fell heads is 1/3.
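
A sketch of that update, treating the four propositions as equally likely and conditioning on the blue flash:

```python
from fractions import Fraction

# The four propositions, each with prior 1/4: (coin, day).
moments = {("heads", "Monday"):  Fraction(1, 4),
           ("heads", "Tuesday"): Fraction(1, 4),
           ("tails", "Monday"):  Fraction(1, 4),
           ("tails", "Tuesday"): Fraction(1, 4)}

# The light is red only for (heads, Tuesday); otherwise it is blue.
blue = {m: p for m, p in moments.items() if m != ("heads", "Tuesday")}

p_heads_given_blue = (sum(p for (coin, _), p in blue.items() if coin == "heads")
                      / sum(blue.values()))
print(p_heads_given_blue)   # 1/3
```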

Short of assigning special ontological status to being consciously awake, I don't see a way to distinguish between the original Sleeping Beauty and my variation after the light flashes blue, so I'm a thirder now. My new view is that observing the random variable (color=blue) can change my probability in non-mysterious ways, so observing the random variable (awake=yes) can, too.

In his paper on the problem, Nick argues for a "solution" that would apply to my version, too. He would reject my view of how Beauty must update her probabilities if she sees a blue light. His argument goes something like this:

What I really need to consider is all of Beauty's observer-moments in all possible worlds; Beauty has a prior over these moments, considers the evidence she has for which moment she is in, and does a Bayesian update. The moment when Beauty wakes up is different from the moment when the light flashes, so she needs to consider at least eight possible moments: (h1-) heads, Monday, she wakes up; (h1+) heads, Monday, the light flashes; and so on. Nothing in the axioms of probability theory requires the probability of (h1+) to be related in any way to the probability of (h1-)! In fact, Nick would argue, we should simply assign probabilities like this:

p(xx- | h1- \/ h2- \/ t1- \/ t2-) = 1/4 (for xx in {h1,h2,t1,t2})
p(h1+ | h1+ \/ t1+ \/ t2+) = 1/2
p(xx+ | h1+ \/ t1+ \/ t2+) = 1/4 (for xx in {t1,t2})

I agree that this is formally consistent with the axioms of probability, but in order for Beauty to be rational, in my opinion she must still update her probability estimate in the "normal" way when the light flashes blue. Nick's approach strikes me as saying, "I'm a completely new observer-moment now, why should I care about my probability estimates a minute ago?" If our formalism allows us to do that, I think our formalism isn't strong enough. In this case, I'd require that

p(xx- | h1- \/ h2- \/ t1- \/ t2-)
= p(xx+ | h1+ \/ h2+ \/ t1+ \/ t2+)

--i.e., before conditioning on the actual colors she sees, Beauty's probability estimates when the light flashes must be the same as when she wakes up. I don't know how well this generalizes, but if we accept it in this case, it blocks Nick's proposal.

Anybody here who finds Nick's solution intuitively right?

comment by Benya_Fallenstein (Benja_Fallenstein) · 2008-09-26T18:20:00.000Z · LW(p) · GW(p)

It may be silly to continue this here, since I'm not sure anybody's still reading, but at least I'm writing it down at all this way, so... here's "Nick's Sleeping Beauty can be Dutch Booked" (by Nick's own rules)

In his Sleeping Beauty paper, Nick considers the ordinary version of the problem: Beauty is awakened on Monday. An hour later, she is told that it is Monday. Then she is given an amnesia drug and put to sleep. A coin is flipped. If the coin comes up tails, she is awakened again on Tuesday (and can't tell the difference to Monday). Otherwise, she sleeps through to Wednesday.

Nick distinguishes five possible observer-moments: Beauty wakes up on Monday (h1 and t1, depending on heads/tails); Beauty is told that it's Monday (h1m and t1m); Beauty wakes up on Tuesday (t2). Let P-(x) := P(x | h1 \/ t1 \/ t2), and P+(x) := P(x | h1m \/ t1m).

There are two possible worlds, heads-world (h1,h1m) and tails-world (t1,t1m,t2). Within each of the groups (h1,t1,t2) and (h1m,t1m), Nick assigns equal probabilities to each observer-moment in a given possible world. This gives:

P-(h1) = 1/2; P-(t1) = 1/4; P-(t2) = 1/4
P+(h1m) = 1/2; P+(t1m) = 1/2

In his paper, Nick considers the following Dutch book, suggested by a referee (I'm quoting from the paper):

Upon awakening, on both Monday and Tuesday, before either knows what day it is, the bookie offers Beauty the following bet: Beauty gets $10 if HEADS and MONDAY. Beauty pays $20 if TAILS and MONDAY. (If TUESDAY, then no money changes hands.) On Monday, after both the bookie and Beauty have been informed that it is Monday, the bookie offers Beauty a further bet: Beauty gets $15 if TAILS. Beauty pays $15 if HEADS. If Beauty accepts these bets, she will emerge $5 poorer.

Nick dismisses this argument because if the coin falls tails, Beauty will accept the first bet twice, once on Monday and once on Tuesday. Now, on Tuesday no money changes hands, so what's the difference? Well, Nick thinks it's very interesting that it could make a difference, but clearly it does, you see, because otherwise Sleeping Beauty could be Dutch booked if she accepts his probability assignments!

Instead of trying to argue that it makes no difference, let me just exhibit a variation where Beauty only accepts every bet at most once in every possible world.

Before Beauty is put to sleep, we throw a second fair coin, labelled A and B. If it comes up A, then on Monday, we tell Beauty, "It's day A!" And if we wake her up on Tuesday, we tell her, "It's day B!" If the coin comes up B, Monday is B, and Tuesday is A.

We now have doubled the number of worlds and observer-moments. The worlds are HA, HB, TA, and TB, each with probability 1/4; the observer-moments are ha1, ha1m; hb1, hb1m; ta1, ta1m, ta2; tb1, tb1m, tb2. P- and P+ are defined analogously to before, and again, we assign equal probability to each of the awakenings in every possible world (and make them sum to the probability of that world). This gives:

P-(ha1) = P-(hb1) = 1/4
P-(ta1) = P-(ta2) = P-(tb1) = P-(tb2) = 1/8
P+(ha1m) = P+(hb1m) = P+(ta1m) = P+(tb1m) = 1/4

The sets of observer-moments that Beauty cannot distinguish are: {ha1,ta1,tb2}; {hb1,tb1,ta2}; {ha1m,ta1m}; {hb1m,tb1m}. (E.g., on {ha1,ta1,tb2}, Beauty just knows that she's been awakened and that it's "Day A." In world B, Tuesday is Day A, thus tb2 is in this set.)

Note well that in none of these sets is there more than one observer-moment from the same possible world. I exhibit the following variation of the above Dutch Book.

On {ha1,ta1,tb2}, the Bookie offers Beauty the first bet above: Beauty gets $10 if HEADS and MONDAY. Beauty pays $20 if TAILS and MONDAY. (If TUESDAY, then no money changes hands.)

On {ha1m,ta1m}, the Bookie offers the second bet: Beauty gets $15 if TAILS. Beauty pays $15 if HEADS.

Beauty now loses $5 if the day-label-coin comes up A, and breaks even if it comes up B. Every bet is accepted exactly once in every possible world in which it is offered at all. We could add symmetrical additional bets to make sure that Beauty also loses money in B worlds, but I think I've made my point. Nick can create his priors over observer-moments without violating the axioms of probability, but if it worries him if Beauty can be Dutch-booked in the way he discusses in his paper, I do believe he needs to be worried...
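
A sketch that enumerates the four worlds and adds up Beauty's winnings under the two bets as described above (the expected values at the end use Nick's conditional probabilities, to check that each bet looks fair to Beauty at the moment it is offered):

```python
from fractions import Fraction

def bet1(coin, day):            # offered at wake-up moments labelled "Day A"
    if day == "Tuesday":
        return 0                # no money changes hands on Tuesday
    return 10 if coin == "heads" else -20

def bet2(coin):                 # offered after "it's Monday" in A-labelled-Monday worlds
    return -15 if coin == "heads" else 15

totals = {}
for coin in ("heads", "tails"):
    for monday_label in ("A", "B"):
        payoff = 0
        if monday_label == "A":
            payoff += bet1(coin, "Monday")   # ha1 or ta1
            payoff += bet2(coin)             # ha1m or ta1m
        elif coin == "tails":
            payoff += bet1(coin, "Tuesday")  # tb2: "Day A" falls on Tuesday
        totals[(coin, monday_label)] = payoff

print(totals)   # HA: -5, HB: 0, TA: -5, TB: 0
print(sum(Fraction(1, 4) * v for v in totals.values()))   # expected loss: -5/2

# Each bet is fair when offered, under Nick's conditional probabilities:
ev_bet1 = Fraction(1, 2) * 10 + Fraction(1, 4) * (-20) + Fraction(1, 4) * 0
ev_bet2 = Fraction(1, 2) * (-15) + Fraction(1, 2) * 15
print(ev_bet1, ev_bet2)   # 0 and 0
```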

comment by Benya_Fallenstein (Benja_Fallenstein) · 2008-09-26T20:12:00.000Z · LW(p) · GW(p)

So if I think that (something like) the Self-Indication Assumption is correct, what about Nick's standard thought experiment in which the silly philosopher thinks she can derive the size of the cosmos from the fact she's alive?

Well, the experiment does worry me, but I'd like to note that self-sampling without self-indication produces, in fact, a very similar result (if the reference class is all conscious observers, which Nick's version of the experiment seems to assume). I give you The Presumptuous Philosopher and the Case of the Twin Stars:

Physicists have narrowed down the search for the Theory of Everything to T1 and T2, between which considerations of super-duper-symmetry are indifferent. We know that the cosmos is very big, to the tune of containing a trillion trillion galaxies, most of which are expected to contain life. But there's a twist: According to T1, all but one in a trillion galaxies should consist entirely of twin star systems. T2 does not make this prediction. Physicists are preparing to do a simple test that would decide between the theories. Enter the Presumptuous Philosopher: "Guys, our galaxy has lots of single-star systems. The conditional probability of this if T1 is true and we're a random sample from all conscious observers is only one in a trillion! Stop doing this silly experiment and do something else instead!"

If you accept this thought experiment (which requires only self-sampling) but reject a variation where T1 is ruled out because it predicts that cosmological death rays will make life impossible in all galaxies but one in a trillion (which requires self-indication), then I think you've allowed yourself to be suckered into implicitly assuming that conscious observation is something ontologically fundamental. Though I accept that you may not be convinced of this yet :-)

(Side note: Lest you be biased against the philosopher just because she dares to apply probability theory, do also consider the case where T1 predicts that Mars had a chance of 4/5 per year of flying out of the solar system since it came into existence -- and beat those odds by random chance every single time. Of course, in that case, the physicists would already be convinced that her reasoning is sound, to the tune that they would already have applied it themselves.)

comment by Benya_Fallenstein (Benja_Fallenstein) · 2008-09-26T23:14:00.000Z · LW(p) · GW(p)

In my previous comment, I mentioned my worry that accepting observer self-sampling without self-indication means that you've been suckered into taking conscious observation as an ontological primitive. (Also, I've been careful not to use examples that involve the size of the cosmos.) I would like to suggest that instead of a prior over observer-moments in possible worlds, we start with a prior over space-time-Everett locations in possible worlds. If all possible worlds we consider have the same set of space-time-Everett locations, and we have a prior P0 over possible worlds, then I suggest that we adopt the prior over (world, location) pairs:

P((w,x)) = P0(w) / number of possible locations

(Actually, that's not necessarily quite right: If the "amplitude as degree of reality" interpretation is true, Everett branches should of course be weighted in the obvious way.)

As with observer-moments, we then condition on all the evidence we have about our actual space-time-Everett location in our actual possible world, and call the result our "subjective probability" distribution.

Isn't anthropic reasoning about taking into account the observer selection effects related to the fact that we are conscious observers? Sure, but it seems to me that any non-mysterious anthropic reasoning is taken care of just fine by the conditioning step. Any possible worlds, Everett branches and cosmic regions that don't support intelligent life will automatically be ruled out, for example.

The above definition trivially implies the following weak principle of self-indication:

If all possible worlds we consider have the same set of locations, worlds that contain more locations consistent with our evidence will tend to be more likely after conditionalization. (To be precise, the probability of each world w is weighted by P0(w) * number of locations in w consistent with our evidence).

This principle is enough to support being a thirder in the Sleeping Beauty problem, for example (which was what originally suggested it to me, when I was wondering what prior Beauty should update when she observes herself to be awake).

comment by Benya_Fallenstein (Benja_Fallenstein) · 2008-09-27T09:49:00.000Z · LW(p) · GW(p)

So what if we are uncertain about the size of the universe (so that its size depends on which possible world we are in)? Then we are faced with the same question as before: Should we treat finding ourselves in bigger universes as more probable a priori, or not?

Formally, the question we face is, if we have a prior P0 over possible worlds, what should our prior over (possible world, space-time-Everett location) pairs be?

Physical self-sampling without self-indication. P((w,x)) = P0(w) / number of possible locations in world w

Physical self-sampling with physical self-indication. P((w,x)) = alpha P0(w), where alpha is a normalization constant (alpha = 1 / sum over w' of P0(w') * number of possible locations in world w')

(As before, we may want to weigh Everett branches in the obvious way.) Both of these definitions give us the weak principle of self-indication (defined in the previous comment), since they agree with the previous comment's definition when all possible worlds contain the same number of locations. So they both support thirding in Sleeping Beauty.

But which of the definitions should we adopt? Note that sampling without self-indication has the property that P(w) = P0(w), i.e., before we condition on any evidence (including the fact that we are conscious observers), the probability of finding ourselves in world w is exactly the probability of that world, according to P0. On the face of it, this sounds exactly like what we mean by having a prior P0 over the possible worlds.

I think we may mean different things with P0 depending on how we arrive at P0, though. But for the moment, let me note that while the principle of weak self-indication forces me to accept the presumptuous philosopher's position in both the Case of the Twin Stars and the Case of the Death Rays, I may still have a good reason to reject the conclusion that the cosmos is infinite with probability one.

comment by Benya_Fallenstein (Benja_Fallenstein) · 2008-09-29T11:56:00.000Z · LW(p) · GW(p)

Unfortunately, physical self-sampling without self-indication has odd consequences of its own. Consider the following thought experiment:

Physicists have conclusively figured out what the theory of everything is. We know roughly how the cosmos will behave until a trillion years into the future. However, it's still unclear what will happen at this point: either (T1) the universe will end, or (T2) the universe will continue for another trillion trillion years, but be unable to support intelligent life. A hard mathematical calculation can show which of these is true, but before doing the calculation, each theory has a 1/2 prior probability (in the same sense that before doing the calculation, you have a 1/10 subjective probability that the trillionth decimal digit of pi is a seven).

Physicists want to schedule supercomputer time to determine the answer. Enter Presumptuous: "By physical self-sampling, the probability of T2 given our observations is only about one in a trillion. This calculation is a waste of money!"

She calculates as follows. P0(T1) = P0(T2) = 1/2. According to T2, the universe contains a trillion times more space-time locations than according to T1. But according to both theories, the universe contains only one location consistent with our evidence. According to the definition given in the previous comment, this makes T2 much less likely than T1.
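
Her calculation, and the same question under self-indication, sketched with "a trillion" used loosely as the location ratio:

```python
P0 = {"T1": 0.5, "T2": 0.5}
locations  = {"T1": 1.0, "T2": 1e12}   # relative numbers of space-time locations
consistent = {"T1": 1, "T2": 1}        # locations consistent with our evidence

def posterior(self_indication):
    if self_indication:
        # P((w,x)) = alpha * P0(w): every location gets the same weight.
        weights = {w: P0[w] * consistent[w] for w in P0}
    else:
        # P((w,x)) = P0(w) / locations(w): a world's probability is split
        # evenly among its locations before conditioning.
        weights = {w: P0[w] * consistent[w] / locations[w] for w in P0}
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}

print(posterior(self_indication=False))  # T2 ~ 1e-12: Presumptuous wins
print(posterior(self_indication=True))   # 50/50: the calculation is worth scheduling
```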

Intuitively, the argument is, "According to T2, there are a trillion more places we could have found ourselves at (at most of which we would not have been conscious observers, but taking that into account would be supernatural wonder tissue). So having found ourselves at this particular place is much more surprising according to T2."

But this argument doesn't sound very convincing to me. From where do we get this supposed lottery over space-time locations? At least, the argument sounds much less intuitively convincing than the following: "Our uncertainty is mathematical, and our observations would be exactly the same according to each theory -- we can't conclude anything about the mathematical result from the fact that one would destroy the universe, while the other would only leave it barren."

In the next comment, I'll develop that intuition into a more formal argument supporting self-indication.

comment by Cameron_Taylor · 2008-09-29T19:57:00.000Z · LW(p) · GW(p)

As you know, I don't spend much time worrying about the Large Hadron Collider when I've got much larger existential-risk-fish to fry.
----
After observing empirically that the LHC had failed 100 times in a row, would you endorse a policy of keeping the LHC powered up, but trying to fire it again only in the event of, say, nuclear terrorism or a global economic crash?
----

The real question, Eliezer, is how many times the LHC would have to fail before you decide to fundamentally change the direction of your research? At some point the most profitable avenue of research in the pursuit of friendly AI would become the logistics of combining a mechanism for quantum suicide with a random number generator. Would you shut up, multiply and then invest the entirety of your research in the nuances of creating a secure, hostile-AI-preventing universe-suicide bunker? Let an RNG write the AI and save-scum yourself to friendly AI paradise!

comment by Cameron_Taylor · 2008-09-30T00:02:00.000Z · LW(p) · GW(p)

Pardon me, my question skipped far too many inferential steps for me to be comfortable that my meaning is clear. Allow me to query for the underlying premises more clearly:

* Is quantum-destroying-the-entire-universe suicide different to plain quantum-I-killed-myself-in-a-box suicide?

That is to say, does Eliezer consider it rational to optimise for the absolute tally of Everett branches or the percentage of them? In "The Bottom Line" Eliezer gives an example definition of my effectiveness as a rationalist as how well my decision optimizes the percentage of Everett branches that don't get me killed by faulty brakes.

Absurd as it may be, let us say that the LHC, or perhaps the LHSM (Large Hadron Spaghetti Monster) destroys the entire universe. If a particular Everett branch is completely obliterated by the Large Hadron Spaghetti Monster then do I still count those branches when I'm doing my percentage or do I only count the worlds where there is an actual universe there to count?

I can certainly imagine someone considering parts of the Everett tree that are not in existence in any manner at all to have been 'pruned', thinking the end result is kind of neat, and so deciding that their utility function optimizes according to the number of Everett branches that contain the desired outcome divided by the number of Everett branches among those that exist. Another could say that they simply want to maximise the absolute number of Everett branches that contain desired outcomes. The LHSM eating an entire Everett branch would be the same as any other particular event making an Everett branch undesirable.

The above is my intuitive interpretation of the core of Eliezer's parting question. Obviously, if we are optimizing for percentage of Everett branches then it is rational to create an LHSM rigged to eat the branch if it contains nuclear terrorism, global economic crashes or Eliezer accidentally unleashing the Replicators upon us all. If, however, we are optimizing by absolute desirable Everett branch count then rigging the LHSM to fire whenever the undesirable outcome occurs is merely a waste of resources.


* Are there fates worse than the universe being obliterated?

I note this question simply to acknowledge that other factors could weigh in to the answer to Eliezer's question than the significant one of whether we count not-actually-Everett branches. Perhaps Joe considers the obliteration of the universe to be an event like all others. The analogy would perhaps be to having x% of Everett branches go off in a straight line together, all rather transparent and not going any place in particular, while the remaining (100-x)% of Everett branches head off in the typical way. Joe happens to assign -100 utility to the universe being obliterated, +2 to getting a foot massage and -3,000 to being beaten by a girl. Joe would multiply the x% of the not-Everett branch by -100 and (1-x)% by -3000. It would be rational for Joe to create an LHSM that would be fired whenever he suffered the feared humiliation. That is, unless he anticipated 1,450 foot massages in return for keeping the universe intact!

It seems to me that in order for it to be rational to LHSM the universe in the event of nuclear terrorism or global economic collapse and yet not rational to use the LHSM to make a friendly AI then:

Universe obliteration must be evaluated as a standard Everett branch.
AND
Universe obliteration must be assigned a greater utility than nuclear terrorism or global economic collapse.
(This possibility is equivalent to standard quantum suicide with a caveat that tails was going to give you cancer anyway and you'd rather be dead.)

OR, ALTERNATIVELY

Universe obliteration is different from quantum suicide. Obliterated universes don't count at all in the utility function so preventing nuclear terrorism by obliterating the universe makes the average world a better place once you do the math.
AND
The complications involved in using the same LHSM to create a friendly AI are just not worth the hassle. (Or otherwise irrational for some reason that is unrelated to the rather large amount of universe obliteration that would be going on.)


Eliezer never implied an answer on whether he would fire a universe destroying LHC to prevent disaster. I wonder, if he did endorse that policy, would he also endorse using the same mechanism to further his research aim?

comment by Richard_Hollerith · 2008-10-01T13:15:00.000Z · LW(p) · GW(p)
At some point the most profitable avenue of research in the pursuit of friendly AI would become the logistics of combining a mechanism for quantum suicide with a random number generator.

Usually learning new true information increases a person's fitness, but learning about the many-worlds interpretation seems to decrease the fitness of many who learn it.

comment by Cameron_Taylor · 2008-10-02T08:01:00.000Z · LW(p) · GW(p)

Am I to assume then, Richard, that you consider destroying a branch entirely, by whatever mechanism the LHC was supposedly going to use to destroy the fabric of reality, to be exactly equivalent to a more mundane death in a box? Or did you simply reach for your cached thought regarding quantum suicide and see a chance to be rude? I've got a hunch that it's the latter, since the implication doesn't logically follow.

Dull; I was hoping you'd have something more useful to tell me. The implications of whatever the LHC could supposedly do, and in particular why anyone would ever choose blowing up the universe in preference to a couple of nukes going off, were intriguing.


Incidentally, whatever makes you claim that 'new information increases a person's fitness'? Education notoriously reduces the rate of breeding in humans. I also haven't found many people who actually apply their information about evolution and make it their full time occupation to find places through which they can donate sperm.

comment by Richard_Hollerith · 2008-10-02T11:35:00.000Z · LW(p) · GW(p)

OK, my previous comment was too rude. I won't do it again, OK?

Rather than answer your question about fitness, let me take back what I said and start over. I think you and I have different terminal values.

I am going to assume -- and please correct me if I am wrong -- that you assign an Everett branch in which you painlessly wink out of existence a value of zero (neither desirable nor undesirable) and that consequently, under certain circumstances (e.g., at least one alternative Everett branch remains in which you survive) you would prefer painlessly winking out of existence to enduring pain.

My objection to this talk of destroying the universe in response to a terrorism incident, etc, is that the people whose terminal values are served by that outcome (such as, I am assuming, you) share the universe with people whose terminal values assign a negative value to that outcome (such as me). By using this method of increasing your utility you impose severe negative utility on me.

Note that if you engage in ordinary quantum suicide then my circumstances remain materially the same in both Everett branches, and the objection I just described does not apply.

comment by Benya_Fallenstein (Benja_Fallenstein) · 2008-10-02T12:09:00.000Z · LW(p) · GW(p)

Richard, I am going to assume ... that you assign an Everett branch in which you painless wink out of existence a value of zero (neither desirable or undesirable)

I'd rather say that people who find quantum suicide desirable have a utility function that does not decompose into a linear combination of individual utility functions for their individual Everett branches-- even if they had to deal with a terrorist attack on all of these branches, say. Surely everybody here would find an outcome undesirable where all of their future Everett branches wink out of existence. So if somebody prefers one Everett branch winking out and one continuing to exist to both continuing to exist, you can only describe their utility function by looking at all the branches, not by looking at the different branches individually. (Did that make sense?)

comment by Benya_Fallenstein (Benja_Fallenstein) · 2008-10-02T12:15:00.000Z · LW(p) · GW(p)

Gawk! "even if they had to deal with a terrorist attack on all of these branches, say" was supposed to come after "Surely everybody here would find an outcome undesirable where all of their future Everett branches wink out of existence." (The bane of computers. On a typewriter, this would not have happened.)

comment by Richard_Hollerith · 2008-10-02T15:43:00.000Z · LW(p) · GW(p)
Did that make sense?

Yes, and I can see why you would rather say it that way.

My theory is that most of those who believe quantum suicide is effective assign negative utility to suffering and also assign a negative utility to death, but knowing that they will continue to live in one Everett branch removes the sting of knowing (and consequently the negative utility of the fact) that they will die in a different Everett branch. I am hoping Cameron Taylor or another commentator who thinks quantum suicide might be effective will let me know whether I have described his utility function.

comment by Cameron_Taylor · 2008-10-03T22:20:00.000Z · LW(p) · GW(p)

Richard, Cameron Taylor has still not advocated quantum suicide. That straw man is already dead.

I assign quantum suicide a utility of "(utility(death) + utility(alternative))/2 - time wasted - risk of accidentally killing yourself while making the death machine". That is to say, I think it is bloody stupid.

What I do assert is that anyone answering 'yes' to Eliezer's proposal to destroy the universe with an LHC to avert terrorism would also be expected to use the same mechanism to achieve any other goal for which the utility is lower than the cost of creating an LHC. For E, that would mean his FAI. The question seems to logically imply one of:
- Eliezer can see something different between LHC death and cyanide death.
- There are some really messed up utility functions out there.
- The question is simply utterly trivial, barely worth asking.

comment by Cameron_Taylor · 2008-10-03T22:33:00.000Z · LW(p) · GW(p)

I'd rather say that people who find quantum suicide desirable have a utility function that does not decompose into a linear combination of individual utility functions for their individual Everett branches-- even if they had to deal with a terrorist attack on all of these branches, say. Surely everybody here would find an outcome undesirable where all of their future Everett branches wink out of existence. So if somebody prefers one Everett branch winking out and one continuing to exist to both continuing to exist, you can only describe their utility function by looking at all the branches, not by looking at the different branches individually. (Did that make sense?)
------

I like your explanation Benja. There is no particular reason why a utility function needs to consider 'branch winking out of existence' with the same simplicity with which it evaluates more mundane catastrophes. For example, consider the practice of thousands of gamers out there: "That start sucks! Restart!" I can give no mathematical reason why this preference ought to be dismissed.

comment by james5 · 2008-11-18T21:34:00.000Z · LW(p) · GW(p)

If it fails 100 times in a row, I'll sue the researchers for killing me a hundred times in all those other realities.

Oh the humanity-ity-ity-ty-ty-y-y-y-y!

comment by james5 · 2008-11-18T21:56:00.000Z · LW(p) · GW(p)

Of course the future repeated failures of the LHC have got to seem non-miraculous though, since the likelihood of each experiment failing becomes lower the more experiments you plan on running.

Perhaps some sort of funding problem after a collapse of the world financial system, but that's not likely, is it?

It's like applying the idea of quantum immortality and the anthropic principle to my own experience. Wouldn't it make sense for me to observe my apparent immortality in a world where immortality wasn't miraculous, such as when technology had advanced to a point where it was 'normal'?

A bit of a contradiction there, technology advances to the point where destruction of humanity is easy, but immortality is possible as well.

comment by Cameron_Taylor · 2008-11-19T02:40:00.000Z · LW(p) · GW(p)

Benja: Wrong analogy. You left out a bit. All people who actually HAVE CANCER and who would get a POSITIVE RESULT are killed during the mammogram, never to receive the result. Your task is then to condition first on receiving a result and then on that result being positive, and alter your estimate of how likely you are to have cancer.

(Depending on how you meant the analogy it may be the negative result + positive actual cancer who are killed. Point is, your analogy completely misses the point. Not every person who takes the test gets a result but you do. That is important.)

comment by Cameron_Taylor · 2008-11-19T02:43:00.000Z · LW(p) · GW(p)

Pardon me, ignore that or delete it. I clicked "How Many LHC Failures Is Too Many?" rather than "James" on the recent posts link. Death.

comment by Ben_Jones · 2008-12-02T15:49:00.000Z · LW(p) · GW(p)

Right, that's it, I'm gonna start cooking up some nitroglycerin and book my Eurostar ticket tonight. Who's with me?

I dread to think of the proportion of my selves that have already suffered horrible gravitational death.

comment by Christian_Szegedy · 2009-10-13T20:43:33.265Z · LW(p) · GW(p)

Holger Nielsen sides with this idea.

Playing with quantum suicide?

"Dr. Nielsen and Dr. Ninomiya have proposed a kind of test: that CERN engage in a game of chance, a “card-drawing” exercise using perhaps a random-number generator, in order to discern bad luck from the future. If the outcome was sufficiently unlikely, say drawing the one spade in a deck with 100 million hearts, the machine would either not run at all, or only at low energies unlikely to find the Higgs."

comment by CAE_Jones · 2013-05-02T21:27:13.212Z · LW(p) · GW(p)

Am I misunderstanding, missing a joke, or did the overwhelming majority here consider the probability that the LHC could destroy the world non-negligible? After reading this article, I wound up looking up articles on collider safety just to make sure I wasn't crazy. My understanding of physics told me that all the talk of LHC-related doomsday scenarios was just some sort of science fiction meme. I was under the impression that artificial black holes would take levels of energy comparable to the big bang, and a micro black hole would be pretty low risk even then. (Reading the wikipedia article further, I see that FHI was involved in the raising of concerns over the LHC, which is the closest thing to an explanation for this discussion I've found so far.)

I'm actually kinda concerned about this, since if the discussion on this page is taking LHC risk seriously, then either I or LW had serious problems modeling reality. This wouldn't be in the category of "weird local culture"; cryonics involves a lot of unknowns and most LWers notice this, and UFAI actually makes much more sense as existential risk, since an unfriendly transhuman intelligence would actually be dangerous... but there were plenty of knowns that could be used to predict the LHC's risk, and they all pointed toward the risk being infinitesimal.

If, on the other hand, this was some bit of humor playing on pop-sci memes, used to play with the anthropic principle and quantum suicide, then oops.

Replies from: Qiaochu_Yuan, Eliezer_Yudkowsky
comment by Qiaochu_Yuan · 2013-05-02T21:32:19.076Z · LW(p) · GW(p)

The question "how many LHC failures is too many?" is the question "how negligible was your prior on the LHC being dangerous, really?" Is it low enough to ignore 10 failures? 100? 1000? Do you have enough confidence in your understanding of physics to defy the data that many times?

Replies from: CAE_Jones
comment by CAE_Jones · 2013-05-02T22:01:30.830Z · LW(p) · GW(p)

Ok. Somehow it came across as taking the idea of LHC risk more seriously than is rational. I'm not sure why it didn't feel hypothetical enough (I should have been tipped off when Eliezer didn't mention the obvious part where the LHC would lose funding if the failures became too numerous. I'd consider 1000 LHC failures indicative that my model of how scientists get funding is broken before the LHC actually being a doomsday weapon.).

Replies from: ESRogs
comment by ESRogs · 2013-05-04T23:25:04.717Z · LW(p) · GW(p)

I'd consider 1000 LHC failures indicative that my model of how scientists get funding is broken before the LHC actually being a doomsday weapon.

Not both?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-05-02T22:57:28.696Z · LW(p) · GW(p)

The idea is that the risk is infinitesimal but you want to put an approximate number on that using a method of imaginary updates - how much imaginary evidence would it take to change your mind?

Replies from: CAE_Jones
comment by CAE_Jones · 2013-05-03T10:36:08.560Z · LW(p) · GW(p)

That makes sense. I made a similar misinterpretation on a different post around the same time I read this one, so putting the two together makes me pretty confident I was not thinking at my best yesterday. (Either that, or my best is worse than I usually believe.)