An Anthropic Principle Fairy Tale

post by Nominull · 2012-08-28T20:48:16.200Z · LW · GW · Legacy · 17 comments


A robot is going on a one-shot mission to a distant world to collect important data needed to research a cure for a plague that is devastating the Earth. When the robot enters hyperspace, it notices some anomalies in the engine's output, but it is too late to get the engine fixed. When similar anomalies have been observed in other engines, 25% of the time they have indicated a fatal problem, such that the engine exploded on virtually every jump it attempted; 25% of the time they have been a false positive, and the engine exploded only at its normal negligible rate; and 50% of the time they have indicated a serious problem, such that each jump was about a 50/50 chance of exploding.

Anyway, the robot goes through the ten jumps to reach the distant world, and the engine does not explode. Unfortunately, the jump coordinates for the mission were a little off, and the robot is in a bad data-collecting position. It could try another jump - if the engine doesn't explode, the extra data it collects could save lives. If the engine does explode, however, Earth will get no data from the distant world at all. (The FTL radio is only good for one use, so the robot can't collect data and then jump.)

So how did you program your robot? Did you program your robot to believe that since the engine worked ten times, the anomaly was probably a false positive, and so it should make the jump? Or did you program your robot to follow the "Androidic Principle" and disregard the so-called "evidence" of the ten jumps, since it could not have observed any other outcome? People's lives hang in the balance here. A little girl is too sick to leave her bed; she doesn't have much time left; you can hear the fluid in her lungs as she asks you, "are you aware of the anthropic principle?" Well? Are you?

17 comments

Comments sorted by top scores.

comment by orthonormal · 2012-08-26T23:39:25.531Z · LW(p) · GW(p)

Isn't this one adequately dissolved by the "anthropics as decision theory" perspective? If I'm programming the robot, I weigh the relevant problem as if there were 512:1:epsilon odds of false positive/serious problem/fatal problem, and thus I'd have it make the final jump if and only if the expectation of better data justified a roughly 1-in-1026 chance of mission failure.
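A quick check of that arithmetic, as a minimal Python sketch. The per-jump explosion probabilities for the "false positive" and "fatal" cases are idealized here to 0 and 1, standing in for the story's "negligible" and "virtually every time":

```python
# Posterior over engine conditions after 10 successful jumps, and the
# risk of an 11th jump. "Negligible" and "virtually every time" are
# idealized to 0.0 and 1.0; real values would shift the odds only slightly.

priors = {"false_positive": 0.25, "serious": 0.50, "fatal": 0.25}
p_explode_per_jump = {"false_positive": 0.0, "serious": 0.5, "fatal": 1.0}

# Likelihood of surviving 10 jumps under each hypothesis.
likelihood = {h: (1 - p_explode_per_jump[h]) ** 10 for h in priors}

# Unnormalized posterior = prior * likelihood; then normalize.
unnorm = {h: priors[h] * likelihood[h] for h in priors}
total = sum(unnorm.values())
posterior = {h: w / total for h, w in unnorm.items()}

# Odds relative to the "serious" hypothesis: 512 : 1 : 0.
odds = {h: posterior[h] / posterior["serious"] for h in priors}

# Probability the 11th jump explodes, given survival so far.
p_fail_next = sum(posterior[h] * p_explode_per_jump[h] for h in priors)

print(odds)             # {'false_positive': 512.0, 'serious': 1.0, 'fatal': 0.0}
print(1 / p_fail_next)  # ~1026, i.e. about a 1-in-1026 chance of failure
```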

Also, I don't think that your tone at the end adds anything.

Replies from: Nominull
comment by Nominull · 2012-08-26T23:44:28.019Z · LW(p) · GW(p)

Uh, yes, you would program the robot that way. That's the point.

Replies from: mfb
comment by mfb · 2012-08-28T12:55:39.174Z · LW(p) · GW(p)

In this case, where is the issue? If the ship has survived 10 jumps, it is probably safe to make another one - the same decision could be made on Earth, which exists in both relevant cases.

With sufficient intelligence, the robot could calculate a probability of 1-epsilon that there are robots from Earth, even if Earth could have developed without a robot-building species. The robot can use its own existence to update probabilities in both cases.

comment by thomblake · 2012-08-27T19:32:59.727Z · LW(p) · GW(p)

A post formatted this badly does not belong in Main.

comment by [deleted] · 2012-08-30T14:30:21.043Z · LW(p) · GW(p)

Replying to this thread to test a question mentioned here: http://lesswrong.com/r/discussion/lw/eb9/meta_karma_for_last_30_days/7ap4

comment by maia · 2012-08-30T17:49:24.444Z · LW(p) · GW(p)

I think this post is attacking a strawman associated with the Anthropic Principle, but I don't quite understand the nature of said strawman. What kind of arguments were you trying to show fault in by writing this?

comment by DanielLC · 2012-08-28T17:29:07.150Z · LW(p) · GW(p)

The robot could have observed lots of other things. It could have observed that it hasn't even started the journey. It could have observed that it's a factory robot, rather than a space probe. It could have observed that it's a human.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-08-28T22:16:13.927Z · LW(p) · GW(p)

Then it should follow a conditional strategy that specifies what it should do in each of these cases, depending on what's observed. This strategy should be chosen based on the consequences of following it in all of these cases simultaneously.
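A minimal sketch of that kind of policy selection in Python. The observations, probabilities and payoffs below are illustrative toy numbers, not anything drawn from the story; the point is only that the whole observation-to-action map is scored at once, across every branch:

```python
# Choose the whole conditional strategy up front: enumerate policies
# (maps from observation to action) and score each one by its expected
# payoff across every observation branch simultaneously.
from itertools import product

observations = ["survived_10_jumps", "still_on_earth"]
actions = ["jump_again", "stay_put"]

# P(observation) and payoff(observation, action): toy numbers for illustration.
p_obs = {"survived_10_jumps": 0.25, "still_on_earth": 0.75}
payoff = {
    ("survived_10_jumps", "jump_again"): 1.0,
    ("survived_10_jumps", "stay_put"): 0.8,
    ("still_on_earth", "jump_again"): 0.0,
    ("still_on_earth", "stay_put"): 0.5,
}

def expected_value(policy):
    # policy is a dict: observation -> action
    return sum(p_obs[o] * payoff[(o, policy[o])] for o in observations)

# Enumerate every possible policy and keep the best one.
policies = [dict(zip(observations, acts))
            for acts in product(actions, repeat=len(observations))]
best = max(policies, key=expected_value)
print(best)  # the action chosen for each case was fixed before observing anything
```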

comment by selylindi · 2012-08-27T14:52:37.012Z · LW(p) · GW(p)

Presumably you're referencing the notion of quantum immortality. If QI is a possibly real effect, then the robot's repeated survival counts as evidence for both the false-positive situation and for QI. For plausible priors it probably makes no difference, because in this story the robot's survival and humanity's survival are linked, and if QI applies to the robot then it applies to humanity, too.

Replies from: philh, ArisKatsaris
comment by philh · 2012-08-27T17:21:10.173Z · LW(p) · GW(p)

Remember that if humanity only survives due to QI, then almost all of humanity's total measure does not survive. We can't observe the "universes" in which humanity is dead, but that doesn't mean we shouldn't care about them.

Replies from: selylindi
comment by selylindi · 2012-08-28T16:31:19.347Z · LW(p) · GW(p)

Why should I care about dead measure? Serious question.

Replies from: philh, Manfred
comment by philh · 2012-08-28T21:09:54.280Z · LW(p) · GW(p)

I care because that measure contains human beings, and I care about human beings, even ones I can't observe.

I can't tell you what to care about, but I repeat that "we can't observe that measure" is not (as far as I'm concerned) a reason to jump to "I don't care about the human beings who make up that measure".

(I suspect there's a sequence post that covers this better than I can, but I don't know it offhand.)

comment by Manfred · 2012-08-28T22:32:38.952Z · LW(p) · GW(p)

You should care for the same reason you don't want to die - not because being dead is so bad, but because you'd rather be living.

If you play Russian roulette with a quantum coin, "you" don't move into a different world if the gun fires. There is no immortal soul that occupies quantum states like a hermit crab, scuttling off into a different-sized shell when necessary. If you don't like dying, you won't like getting shot.

It's possible to construct an agent that rationally chooses quantum suicide. But this is inconsistent with not wanting to die.

comment by ArisKatsaris · 2012-08-29T11:04:36.603Z · LW(p) · GW(p)

This isn't about quantum immortality, because frankly we don't give a damn about the robot's internal subjective experience, only about how we'll program it so that it maximizes our own expected benefit.

"Quantum immortality" is about the supposed persistence of an internal subjective experience.

Replies from: selylindi
comment by selylindi · 2012-08-29T18:05:54.222Z · LW(p) · GW(p)

Suppose the reality of "quantum immortality" effects is one hypothesis that the robot considers. The probability of observing survival of all 10 jumps given the noQI hypothesis is approximately 25% (0.25 × 1 for a false positive, plus 0.5 × 2^-10 for a serious problem, plus essentially nothing for a fatal one), since the robot could have failed to observe anything at all. But given the QI hypothesis, the probability of observing survival of all 10 jumps is virtually 100%, since some version of the robot would survive to observe itself under all three engine-failure conditions. If QI had a prior P(QI) = X before, after the 10 jumps it has a posterior of roughly 4X/(1+3X); that is, the odds on QI go up by a factor of about 4. So it seems clear that QI is relevant to whether the anthropic evidence can be usefully employed, and that the anthropic evidence is relevant to the evaluation of QI. Furthermore, it becomes relevant to the robot's decision whether you value merely the fact that the person is alive (as I initially assumed) or instead the total measure of the person that is alive (as philh argued).
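A minimal Python sketch of that update, idealizing the "negligible" and "virtually every time" explosion rates as 0 and 1, and treating P(observation | QI) as exactly 1:

```python
# Bayesian update on the QI hypothesis after observing ten survived jumps.
# Under noQI, the chance of being around to observe the survival is just the
# chance the engine survives: 0.25*1 + 0.5*0.5**10 + 0.25*0 ~ 0.25.
# Under QI, the observation is made with probability ~1.

def posterior_qi(x):
    """Posterior P(QI) after ten survived jumps, given prior P(QI) = x."""
    p_obs_given_noqi = 0.25 * 1 + 0.5 * 0.5 ** 10 + 0.25 * 0   # ~0.2505
    p_obs_given_qi = 1.0
    return x * p_obs_given_qi / (x * p_obs_given_qi + (1 - x) * p_obs_given_noqi)

print(posterior_qi(0.01))  # ~0.039: the odds on QI rise by roughly a factor of 4
```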

There is no requirement to interpret QI in the silly way you and Manfred characterized it. An objective version is simply that some of the total measure of a person will survive.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-08-29T23:28:31.027Z · LW(p) · GW(p)

Quantum Immortality is the idea that you personally will never experience death, because somehow those versions of you that die "can't experience anything" and so don't count -- it's an idea that can only be believed by people who have a confused mystical view of Death as a single event on a physical level, instead of as the termination of a trillion different individual processes in the brain.

The example that this article provides, on the other hand, can be simulated programmatically (and thus answered very specifically on a decision-theoretic level) by a simple process that undergoes "fork()" and is variably terminated or allowed to continue further.

As such it has nothing to do with the various confusions that the "quantum immortality" believers tend to entertain.
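A minimal Python stand-in for that kind of simulation (no real fork(): each "branch" is just a simulated robot, using the engine-condition probabilities from the story, with "negligible" and "virtually every time" idealized to 0 and 1):

```python
# Toy stand-in for the fork()-style simulation: each "branch" is a record
# that either survives its jumps or is terminated when the engine explodes.
import random

random.seed(0)

def run_branch():
    """Simulate one robot: draw an engine condition, then attempt 10 jumps."""
    condition = random.choices(
        ["false_positive", "serious", "fatal"], weights=[0.25, 0.5, 0.25])[0]
    p_explode = {"false_positive": 0.0, "serious": 0.5, "fatal": 1.0}[condition]
    survived = all(random.random() > p_explode for _ in range(10))
    return condition, survived

# Among the branches that survive all ten jumps, almost all had a false alarm;
# that is the decision-relevant content of the "anthropic" observation.
survivors = [c for c, ok in (run_branch() for _ in range(100_000)) if ok]
print(len(survivors) / 100_000)                            # ~0.25 of branches survive
print(survivors.count("false_positive") / len(survivors))  # ~0.998
```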

Replies from: selylindi
comment by selylindi · 2012-09-04T19:01:16.566Z · LW(p) · GW(p)

I don't care to argue the true definition of the QI hypothesis, though it does indeed have an identifiable original form. The math I mentioned still works for both your mystical-QI-hypothesis and my physical-QI-hypothesis. Both versions of the hypothesis will have their odds roughly quadrupled, which could give them enough weight (e.g. for an AIXI-bot) to noticeably affect the choice of action that will return the optimal expected value. Specifically, if it has been programmed to care about the total measure of surviving humanity, then the more weight is given to QI hypotheses, the less weight (proportionally) is given to the false-positive hypothesis, and the less likely the robot is to take new jumps.

Here's Wikipedia's description of the original thought experiment: quantum suicide and immortality. Quite importantly to the whole point, in the thought experiment death does indeed come by a machine controlled by a single quantum event. Here's a critical view, though in my opinion it quickly gets bogged down in dubious philosophy-of-mind assumptions.