2 Anthropic Questions

post by abramdemski · 2012-05-26T22:51:53.783Z · LW · GW · Legacy · 31 comments

I have just finished reading the section on anthropic bias in Nassim Taleb's book, The Black Swan. In general, the book is interesting to compare to the sort of things I read on Less Wrong; its message is largely very similar, except less Bayesian (and therefore less formal-- at times slightly anti-formal, arguing against misleading math).

Two points concerning anthropic weirdness.

First:

If we win the lottery, should we really conclude that we live in a holodeck (or some such)? From real-life anthropic weirdness:

Pity those poor folk who actually win the lottery!  If the hypothesis "this world is a holodeck" is normatively assigned a calibrated confidence well above 10⁻⁸, the lottery winner now has incommunicable good reason to believe they are in a holodeck.  (I.e. to believe that the universe is such that most conscious observers observe ridiculously improbable positive events.)
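
To make the quoted claim concrete, here is a minimal Bayes-update sketch; the prior on the holodeck hypothesis and the chance of winning given a holodeck are made-up placeholders, not figures from the quote:

```python
# Minimal Bayes update for "I just won the lottery".
# Everything except the 1-in-10^8 winning chance is a hypothetical placeholder.
p_win_given_normal = 1e-8      # ordinary chance of holding the winning ticket
p_win_given_holodeck = 1e-2    # hypothetical: holodecks favor dramatic wins
prior_holodeck = 1e-5          # hypothetical prior on the holodeck hypothesis

evidence = (prior_holodeck * p_win_given_holodeck
            + (1 - prior_holodeck) * p_win_given_normal)
posterior = prior_holodeck * p_win_given_holodeck / evidence
print(f"P(holodeck | won) = {posterior:.2f}")   # ~0.91 with these numbers
```

With these placeholders, the joint probability of "holodeck and I win" is 10^-7, comfortably above 10^-8, so a single win pushes the posterior above 90%.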

It seems to me that the right way of approaching the question is: before buying the lottery ticket, what belief-forming strategy would we prefer ourselves to have? (Ignore the issue of why we buy the ticket, of course.) Or, slightly different: what advice would you give to other people (for example, if you're writing a book on rationality that might be widely read)?

"Common sense" says that it would be quite silly to start believing some strange theory, just because I win the lottery. However, Bayes says that if we assign greater than 10^-8 prior probability to "strange" explanations of getting a winning lottery ticket, then, upon winning, we should prefer them. In fact, we may want to buy a lottery ticket to test those theories! (This would be a very sensible test, which would strongly tend to give the right result.)

However, as a society, we would not want lottery-winners to go crazy. Therefore, we would not want to give the advice "if you win, you should massively update your probabilities".

(This is similar to the idea that we might be persuaded to defect in Prisoner's Dilemma if we are maximizing our personal utility, but if we are giving advice about rationality to other people, we should advise them that cooperating is the optimal strategy. In a somewhat unjustified leap, I suppose we should take the advice we would give to others in such matters. But I suppose that position is already widely accepted here.)

On the other hand, if we were in a position to give advice to people who might really be living in a simulation, it would suddenly be good advice!

 

Second:

Taleb discusses an interesting example of anthropic bias:

Apply this reasoning to the following question: Why didn't the bubonic plague kill more people? People will supply quantities of cosmetic explanations involving theories about the intensity of the plague and "scientific models" of epidemics. Now, try the weakened causality argument that I have just emphasized in this chapter: had the bubonic plague killed more people, the observers (us) would not be here to observe. So it may not necessarily be the property of diseases to spare us humans.

You'll have to read the chapter if you want to know exactly what "argument" is being discussed, but the general point is (hopefully) clear from this passage. If an event was a necessary prerequisite for our existence, then we should not take our survival of that event as evidence for a high probability of survival of such events. If we remember surviving a car crash, we should not take that to increase our estimates for surviving a car crash. (Instead, we should look at other car crashes.)

This conclusion is somewhat troubling (as Taleb admits). It means that the past is fundamentally different from the future! The past is a relatively "safe" place, where every event has led to our survival. The future is alien and unforgiving. As is said in the story The Hero With A Thousand Chances:

"The Counter-Force isn't going to help you this time.  No hero's luck.  Nothing but creativity and any scraps of real luck - and true random chance is as liable to hurt you as the Dust.  Even if you do survive this time, the Counter-Force won't help you next time either.  Or the time after that.  What you remember happening before - will not happen for you ever again."

Now, Taleb is saying that we are that hero. Scary, right?

On the other hand, it seems reasonable to be skeptical of a view which presents difficulties generalizing from the past to the future. So. Any opinions?

31 comments


comment by timtyler · 2012-05-27T00:45:29.531Z · LW(p) · GW(p)

Now, try the weakened causality argument that I have just emphasized in this chapter: had the bubonic plague killed more people, the observers (us) would not be here to observe. So it may not necessarily be the property of diseases to spare us humans.

Fortunately, we know plenty about diseases from other species - where we don't need to be concerned with anthropic bias.

Replies from: abramdemski, JoshuaZ, gwern
comment by abramdemski · 2012-05-27T22:49:06.997Z · LW(p) · GW(p)

Fortunately, we know plenty about diseases from other species - where we don't need to be concerned with anthropic bias.

Yes, I think this is an important point. It's similar to the idea that we can still figure out how lethal car crashes tend to be, if we look at other car crashes rather than the ones we've been in.

Interestingly, this sort of thing is perfectly "communicable". If I have survived several car crashes, I can tell you about it; and you can similarly tell me about events that you have survived. We are all survivors, so it's not quite like the "quantum immortality" scenario where you perceive yourself to be uniquely immune to death in a world of mortals. (For Taleb, it is an important point: we should not take advice from the "survivors" we will frequently see; we have to work hard to account for the many dead who will unfortunately not be giving us advice based on their experiences.)

comment by JoshuaZ · 2012-05-27T03:03:54.973Z · LW(p) · GW(p)

We do however have examples of diseases that seem to be doing a decent job of wiping out their host species. Look at Tasmanian Devil face cancer for example. It is likely that the primary reason we don't see diseases wiping out a lot of species right now is that, with all the human-caused extinctions, the ones being caused by disease are being lost in the noise.

Replies from: timtyler
comment by timtyler · 2012-05-27T11:36:53.698Z · LW(p) · GW(p)

Diseases are surely having a field day at the moment - due to humans stirring their ecosystems, and introducing unfamiliar pathogens to hosts with no resistance.

My point was that biologists don't just depend on sources muddied by selection effects for their knowledge of this subject.

comment by gwern · 2012-05-27T02:45:19.297Z · LW(p) · GW(p)

Are you sure? Many diseases cross over from animals, including some famous recent examples...

Or to put it another way, animal diseases are just a step removed from the anthropic filter: if there were more extremely fatal animal diseases, then because they often cross over to humans...

Replies from: timtyler, JoshuaZ
comment by timtyler · 2012-05-27T11:47:22.768Z · LW(p) · GW(p)

There are plenty that don't cross over to humans.

I don't think there's any shortage of data unmuddied by anthropic bias.

comment by JoshuaZ · 2012-05-27T03:06:20.534Z · LW(p) · GW(p)

Diseases of some species don't easily cross over to humans, and in fact most diseases don't cross over easily (influenza and SIV are major exceptions, but the pattern is that cross-over is itself pretty rare). We could maybe look at species whose diseases don't cross over to humans due to much different biology, like say in cephalopods. I'm not sure we know enough about disease in such species, though, or that they are similar enough to mammals to be a useful comparison.

comment by Grognor · 2012-05-27T05:37:10.967Z · LW(p) · GW(p)

Your first question comes out differently depending on why you bought the ticket. If you play the lottery for the usual reason (because people are irrational and cannot do math), then it is safe to say that you will not only be better off not doing any updating if you magically end up in the winners' pool, but you probabilistically have no idea what Bayes' theorem is. This is me looking at you, hypothetical irrational person.

However, if you buy a lottery ticket in order to test whether you're in a simulation, then you have to consider that one of the following things is true:

  1. You are not in a simulation.
  2. You are in a simulation, and the simulators do not want you to know. In which case, do not bet on being able to find out. But it brings up more questions, like why are we able to talk about simulations at all?
  3. You are in a simulation, and the simulators do want you to know. In which case, why use something esoteric like lotteries? It's an antiprediction that it would be something other than a lottery, like an angel appearing in your bedroom and telling you himself, or something.
  4. You are in a simulation, and the simulators want sufficiently clever people to be able to figure out they're in a simulation.
  5. Something I haven't thought of. I always include these on my lists but I never think about them because it's a tautology that I have not done so.

I think #4 is the only interesting possibility, so it might make sense to buy one lottery ticket, but that's not really very clever as tests go. I wouldn't recommend more than that, though; they're addicting. You could also attempt to perform private miracles to see if they work, though that's even less clever. I admit to having done this.

Replies from: DanArmak, Armok_GoB, abramdemski, kilobug
comment by DanArmak · 2012-06-02T10:44:20.819Z · LW(p) · GW(p)

But it brings up more questions, like why are we able to talk about simulations at all?

Because things are connected, and to make humans unable to talk or conceive of simulations would require great changes to the simulated history in many other places.

In the scenario where future humans are running the sim to simulate their own (exact or approximate) past history, simulated humans have to know and talk about simulations, because they are simulating the original humans who proceeded to build simulations!

comment by Armok_GoB · 2012-05-27T16:24:15.992Z · LW(p) · GW(p)

6. You are in a simulation, and the simulators care much more about accuracy and non-intervention than about whether you can figure out you're in a simulation, as long as you can't convince the general public of it.

comment by abramdemski · 2012-05-27T22:40:00.444Z · LW(p) · GW(p)

I see you as arguing that the probability of winning the lottery still seems to be low, even in "strange worlds" (like simulations). I agree. It just seems to be much higher, which is what we need. The physical probability of me selecting the winning numbers is several orders of magnitude smaller than the seeming 'mental salience' of the possibility; and the mental salience is a much better estimate of the odds if the universe is 'fundamentally mental' (ie, a simulation put together by intelligent beings, or other related strange possibilities).

So, it still seems to be an excellent test; if we buy a ticket and win, we can conclude that we are in a simulation (or other 'strange world').

comment by kilobug · 2012-05-27T20:21:36.649Z · LW(p) · GW(p)

I don't think abramdemski was referring to a "generic" simulation, as if you were running the laws of physics in a computer and consciousness happened to arise in the simulated world (like replicators arising in a Game of Life), or anything like that.

He said "holodeck": if I got the concept right, then in the real world there is a player, and only one, who is running through a simulation in which he is a PC, while all the others are non-self-aware NPCs. And in this scenario, the player doesn't want it to be clear that he is inside a simulation (in most CRPGs there is no in-game evidence that it is just a game; only very few have any), but on the other hand, he does want unusual things to happen to him - like winning the lottery. Even if the sheer luck of the PC is suspicious - as it is in most games and movies.

So the hypothesis "I am in fact a player who is controlling the PC in a game, in which the scenario is that the PC wins the lottery and then can have fun with the money" does make sense. And indeed, if you had a >10^-8 prior probability of it being true, then after winning the lottery you should assign it a decent chance of being true.

But I don't think a 10^-8 prior is really that low for such a scenario. It has a lot of "and"s in it, and each "and" multiplies the probabilities down further...

Replies from: abramdemski
comment by abramdemski · 2012-05-27T22:44:48.786Z · LW(p) · GW(p)

But I don't think a 10^-8 prior is really that low for such a scenario. You've a lot of "and" in it, and each "and" does a multiplication of the probabilities...

Well, maybe... I suppose it's "difficult to estimate". My intuition is that there will be some "strange possibilities" which make the probability of winning the lottery much higher. But maybe those particular "strange possibilities" have a prior probability significantly lower than 10^-8, since we have to pick them out of the space of possible "strange possibilities"...

comment by philh · 2012-05-27T00:20:50.745Z · LW(p) · GW(p)

If the hypothesis "this world is a holodeck" is normatively assigned a calibrated confidence well above 10⁻⁸, the lottery winner now has incommunicable good reason to believe they are in a holodeck. (I.e. to believe that the universe is such that most conscious observers observe ridiculously improbable positive events.)

It isn't clear to me that I'm that much more likely to win the lottery if the world is a holodeck. Of all possible holodecks, why one in which I win the lottery? The world doesn't look like I'd expect it to if "most conscious observers observe ridiculously improbable positive events"; unless they just happened to start the simulation shortly before I won the lottery, and give people memories of a past that doesn't look like the future. And that in turn seems vastly unlikely even conditioning on "the world is a holodeck".

Replies from: Manfred
comment by Manfred · 2012-05-27T00:58:44.309Z · LW(p) · GW(p)

Yeah, who cares about winning the lottery. I want my volcano lair filled with catgirls.

Replies from: shokwave
comment by shokwave · 2012-05-27T11:57:51.063Z · LW(p) · GW(p)

If volcano lairs with cat girls have diminishing returns on utility, and we have a lot of time, it's plausible we end up simulating winning the lottery.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-05-27T15:00:05.970Z · LW(p) · GW(p)

The question is, how much more probable is that than that we end up simulating losing the lottery?

Replies from: wedrifid
comment by wedrifid · 2012-05-27T17:01:20.666Z · LW(p) · GW(p)

The question is, how much more probable is that than that we end up simulating losing the lottery?

I almost agreed and answered "7 times more" before noticing that this is not quite the right question. We shouldn't be asking "how much more probable" when simulating a win can actually be less probable than ending up simulating losing the lottery while the "win" observation is still evidence in favor of simulation.

An actual "how much more probable" question that fits - and is the one I was initially trying to answer - seems to be:

How much more probable is it that a given world selected from the expected simulated worlds gives a win than that I would win the real lottery?

It's hard to specify the question more simply than that (and even that specification is borderline). The language gets ambiguous or misleading when it comes to "but we'll simulate both!" scenarios.
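
A toy illustration of the distinction, with purely hypothetical numbers: even if losing-runs vastly outnumber winning-runs among the simulations, a win can still be strong evidence for simulation, because what matters is the likelihood ratio rather than which kind of simulation is more common.

```python
# What matters is P(I observe a win | simulated) vs P(I win | real lottery),
# not whether win-simulations outnumber loss-simulations.
# Both numbers below are hypothetical placeholders.
p_win_real = 1e-8   # physical lottery odds
p_win_sim = 0.1     # hypothetical: only 1 in 10 simulated worlds features a win

bayes_factor = p_win_sim / p_win_real
print(f"Bayes factor favoring 'simulated' after a win: {bayes_factor:.0e}")  # 1e+07
```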

Replies from: TheOtherDave, abramdemski
comment by TheOtherDave · 2012-05-27T17:13:36.446Z · LW(p) · GW(p)

I've missed you!

Agreed with all of this.

comment by abramdemski · 2012-05-27T23:02:36.247Z · LW(p) · GW(p)

How much more probable is it that a given world selected from the expected simulated worlds gives a win than that I would win the real lottery?

Yes, very good... I suppose I'm not sure. My heuristic is to imagine that simulations are probable roughly in proportion to the shortness of their English descriptions, whereas "real worlds" are probable in proportion to physical probability (ie, descriptions in the language of physics). According to that heuristic, the question is how probable the phrase "winning the lottery" is as compared to 10^-8 (which I assume is a sufficiently good estimate of the physical probability of winning the lottery, conditioned on experiences so far). I don't have a good estimate of this phrase's frequency. (Anyone have suggestions for how to find one?)
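
One crude way to operationalise this, using purely hypothetical corpus figures (I have no real n-gram counts to hand), is to treat the scenario's probability as the frequency of its description in a large English corpus and compare that to 10^-8:

```python
# Hypothetical corpus figures only -- placeholders for illustration,
# not real n-gram counts.
corpus_total_trigrams = 1e12    # hypothetical corpus size
count_win_the_lottery = 2e5     # hypothetical count of the phrase "win the lottery"

p_description = count_win_the_lottery / corpus_total_trigrams
p_physical = 1e-8

print(f"description-frequency estimate: {p_description:.0e}")               # 2e-07
print(f"ratio to physical probability: {p_description / p_physical:.0f}x")  # 20x
```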

comment by Thomas · 2012-05-27T07:06:34.995Z · LW(p) · GW(p)

At least, when you win a lottery, assign the biggest probability to the possibility that it is a hoax or a mistake of some kind, since that is usually the case. Real winners are rare among those who get the impression they are.

Even when you are watching the YouTube poker video where FR just won over AAAA, consider yourself one of those who are watching a fake, not a real event with a probability of one in several tens of millions. When you see a famous character in the video, you can be pretty damn sure it's a hoax.

A hoax, a mistake, or an illusion is even more probable than a holodeck.

comment by jacobt · 2012-05-26T23:43:03.328Z · LW(p) · GW(p)

For the second question:

Imagine there are many planets with a civilization on each planet. On half of all planets, for various ecological reasons, plagues are more deadly and have a 2/3 chance of wiping out the civilization in its first 10000 years. On the other planets, plagues only have a 1/3 chance of wiping out the civilization. The people don't know if they're on a safe planet or an unsafe planet.

After 10000 years, 2/3 of the civilizations on unsafe planets have been wiped out and 1/3 of those on safe planets have been wiped out. Of the remaining civilizations, 2/3 are on safe planets, so the fact that your civilization survived for 10000 years is evidence that your planet is safe from plagues. You can just apply Bayes' rule:

P(safe planet | survive) = P(safe planet) P(survive | safe planet) / P(survive) = 0.5 * 2/3 / 0.5 = 2/3
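
A quick Monte Carlo sketch of the same setup reproduces the 2/3 figure:

```python
# Monte Carlo check of the planets example above.
import random

random.seed(0)
N = 1_000_000
safe_survivors = unsafe_survivors = 0

for _ in range(N):
    safe = random.random() < 0.5          # half the planets are safe
    p_wipeout = 1/3 if safe else 2/3
    if random.random() > p_wipeout:       # this civilization survives
        if safe:
            safe_survivors += 1
        else:
            unsafe_survivors += 1

print(safe_survivors / (safe_survivors + unsafe_survivors))  # ~0.667
```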

EDIT: on the other hand, if logical uncertainty is involved, it's a lot less clear. Suppose either all planets are safe or none of them are safe, based on the truth-value of a logical proposition (say, the trillionth digit of pi being odd) that is estimated to be 50% likely a priori. Should the fact that your civilization survived be used as evidence about the logical coin flip? SSA suggests no, SIA suggests yes, because more civilizations survive when the coin flip makes all planets safe. On the other hand, if we changed the thought experiment so that no civilization survives if the logical proposition is false, then the fact that we survived is proof that the logical proposition is true.

Replies from: abramdemski
comment by abramdemski · 2012-05-27T23:08:35.281Z · LW(p) · GW(p)

Yes! I thought of this too. So, the anthropic bias does not give us a reason to ignore evidence; it merely changes the structure of specific inferences. We find that we are in an interestingly bad position to estimate those probabilities (the probability will appear to be 0%, if we look just at our history). Yet, it does seem to provide some evidence of higher survival probabilities; we just need to do the math carefully...

comment by shokwave · 2012-05-27T11:59:10.891Z · LW(p) · GW(p)

Taleb has the second question mostly right. It is reasonable to be skeptical of a view which presents difficulties reasoning from the past to the future, but we have a lot of evidence that we're bad at reasoning from the past to the future, and the models that suggest anthropic issues are robust.

comment by Richard_Kennaway · 2012-05-28T05:46:11.564Z · LW(p) · GW(p)

However, Bayes says that if we assign greater than 10^-8 prior probability to "strange" explanations

Well, don't do that then. Does 10^-8, besides being the chances of a ticket winning a typical big lottery, carry in addition the implied meaning "unimaginably small", "so small that one must consider all manner of weird other possibilities that in fact we have no way of assessing the probability of, but 10^-8 is so extraordinarily small that surely they must be considered alongside the simple explanation that my ticket won"? "How could we ever be 10^-8 sure of anything?"

Because I would dispute that. Consider someone who has a lottery ticket in their hand, for a draw about to be announced, with 1 chance in 100,000,000 of having the winning numbers. If their numbers are drawn, they must overcome 80dB of prior improbability to be persuaded of that. (It does not matter whether they know that is what they are doing: they are nonetheless doing that.) An impossible task? No, almost all jackpots in the Euromillions lottery (probability 1/76275360) are claimed. Ordinary people, successfully comparing two strings of seven numbers and getting the right answer. It is news when a Euromillions jackpot goes unclaimed for as little as one week.

One of the alternative hypotheses that one must consider, of course, is the mundane "I am mistaken: this is not a winning ticket, despite the fact that I have stared at the two sets of numbers and the date over and over and they still appear to be identical." I don't know how many false positives the claims line gets. But the jackpot is awarded at least every few weeks, and every time it is claimed by people who were not mistaken.

There is no such thing as a small number.
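
Here is a minimal sketch of that bookkeeping in decibels of evidence; the per-check likelihood ratios are hypothetical illustrations, not measured error rates:

```python
import math

def db(likelihood_ratio: float) -> float:
    """Evidence in decibels: 10 * log10(likelihood ratio)."""
    return 10 * math.log10(likelihood_ratio)

p_jackpot = 1 / 76_275_360     # Euromillions odds quoted above
needed = db(1 / p_jackpot)     # ~78.8 dB, roughly the "80dB" mentioned above

# Hypothetical, assumed-independent evidence sources for "my ticket really matches":
checks = {
    "compare the numbers carefully": db(1e3),   # assume a 1-in-1000 false-match rate per read
    "re-check the next morning":     db(1e3),   # independence is optimistic, but illustrative
    "lottery operator verifies":     db(1e4),   # assumed near-foolproof official check
}

total = sum(checks.values())
print(f"needed {needed:.1f} dB, accumulated {total:.1f} dB")
print("claim the jackpot" if total > needed else "keep checking")
```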

Replies from: abramdemski
comment by abramdemski · 2012-06-04T03:33:01.040Z · LW(p) · GW(p)

There are two questions we must consider, according to Bayes: What is the prior probability of living in a simulation, and given we live in a simulation, what is the probability of winning the lottery?

We can invoke your argument at either point, and I'm not sure which you intended.

- Is the 10^-8-improbable win enough evidence to overcome the prior improbability? In this case, "prior" means just before we bought the ticket; so we have a lifetime of evidence to help us decide whether we live in a simulation. (Determining this may be difficult, of course, but the lottery argument presumes we can get some distinguishing evidence in various ways!)
- Is 10^-8 actually so much lower than the probability of winning the lotto in a simulation? It could even be higher, depending on what we think is likely!

Other commenters pointed out the second possibility, but I didn't think of the first until your post. We might accept the idea that winning the lotto is rather more probable in a simulation, and yet reject the idea that we should believe we're in a simulation if we win, simply because the simulation hypothesis is much more complex than the regular-world hypothesis. We are then "safe" unless we win the lotto twice. :)
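
Rough arithmetic behind the "twice" remark, with hypothetical numbers for the simulation prior and for the per-draw win chance inside a simulation:

```python
# All inputs except the 1e-8 real-lottery odds are hypothetical placeholders.
p_win_real = 1e-8
p_win_sim = 1e-2      # hypothetical: wins are "narratively likely" inside a simulation
prior_sim = 1e-11     # hypothetical: the simulation hypothesis judged very complex

def posterior(n_wins: int) -> float:
    """P(simulation | n independent lottery wins)."""
    sim = prior_sim * p_win_sim ** n_wins
    real = (1 - prior_sim) * p_win_real ** n_wins
    return sim / (sim + real)

print(posterior(1))   # ~1e-05: one win barely moves such a low prior
print(posterior(2))   # ~0.91:  a second win overwhelms it
```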

comment by DanArmak · 2012-06-02T10:48:01.904Z · LW(p) · GW(p)

Why didn't the bubonic plague kill more people?

Why does it surprise him that it didn't? What is his evidence that the plague would have been expected to kill more people than it did?

Replies from: abramdemski
comment by abramdemski · 2012-06-02T18:22:38.204Z · LW(p) · GW(p)

He doesn't have any evidence like that. He is merely pointing out that if we were to ask that question among experts, we would get post-facto explanations, which he would take with a heap of salt (because of the anthropic bias).

Taleb's brand of rationality, which he calls empirical skepticism (as opposed to just empiricism or just skepticism), largely trumpets uncertainty. I think he sees this as against Bayesians (because Bayesians will usually choose an artificially narrow hypothesis space when making formal Bayesian models, and as a result will usually get answers which are much more certain than is merited). He hasn't yet spoken about Bayesians specifically, though-- just "nerds" (statisticians who lack street smarts). When reading his stuff, though, I feel it converts well into Bayes. He is just saying that we shouldn't allow our beliefs to converge faster than is merited.

People are overconfident far more often than underconfident.

So, his point with the black plague is really that we should answer "I don't know" if we are asked such a question, and even if an expert gives a better-sounding answer, we should assume it's an example of the narrative fallacy.

Replies from: DanArmak
comment by DanArmak · 2012-06-02T19:39:01.359Z · LW(p) · GW(p)

The point of his argument, if I understand correctly, is that we should expect a bubonic plague in the future to be more of an x-risk than it was in the past, because our past evidence is filtered by anthropic considerations. And because his argument isn't in any way specific to the plague, he will expect x-risks in general to be more prevalent in the future.

However, I don't understand how to quantify this. How much should I update towards the next bubonic plague being an x-risk? A little? A lot?

The historical plague could have wiped out humanity, but for anthropic reasons. And also, the flu of 1918 could have wiped out humanity, but for anthropic reasons. And the flu virus created recently in the lab could have escaped and wiped out humanity, but didn't, for anthropic reasons. And also I have in my garage the pestilent bacterium Draco invisibilis, and if it ever infects a human, we are all doomed; but it never has, for anthropic reasons...

comment by Nisan · 2012-05-28T09:32:47.705Z · LW(p) · GW(p)

Regarding the first question: We are making an argument here about what kind of advice we would want to give to people, and we are also considering a hypothesis under which most people aren't "conscious". Using a functionalist philosophy of mind, perhaps we can say that we cannot expect to change the behavior of non-conscious people by giving them advice. So then maybe there is no reason to want to advise anyone against updating in favor of a simulation hypothesis.

On the other hand, there is the hypothesis that everyone is "conscious", but we are in a simulation in which one particular person has good things happen to them. In that case, every time someone wins the lottery, we want to update in favor of a simulation hypothesis under which that person is special.

comment by see · 2012-05-27T00:47:39.718Z · LW(p) · GW(p)

In the second case, the easy answer is, "How often do diseases wipe out other species?" That is, when possible, calibrate possibilities against similar events that don't involve anthropic questions. Our disease model tends to hold quite effectively for animals whose extinction would be harmless or even beneficial from the anthropic perspective.