Perspective Reasoning’s Counter to The Doomsday Argument

post by Xianda_GAO_duplicate0.5321505782395719 · 2017-09-16T19:39:21.597Z · LW · GW · Legacy · 38 comments

To be honest I feel a bit frustrated that this is not getting much attention. I am obviously biased, but I think this article is quite important. It points out that the controversies surrounding the doomsday argument, the simulation argument, Boltzmann brains, the presumptuous philosopher, the sleeping beauty problem and many other aspects of anthropic reasoning are all caused by the same thing: perspective inconsistency. If we keep to the same perspective, the paradoxes and weird implications just go away. I am not an academic so I have no easy channel for publication. That's why I am hoping this community can give some feedback. If you have half an hour to waste anyway, why not give it a read? There's no harm in it.


Abstract: 

From a first-person perspective, a self-aware observer can inherently distinguish herself from other individuals. From a third-person perspective, however, this identification through introspection does not apply. On the other hand, because an observer's own existence is a prerequisite for her reasoning, she will always conclude that she exists from a first-person perspective. This means an observer has to take a third-person perspective to meaningfully contemplate her chance of not coming into existence. Combining these points suggests that arguments which use both identification through introspection and information about one's chance of existence fail by not keeping a consistent perspective. This helps explain questions such as the doomsday argument and the sleeping beauty problem. Furthermore, it highlights the problems with anthropic principles such as the self-sampling assumption and the self-indication assumption.


Any observer capable of introspection is able to recognize herself as a separate entity from the rest of the world. Therefore a person can inherently identify herself from other people. However, due to the first-person nature of introspection it cannot be used to identify anybody else. This means from a third-person perspective each individual has to be identified by other means. For ordinary problems this difference between first- and third-person reasoning bears no significance so we can arbitrarily switch perspectives without affecting the conclusion. However this is not always the case.

One notable difference between the perspectives concerns the possibility of not existing. Because one's existence is a prerequisite for her thinking, from a first-person perspective an observer will always conclude that she exists (cogito ergo sum). It is impossible to imagine what your experiences would be like if you did not exist, because the notion is self-contradictory. Therefore, to envisage scenarios in which she does not come into existence, an observer must take a third-person perspective. Consequently, any information about her chances of coming into existence is only relevant from a third-person perspective.

Now with the above points in mind let’s consider the following problem as a model for the doomsday argument (taken from Katja Grace’s Anthropic Reasoning in the Great Filter):


God’s Coin Toss

Suppose God tosses a fair coin. If it lands on heads, he creates 10 people, each in their own room. If it lands on tails he creates 1000 people, each in their own room. The people cannot see or communicate with the other rooms. Now suppose you wake up in a room and are told of the setup. How should you reason about how the coin landed? Should your answer change if you discover that you are in one of the first ten rooms?

The correct answer to this question is still disputed to this day. One position is that upon waking up you have learned nothing, therefore you can only be 50% sure the coin landed on heads. After learning you are one of the first ten people, you ought to update to being 99% sure the coin landed on heads, because you would certainly be one of the first ten people if the coin landed on heads but would have only a 1% chance of being among them if it landed on tails. This approach follows the self-sampling assumption (SSA).
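The SSA update described here is a straightforward Bayes calculation; a sketch for concreteness (the article goes on to dispute its validity, not its arithmetic):

```python
# SSA update after learning you are among the first ten people:
# P(first ten | heads) = 1, P(first ten | tails) = 10/1000 = 0.01.
p_heads = 0.5
likelihood_heads = 1.0        # certainly among the first ten if heads
likelihood_tails = 10 / 1000  # a 1% chance if tails
posterior_heads = (p_heads * likelihood_heads) / (
    p_heads * likelihood_heads + (1 - p_heads) * likelihood_tails)
print(round(posterior_heads, 4))  # 0.9901
```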

This answer initially reasons from a first-person perspective. Since, from a first-person perspective, finding that you exist is a guaranteed observation, it offers no information; you can only say the coin landed with even chances upon awakening. The mistake happens when the probability is updated after learning you are one of the first ten people. Belonging to a group which would always be created means your chance of existence is one. As discussed above, this new information is only relevant to third-person reasoning; it cannot be used to update the probability from the first-person perspective. From a first-person perspective, since you are in one of the first ten rooms and know nothing outside this room, you have no evidence about the total number of people. This means you still have to conclude the coin landed with even chances.

Another approach to the question is that you should be 99% sure the coin landed on tails upon waking up, since you would have a much higher chance of being created if more people were created. And once you learn you are in one of the first ten rooms, you should only be 50% sure the coin landed on heads. This approach follows the self-indication assumption (SIA).

This answer treats your creation as new information, which implies your existence was not guaranteed but a matter of chance. That means it is reasoning from a third-person perspective. However, your own identity is not inherent from this perspective. Therefore it is incorrect to say that a particular individual, or "I", was created; it is only possible to say that an unidentified individual, or "someone", was created. Again, after learning you are one of the first ten people, it is only possible to say that "someone" from the first ten rooms was created. Since neither of these is new information, the probability of heads should remain at 50%.

It does not matter whether one chooses to reason from the first- or the third-person perspective; if done correctly the conclusions are the same: the probability of heads remains at 50% after waking up, and remains at 50% after learning you are in one of the first ten rooms. This is summarized in Figure 1.
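The third-person claim can be checked by brute-force frequency counting: over many repetitions of God's setup, "someone occupies one of the first ten rooms" is true in every run, so conditioning on it cannot move the coin away from even odds. A minimal sketch:

```python
import random

random.seed(0)
outcomes = []
for _ in range(100_000):
    heads = random.random() < 0.5
    n_people = 10 if heads else 1000
    # From a third-person view, all we can say is that "someone"
    # occupies one of the first ten rooms -- true under either outcome.
    someone_in_first_ten = n_people >= 10
    if someone_in_first_ten:
        outcomes.append(heads)

freq = sum(outcomes) / len(outcomes)
print(round(freq, 2))  # close to 0.50
```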

Figure 1. Summary of Perspective Reasonings for God’s Coin Toss

The two traditional views wrongly use both inherent self-identification and information about one's chance of existence. This means they switch perspective somewhere while answering the question. For the self-sampling assumption (SSA) view, the switch happens upon learning you are one of the first ten people. For the self-indication assumption (SIA) view, the switch happens after your self-identification immediately following the wake-up. Because of these changes of perspective, both methods require defining oneself from a third-person perspective. Since your identity is in fact undefined from the third-person perspective, both assumptions have to make up a generic selection process. As a result, SSA states that an observer shall reason as if she were randomly selected from all existent observers, while SIA states that an observer shall reason as if she were randomly selected from all potential observers. These methods are arbitrary and unimaginative. Neither selection is real, and even if one actually took place it seems incredibly egocentric to assume you would be the chosen one. However, they are necessary compromises for the traditional views.

One related question worth mentioning: after waking up one might ask "what is the probability that I am one of the first ten people?". As before, the answer is still up for debate, since SIA and SSA give different numbers. However, based on perspective reasoning, this probability is actually undefined. In that question "I" – an inherently self-identified observer – is defined from the first-person perspective, whereas "one of the first ten people" – a group based on people's chance of existence – is only relevant from the third-person perspective. Because of this switch of perspective within the question, it is unanswerable. To make the question meaningful, either change the group to something relevant from the first-person perspective, or change the individual to someone identifiable from the third-person perspective. Traditional approaches such as SSA and SIA do the latter by defining "I" in the third person. As mentioned before, this definition is entirely arbitrary. Effectively, SSA and SIA are trying to solve two different modified versions of the question. While both calculations are correct under their respective assumptions, neither gives the answer to the original question.

A counterargument would be that an observer can identify herself in the third person by using some details irrelevant to the coin toss. For example, after waking up in the room you might find you have brown eyes, the room is a bit cold, the dust in the air has a certain pattern, etc. You can define yourself by these characteristics. Then it can be said, from a third-person perspective, that a person with such characteristics is more likely to exist if more people were created. This approach follows full non-indexical conditioning (FNC), first formulated by Radford M. Neal in 2006. In my opinion the most perspicuous use of the idea is Michael Titelbaum's technicolor beauty example, with which he argued for the thirder position in the sleeping beauty problem. Therefore I will present my counterargument while discussing the sleeping beauty problem.


The Sleeping Beauty Problem

You are going to take part in the following experiment. A scientist is going to put you to sleep. During the experiment you will be briefly woken up either once or twice depending on the result of a random coin toss: if the coin lands on heads you will be woken up once, if tails, twice. After each awakening your memory of it will be erased. Now suppose you are awakened during the experiment. How confident should you be that the coin landed on heads? How should you change your mind after learning this is the first awakening?

The sleeping beauty problem has been vigorously debated since 2000, when Adam Elga brought it to attention. Following the self-indication assumption (SIA), one camp thinks the probability of heads should be 1/3 at wake-up and 1/2 after learning it is the first awakening. On the other hand, supporters of the self-sampling assumption (SSA) think the probability of heads should be 1/2 at wake-up and 2/3 after learning it is the first awakening.
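The two camps' numbers track two different relative frequencies, which a quick simulation makes concrete (a sketch only; which frequency should count as "your credence" is precisely what is in dispute):

```python
import random

random.seed(1)
trials = 100_000
heads_tosses = 0
heads_awakenings = 0
total_awakenings = 0
for _ in range(trials):
    heads = random.random() < 0.5
    heads_tosses += heads
    wakes = 1 if heads else 2       # heads: one awakening; tails: two
    total_awakenings += wakes
    heads_awakenings += 1 if heads else 0

per_toss = heads_tosses / trials                     # frequency halfers track
per_awakening = heads_awakenings / total_awakenings  # frequency thirders track
print(round(per_toss, 2), round(per_awakening, 2))
```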

Astute readers might already see the parallel between the sleeping beauty problem and the God's Coin Toss problem. Indeed the cause of debate is exactly the same. If we apply perspective reasoning we get the same result: your probability should be 1/2 after waking up and remain at 1/2 after learning it is the first awakening. From the first-person perspective you can inherently identify the current awakening as distinct from the (possible) other one, but you cannot contemplate what happens if this awakening does not exist. From the third-person perspective you can imagine what happens if you are not awake, but you cannot justifiably identify this particular awakening. Therefore no matter which perspective you choose to reason from, the results are the same, i.e. the double halfers are correct.

However, Titelbaum (2008) used the technicolor beauty example to argue for the thirder position. Suppose there are two pieces of paper, one blue and the other red. Before your first awakening the researcher randomly chooses one of them and sticks it on the wall; you will be able to see the paper's color when awake. After you fall back asleep he switches the paper, so if you wake up again you will see the opposite color. Now suppose after waking up you see a piece of blue paper on the wall. You shall reason "there exists a blue awakening", which is more likely if the coin landed on tails. A Bayesian update based on this information gives a probability of heads of 1/3. If after waking up you see a piece of red paper you reach the same conclusion by symmetry. Since it is absurd to propose that technicolor beauty is fundamentally different from the sleeping beauty problem, they must have the same answer, i.e. the thirders are correct.
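Titelbaum's update can be sketched explicitly: under heads the single awakening shows blue only if blue happened to be chosen first, while under tails both colors are seen at some point.

```python
# Bayes update on "a blue awakening exists":
# P(blue awakening exists | heads) = 1/2, P(blue awakening exists | tails) = 1.
p_heads = 0.5
like_heads = 0.5
like_tails = 1.0
posterior = (p_heads * like_heads) / (
    p_heads * like_heads + (1 - p_heads) * like_tails)
print(posterior)  # 0.333..., the thirder answer
```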

Technicolor beauty effectively identifies your current awakening from a third-person perspective by using a piece of information irrelevant to the coin toss. I propose that the use of irrelevant information is only justified if it affects the learning of relevant information. In most cases this means the identification must be done before an observation is made. The color of the paper, or any detail you experience after waking up, does not satisfy this requirement and thus cannot be used. This is best illustrated by an example.

Imagine you are visiting an island with a strange custom. Every family writes its number of children on the door. All children stay at home after sunset, and only boys are allowed to answer the door after dark. One night you knock on the door of a family with two children. Suppose a boy answers. What is the probability that both children of the family are boys? After talking to the boy you learn he was born on a Thursday. Should you change the probability?

A family with two children is equally likely to have two boys, two girls, a boy then a girl, or a girl then a boy. Seeing a boy eliminates the possibility of two girls; among the remaining cases, two boys has a probability of 1/3. If you knock on the doors of 1000 families with two children, about 750 will have a boy answering, of which about 250 will have two boys, consistent with the 1/3 answer. Applying the same logic as technicolor beauty, after talking to the boy you would identify him specifically as "a boy born on Thursday" and reason "the family has a boy born on Thursday". This statement is more likely to be true if both children are boys. Without getting into the details of the calculation, a Bayesian update on this information gives a probability of two boys of 13/27. Furthermore, it does not matter which day he was actually born on: if the boy was born on, say, a Monday, we get the same answer by symmetry.
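The 13/27 figure can be recovered by enumerating the 14 equally likely (sex, weekday) types per child (a sketch; labeling Thursday as day 3 is an arbitrary convention):

```python
from fractions import Fraction
from itertools import product

THURSDAY = 3  # any fixed day gives the same count
child_types = list(product("BG", range(7)))          # 14 types per child
families = list(product(child_types, child_types))   # 196 ordered pairs

# Families with at least one boy born on Thursday: 196 - 13*13 = 27.
has_thu_boy = [f for f in families
               if any(s == "B" and d == THURSDAY for s, d in f)]
# Of those, families where both children are boys: 49 - 6*6 = 13.
both_boys = [f for f in has_thu_boy if all(s == "B" for s, _ in f)]
print(Fraction(len(both_boys), len(has_thu_boy)))  # 13/27
```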

This reasoning is obviously wrong and the answer should remain at 1/3. This can be checked by repeating the experiment with many families with two children; due to their length the calculations are omitted here, but interested readers are encouraged to check. 13/27 would be correct if the island's custom were "only boys born on Thursday can answer the door". In that case being born on a Thursday is a characteristic specified before your observation: it actually affects your chance of learning the relevant information about whether a boy exists. Only then can you justifiably identify whoever answers the door as "a boy born on Thursday" and reason "the family has a boy born on Thursday". Since seeing the blue piece of paper happens after you wake up and does not affect your chance of awakening, it cannot be used to identify you from a third-person perspective, just as being born on Thursday cannot be used to identify the boy in the original case.
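The omitted check can be run by simulating the island custom directly: learning the answering boy's birthday after the fact does not move the probability away from 1/3, because his day is independent of the family's composition. A sketch:

```python
import random

random.seed(2)
THURSDAY = 3
boy_answers = two_boys = 0
thu_answers = two_boys_thu = 0

for _ in range(200_000):
    children = [(random.choice("BG"), random.randrange(7)) for _ in range(2)]
    boys = [c for c in children if c[0] == "B"]
    if not boys:
        continue                    # no boy: nobody answers after dark
    answerer = random.choice(boys)  # the custom: only boys answer
    boy_answers += 1
    both = (len(boys) == 2)
    two_boys += both
    if answerer[1] == THURSDAY:     # we merely observe his birthday
        thu_answers += 1
        two_boys_thu += both

print(round(two_boys / boy_answers, 2))      # about 1/3
print(round(two_boys_thu / thu_answers, 2))  # also about 1/3, not 13/27
```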

On a related note, for the same reason, using irrelevant information to identify you from a third-person perspective is justified in conventional probability problems: there the identification happens before the observation, and the information learned varies depending on which person is specified. That is why in general we can arbitrarily switch perspectives without changing the answer.

38 comments


comment by turchin · 2017-09-16T20:03:28.727Z · LW(p) · GW(p)

I think that most discussions about Doomsday argument are biased in the way that author tries to disprove it.

Also, it looks like that in the multiverse all possible observers exist, so the mere fact of my existence is non-informational. However, I could ask if some of my properties are random or not, and could they be used for some predictions.

For example, my birthday month seems to be random. And if I know my birthday month, but don't know how many months are in the year, I could estimate that there are approximately 2 times my birthday month's rank. It works.
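The month estimator can be sanity-checked numerically (a sketch with months ranked 1 to 12): doubling a uniformly drawn rank reaches or exceeds the true total a bit over half the time, which is the sense in which it "works".

```python
import random

random.seed(4)
MONTHS = 12
trials = 100_000
covered = 0
for _ in range(trials):
    my_month = random.randint(1, MONTHS)  # my rank is uniform over 1..12
    estimate = 2 * my_month               # Gott-style doubling estimate
    covered += (estimate >= MONTHS)

print(round(covered / trials, 2))  # about 0.58
```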

The problem appears when I apply the same logic to the future of human civilization, as I don't like the result.

comment by entirelyuseless · 2017-09-17T19:23:50.671Z · LW(p) · GW(p)

"I think that most discussions about Doomsday argument are biased in the way that author tries to disprove it."

This article is a good example: talking about "solutions" to an argument implies that you started out from the beginning with the desire to prove it was false, without first considering whether it was likely to be true or not.

comment by Yosarian2 · 2017-09-17T20:14:24.512Z · LW(p) · GW(p)

I think the argument probably is false, because arguments of the same type can be used to "prove" a lot of other things that also clearly seem to be false. When you take that kind of anthropic reasoning to its natural conclusion, you reach a lot of really bizarre places that don't seem to make sense.

In math, it's common for a proof to be disputed by demonstrating that the same form of proof can be used to show something that seems to be clearly false, even if you can't find the exact step where the proof went wrong, and I think the same is true about the doomsday argument.

comment by turchin · 2017-09-18T12:42:59.045Z · LW(p) · GW(p)

I think the opposite: the Doomsday argument (in one form of it) is an effective predictor in many common situations, and thus it could also be applied to the duration of human civilization. DA is not absurd: our expectations about the human future are absurd.

For example, I could predict median human life expectancy based on my supposedly random age. My age is several decades, and human life expectancy is 2 × (several decades) with 50 percent probability (and it is true).

comment by Yosarian2 · 2017-09-18T20:57:32.962Z · LW(p) · GW(p)

Let me give a concrete example.

If you take seriously the kind of anthropic probabilistic reasoning that leads to the doomsday argument, then it also invalidates the same argument, because we probably aren't living in the real universe at all, we're probably living in a simulation. Except you're probably not living in a simulation because we're probably living in a short period of time of quantum randomness that appears long after the universe ends which recreates you for a fraction of a second through random chance and then takes you apart again. There should be a vast number of those events that happen for every real universe and even a vast number of those events for every simulated universe, so you probably are in one of those quantum events right now and only think that you existed when you started reading this sentence.

And that's only a small part of the kind of weirdness these arguments create. You can even get opposite conclusions from one of these arguments just by tweaking exactly what reference class you put things in. For example, "I should be roughly the average human" gives you an entirely different doomsday answer than "I should be roughly the average life form", which gives you an entirely different answer than "I should be roughly the average life form that has some kind of thought process". And there's no clear way to pick a category; some intuitively feel more convincing than others but there's no real way to determine that.

Basically, I would take the doomsday argument (and the simulation argument, for that matter) a lot more seriously if anthropic probability arguments of that type didn't lead to a lot of other conclusions that seem much less plausible, or in some cases seem to be just incoherent. Plus, we don't have a good way to deal with what's known as "the measurement problem" if we are trying to use anthropic probability in an infinite multiverse, which throws a further wrench into the gears.

A theory which fits most of what we know but gives one or a few weird results that we can test is interesting. A theory that gives a whole mess of weird and often conflicting results, many of which would make the scientific method itself a meaningless joke if true, and almost none of which are testable, is probably flawed somewhere, even if it's not clear to us quite where.

comment by turchin · 2017-09-18T21:28:49.339Z · LW(p) · GW(p)

It is not a bug, it is a feature :) Quantum mechanics is also very counterintuitive, creates strange paradoxes etc, but that doesn't make it false.

I think that DA and the simulation argument are both true, as they support each other. Adding Boltzmann brains is more complicated, but I don't see a problem with being a BB, as there is a way to create a coherent world picture using only BBs and paths in the space of possible minds, but I will not elaborate here as I can't do it shortly. :)

As I said above, there is no need to tweak the reference classes to which I belong, as there is only one natural class. However, if we take different classes, we get predictions for different events: for example, the class of humans will go extinct soon, but the class of animals could exist for a billion more years, and that is quite a possible outcome: humans go extinct, but animals survive. There is nothing mysterious about reference classes, just different answers for different questions.

The measure is the real problem, I think so.

The theory of DA is testable if we apply it to many smaller examples, as Gott successfully did in predicting the lengths of Broadway shows.

So the theory is testable, no more weird than other theories we use, and there is no contradiction between doomsday argument and simulation argument (they both mean that there are many past simulations which will be turned off soon). However, it still could be false or have one more turn, which will make things even weirder, like if we try to account for mathematically possible observers or multilevel simulations or Boltzmann AIs.

comment by Yosarian2 · 2017-09-19T02:00:04.752Z · LW(p) · GW(p)

Quantum mechanics is also very counterintuitive, creates strange paradoxes etc, but it doesn' make it false.

Sure, and if we had anything like the amount of evidence for anthropic probability theories that we have for quantum theory, I'd be glad to go along with it. But short of a lot of evidence, you should be more skeptical of theories that imply all kinds of improbable results.

As I said above, there is no need to tweak reference classes to which I belong, as there is only one natural class.

I don't see that at all. Why not classify yourself as "part of an intelligent species that has nuclear weapons or otherwise poses an existential threat to itself"? That seems like just as reasonable a classification as any (especially if we're talking about "doomsday"), but it gives a very different (worse) result. Or, I donno, "part of an intelligent species that has built an AI capable of winning at Go?" Then we only have a couple more months. ;)

It also seems weird to just assume that somehow today is a normal day in human existence, no more or less special than any day some random hunter-gatherer wandered the plains. If you have some a priori reason to think the present is unusual, you should probably look at that instead of vague anthropic arguments; if you just found out you have cancer and your house is on fire while someone is shooting at you, it probably doesn't make sense to just ignore all that and assume you're halfway through your lifespan. Or if you were just born 5 minutes ago, and seem to be in a completely different state than anything you've ever experienced. And we're at a very unique point here in the history of our species, right on the verge of various existential threats and at the same time right on the verge of developing spaceflight and the kind of AI technology that would likely ensure our descendants may persist for billions of years. Isn't it more useful to look at that instead of just assuming that today is just another day in humanity's life like any other?

I mean, it seems likely that we're already waaaaaay out on the probability curve here in one way or another, if the Great Silence of the universe is any guide. There can't have been many intelligent species who got to where we are in the history of our galaxy, or I think the galaxy would look very different.

comment by turchin · 2017-09-19T09:29:31.788Z · LW(p) · GW(p)

I am a member of the class of beings able to think about the Doomsday argument, and it is the only correct reference class. And for this class, my day is very typical: I live in an advanced civilization interested in such things and started discussing the problem of DA in the morning.

I can't say that I am randomly chosen from hunter-gatherers, as they were not able to think about DA. However, I could observe some independent events (if they are independent of my existence) at a random moment of their existence and thus predict their duration. It will not help to predict the duration of existence of hunter-gatherers, as it is not truly independent of my existence. But it could help in other cases.

20 minutes ago I participated in a shooting in my house – but it was just a night dream, and it supports the simulation argument, which basically claims that most events I observe are unreal, as their simulation is cheaper. During my life I have participated in hundreds of shootings in dreams, games and movies, but never in a real one: simulated events are much more common.

Thus DA and SA are not too bizarre; they become bizarre because of incorrect solutions to the reference class problem.

The strangeness of DA appears when we compare it with some unrealistic expectations about our future: that there will be billions of years full of billions of people living in a human-like civilization. More probable is that in several decades AI will appear, which will run many past simulations (and probably kill most humans). That is exactly what we would expect from observed technological progress, and DA and SA just confirm the observed trends.

comment by Yosarian2 · 2017-09-22T13:38:51.170Z · LW(p) · GW(p)

If you're in a simulation, the only reference class that matters is "how long has the simulation been running for". And most likely, for anyone running billions of simulations, the large majority of them are short, only a few minutes or hours. Maybe you could run a simulation that lasts as long as the universe does in subjective time, but most likely there would be far more short simulations.

Basically, I don't think you can use the doomsday argument at all if you're in a simulation, unless you know how long the simulation's been running, which you can't know. You can accept either SA or DA, but you can't use both of them at the same time.

comment by turchin · 2017-09-22T20:53:09.159Z · LW(p) · GW(p)

I agree that in a simulation one could have fake memories of the past of the simulation. But I don't see a practical reason to run few-minutes simulations (unless of a very important event) – a Fermi-solving simulation must run from the beginning of the 20th century until the civilization ends. Game simulations will also probably be life-long. Even resurrection simulations should be lifelong. So I think the typical simulation length is around one human life. (One exception I could imagine: intense respawning around some problematic moment. In that case there will be many respawnings around a possible death event, but the consequences of this idea are worrisome.)

If we apply DA to the simulation, we should probably count false memories as real memories, because the length of false memories is also random, and there is no actual difference between precalculating false memories and actually running a simulation. However, the termination of the simulation is real.

comment by Yosarian2 · 2017-09-22T21:19:55.713Z · LW(p) · GW(p)

But I don't see a practical reason to run few minutes simulations

The main explanation that I've seen for why an advanced AI might run a lot of simulations is to better predict how humans would react in different situations (perhaps to learn to better manipulate humans, or to understand the human value system, or maybe to achieve whatever theoretically pro-human goal was set in the AI's utility function, etc). If so, then it would likely run a very large number of very short simulations, designed to put uploaded minds in very specific, clearly designed unusual situations, and then end the simulation shortly afterwards. If that were the goal it would likely run a very large number of iterations of the same scenario, each time varying the details ever so slightly, in order to find out exactly what makes us tick. For example, instead of philosophizing about the trolley problem, it might just put a million different humans into that situation and see how each of them reacts, and then iterate the situation ten thousand times with slight variations each time to see which variables change how humans react.

If an AI does both (both short small-scale simulations and long universe-length simulations), then the number of short simulations would massively outnumber the number of long simulations, you could run quadrillions of them for the same resources as it takes to actually simulate an entire universe.

comment by turchin · 2017-09-22T21:35:20.987Z · LW(p) · GW(p)

Sounds convincing. I will think about it.

Did you see my map of the simulation argument by the way? http://lesswrong.com/lw/mv0/simulations_map_what_is_the_most_probable_type_of/

comment by Yosarian2 · 2017-09-22T21:49:59.658Z · LW(p) · GW(p)

Yeah, I saw that. In fact looking back on that comment thread, it looks like we had almost the exact same debate there, heh, where I said that I didn't think the simulation hypothesis was impossible but that I didn't see the anthropic argument for it as convincing for several reasons.

comment by turchin · 2017-09-23T05:44:53.199Z · LW(p) · GW(p)

Probably I also said it before, but SA is in fact a comparison of prices. It basically says that cheaper things are more common, and fakes are cheaper than real things. That is why we more often see images of a nuclear blast than a real one.

And yes, there are many short simulations in our world, like dreams, thoughts, clips, pictures.

comment by entirelyuseless · 2017-09-23T16:17:08.905Z · LW(p) · GW(p)

The thing is that this requires you to define what "fake" and "real" are. In practice those are relative terms that refer to something cheaper and something more expensive in your world. So saying "maybe I'm a Boltzmann brain" or "maybe I'm in a simulation" has the problem that you are trying to compare the world you know to a potentially more expensive world and saying "maybe my world is cheaper than it seems." But since you haven't experienced a more expensive version than the real world, you don't even know what that would mean. Of course it is always possible, and even likely, that something is cheaper than it appears (even the real world), but it seems silly to describe that by saying "the real world is a fake world." The words "the real world" refer to the only world you know, even if it is quite likely that that world is cheaper than it seems.

In other words, it is likely that the world is cheap; it is meaningless to say the world is fake.

comment by turchin · 2017-09-23T19:40:03.629Z · LW(p) · GW(p)

We could explain it in terms of observations. A fake observation is a situation where you experience something that does not actually exist. For example, you watch a video of a volcanic eruption on YouTube. It is computationally cheaper to create a copy of a video of a volcanic eruption than to actually create a volcano – and because of this, we see pictures of volcanic eruptions more often than actual ones.

It is not meaningless to say that the world is fake if only the observable surfaces of things are calculated, as in a computer game, which is computationally cheaper.

Replies from: entirelyuseless
comment by entirelyuseless · 2017-09-24T16:17:32.076Z · LW(p) · GW(p)

There can be a fake video of a volcanic eruption, because the video is a picture without the normal physical mechanism that causes such images. In other words, it only has the observable surface without the regular interior.

But it is not meaningful to say, "The whole world we know is fake." Because for that to be true, the world has to be missing a regular interior. But the regular interior, say, of a volcanic eruption is the interior that volcanic eruptions normally have in fact, whatever that is; so by definition the interior is there. In other words you need to experience the version you call real in order to call another version fake. It might be that there is more stuff that you do not know about, but calling the world fake is not a good way to say this.

Instead, you should just say that there is more stuff in reality than you know about. There is no need to call the stuff you do know fake.

Replies from: turchin
comment by turchin · 2017-09-24T21:31:41.036Z · LW(p) · GW(p)

I meant that in a simulation most of the effort goes to calculating only the visible surfaces of things. Internal details which do not affect the visible surface may be ignored, so the computation will be much cheaper than an atom-precise simulation. For example, all the internal structure of the Earth deeper than 100 km (and probably much less) may be ignored to get a very realistic simulation of observing a volcanic eruption.

Replies from: entirelyuseless
comment by entirelyuseless · 2017-09-25T15:05:52.877Z · LW(p) · GW(p)

We decide how much structure is needed to count as real by looking at how much structure is actually there. If volcanic eruptions have only 10 miles of structure, then only 10 miles of structure is needed for an eruption to be real.

This is perfectly obvious. How much structure is needed for a chair to count as a real chair? You decide that by looking at chairs and figuring out how much structure they actually have. You do not have some a priori idea of how much structure a chair needs, so that you can say that a chair is fake if it doesn't have that structure. You first check how much structure normal chairs have; then if other things look like chairs but don't have that structure, you can say they are fake.

In the same way, if normal eruptions have 10 miles of structure, but you find one that has not even 1 mile (e.g. a video), you can say it is fake. But you cannot say the one with 10 miles is fake because it doesn't have 100 miles, when you have never even seen one with 100 miles.

Replies from: turchin
comment by turchin · 2017-09-25T18:18:20.660Z · LW(p) · GW(p)

It looks like the word "fake" is not quite right here. Let's say illusion. If one creates a movie about a volcanic eruption, he has to model only the ways it will appear to the expected observer. This is often done in cinema, where pure CGI is used to make a clip because it is cheaper than actually filming the real event.

Illusions are in most cases computationally cheaper than real processes and even than detailed models. Even if they film a real actress because that is cheaper than modelling her, the copying of her image creates many illusory observations of a human, when in fact each is only a TV screen.

Personally, I have lost track of the point you would like to prove. What is the main disagreement?

Replies from: entirelyuseless
comment by entirelyuseless · 2017-09-27T14:32:47.385Z · LW(p) · GW(p)

"What is the main disagreement?"

Whether the stuff that generates our experience can reasonably be described in terms that contrast it with real stuff. Illusion has the same problem as "fake." The word is relative: it means something like a real thing, which isn't actually a real thing. But basically real just means the normal stuff, and illusions and fake things mean things which are externally similar. But "the normal stuff" just refers to whatever is normal for us. So all of the stuff that seems normal to us, is real, and is not fake or illusory.

Replies from: turchin
comment by turchin · 2017-09-27T19:10:46.192Z · LW(p) · GW(p)

So, are night dreams illusions or real objects? I think that they are illusions: when I see a mountain in my dream, it is an illusion, and my "wet neural net" generates only an image of its surface. However, in the dream, I think that it is real. So dreams are a form of immersive simulation. And since they are computationally cheaper, I see strange things like tsunamis more often in dreams than in reality.

Replies from: entirelyuseless
comment by entirelyuseless · 2017-09-28T14:18:53.324Z · LW(p) · GW(p)

So, are the night dreams illusions or real objects? I think that they are illusions

I agree. But "they are illusions" only makes sense because they are illusions relative to the ones we see during the day, which are not illusions. In other words, as I said, fake or illusion is relative to real, so it only has meaning when you know about a real one.

In other words, if you lived all your life in a night dream and were never awake, the mountains in your dreams would not be illusions. They would be real. That does not mean they would be day mountains -- they would be something different. But when the dreaming you said "this is a mountain," the word "mountain" would refer to a dreamt mountain, not to a day one, since you would have never seen a day one and could not talk about them. So the dreaming you would say, "this is a real mountain," and that would be true. But other awake people would say, "he sees an illusion," and this would also be true. But that is because you and the awake people would be using "mountain" for different things. This is like what I said before about BBs.

Replies from: turchin
comment by turchin · 2017-09-28T15:16:14.720Z · LW(p) · GW(p)

I think there is one observable property of illusions which becomes possible exactly because they are comparatively cheap: miracles. We constantly see flying mountains in movies, in dreams, in pictures, but not in reality. If I have a lucid dream, I can recognise the difference between my idea of what a mountain is (a product of long-term geological history) and the fact that it has one peak one second and two peaks the next. This can make me doubt its consistency, and it often helps me attain lucidity in the dream.

So it is possible to learn about an illusion of something before encountering the real one, if there are some unexpected (and computationally cheap) glitches.

Replies from: entirelyuseless
comment by entirelyuseless · 2017-09-30T02:12:27.314Z · LW(p) · GW(p)

And this is miracles.

"Miracles" doesn't have a sufficiently well defined meaning for this purpose. I think you mean that real things tend to have more stability and permanence, and illusions tend to have less. And I agree: real mountains tend to stay the same, while illusory mountains like ones you are dreaming tend to change rapidly.

But this is relative, as I was saying before. There are real mountains, but there are also real clouds, and real gusts of wind, even though clouds are less stable and permanent than mountains, and gusts of wind are less stable and permanent than clouds.

So if you lived all your life in a dream, the mountains you dreamed would be real. But as I said before, they would be "mountains" with a different meaning; as real things, they would be more like clouds in the real world.

Notice that if mountains in the real world suddenly multiplied or changed in a "miraculous" way, I would never conclude that the mountains were not real; I might conclude that there are other principles at work that I did not know about. Including that real mountains might have a relationship to something else that is similar to the relationship of an illusion to something real; but not that the mountains were not real.

Replies from: turchin
comment by turchin · 2017-09-30T10:09:22.619Z · LW(p) · GW(p)

If I see a mountain start to move, there will be a conflict between what I think mountains are - geological formations - and my observations, and I will have to update my world model. One way to do so is to conclude that it is not a real geological mountain but something which pretended to be one (or was mistakenly observed as one); after it starts to move, it becomes clear that it was just an illusion. Maybe it was a large tree, or a video projection on a wall.

Replies from: entirelyuseless
comment by entirelyuseless · 2017-09-30T15:12:27.654Z · LW(p) · GW(p)

Sure. But then you will be relating the pretend mountain, to other mountains, which are still real ones. If all mountains start to move, you will not be able to do that. You will have to say, "Real mountains could not move before, but now they can."

Replies from: turchin
comment by turchin · 2017-09-30T20:02:58.385Z · LW(p) · GW(p)

In fact, if I see something like "all mountains start to move", I will probably do a reality check to see whether I am in a dream. I refer here to techniques for reaching lucid dreams, which I know and often practice. Humans are unique in that they can have completely immersive dream illusions and yet recognise them as dreams without waking up.

But I take your point: the definition of reality depends on the type of reality one is living in.

comment by Yosarian2 · 2017-09-23T17:47:53.592Z · LW(p) · GW(p)

It seems weird to place a "price" on something like the Big Bang and the universe. For all we know, in some state of chaos or quantum uncertainty, the odds of something like a Big Bang happening eventually approaches 100%, which makes it basically "free" by some definition of the term. Especially if something like the Big Bang and the universe happens an infinite number of times, either sequentially or simultaneously.

Again, we don't know that that's true, but we don't know it's not true either.

Replies from: turchin
comment by turchin · 2017-09-23T19:33:01.578Z · LW(p) · GW(p)

Maybe it is more correct to speak of the price of the observation. It is cheaper to see a volcanic eruption on YouTube than in reality.

Replies from: Yosarian2
comment by Yosarian2 · 2017-09-23T20:04:24.932Z · LW(p) · GW(p)

I guess, but it's cheaper to observe the sky in reality than it is on YouTube. To observe the sky, you just have to look out the window; turning on your computer costs energy and such.

So in order for this to be coherent, I think you have to somehow make the case that our reality is to some extent rare or unlikely or expensive, and I'm not sure how you can do that without knowing more about the creation of the universe than we do, or how "common" the creation of universes is over...some scale (I'm not even sure what scale you would use; over infinite periods of time? Over a multiverse? Does the question even make sense?)

Replies from: turchin
comment by turchin · 2017-09-24T09:29:11.216Z · LW(p) · GW(p)

In that case, I use just the same logic as Bostrom: each real civilization creates zillions of copies of some experiences. This has already happened in the form of dreams, movies and pictures.

Thus I normalize by the number of existing civilizations and avoid obscure questions about the nature of the universe or the price of the Big Bang. I just assume that within a civilization rare experiences are often faked. They are rare because they are in some way expensive to create, like diamonds or volcano observations, but their copies are cheap, like glass or pictures.

comment by Xianda_GAO_duplicate0.5321505782395719 · 2017-09-18T18:37:32.858Z · LW(p) · GW(p)

The doomsday argument is controversial not because its conclusion is bleak but because it has some pretty hard-to-explain implications: the choice of reference class is arbitrary yet affects the conclusion, and the argument also grants some unreasonable predictive power and backward causation. Anyone trying to understand it eventually has to either reject the argument or find some way to reconcile themselves with these implications. To me neither position is biased as long as it is sufficiently argued.

Replies from: turchin
comment by turchin · 2017-09-18T19:41:34.342Z · LW(p) · GW(p)

I don't see the problems with the reference class, as I use the following conjecture: "Each reference class has its own end", and also the idea of a "natural reference class" (similar to "the same computational process" in TDT): "I am randomly selected from all who think about the Doomsday argument". The natural reference class gives the saddest predictions, as the number of people who know about DA has been growing since 1983, which implies the end soon, maybe in a couple of decades.

The predictive power here is probabilistic and does not differ much from other probabilistic predictions we could make.

Backward causation is the most difficult part here, but I cannot currently imagine any practical example of it in our world.

PS: I think it is clear what I mean by "Each reference class has its own end", but some examples may be useful. Suppose I have rank 1000 among all who know about DA, but rank 90 billion among all humans. In the first case, DA claims that there will be around 1000 more people who know about DA; in the second, that there will be around 90 billion more humans. These claims do not contradict each other, as they are probabilistic assessments with very wide margins. Both predictions mean extinction in the next decades or centuries. That is, changing the reference class does not change DA's final conclusion that extinction is soon.
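[Editor's note: the consistency claim above can be checked numerically. The Gott-style "doubling" rule says that, for an observer whose rank is uniformly random within a reference class, "the class contains at most twice my rank" holds with probability about 1/2, regardless of the class size. A minimal Python sketch, using the two class sizes from the example above:]

```python
import random

def doubling_rule_rate(total, trials=100_000):
    """Fraction of uniformly random ranks r in 1..total for which
    the Gott-style estimate "total is at most 2*r" comes out true."""
    hits = sum(total <= 2 * random.randint(1, total) for _ in range(trials))
    return hits / trials

# The rate is ~0.5 regardless of which reference class is chosen:
small = doubling_rule_rate(1_000)           # people who know about DA
large = doubling_rule_rate(90_000_000_000)  # all humans ever born
```

Both rates come out near 1/2, which is the sense in which the two reference classes give non-contradictory probabilistic assessments.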

comment by entirelyuseless · 2017-09-18T14:03:52.903Z · LW(p) · GW(p)

Exactly. My current age is almost exactly halfway through a normal human lifetime, not a millionth of the way through or 99.9% of the way through.

Replies from: turchin
comment by turchin · 2017-09-18T15:04:21.915Z · LW(p) · GW(p)

However, if we look at the Doomsday argument and the Simulation argument together, they support each other: most observers will exist in past simulations of something like 20th-21st century tech civilizations.

It also implies either some form of simulation termination soon or - and this is our chance - the unification of all observers into just one observer, that is, the unification of all minds into one superintelligent mind.

But the question - if most minds in the universe are superintelligences, why am I not a superintelligence? - still exists :(

comment by Xianda_GAO_duplicate0.5321505782395719 · 2017-09-17T02:12:31.310Z · LW(p) · GW(p)

The post specifically explained why your properties cannot be used for predictions in the context of doomsday argument and sleeping beauty problem. I would like to know your thoughts on that.

Replies from: turchin
comment by turchin · 2017-09-18T12:47:36.214Z · LW(p) · GW(p)

I can't easily find the flaw in your logic, but I don't agree with your conclusion because the randomness of my properties could be used for predictions.

For example, I could predict the median human life expectancy based on my (supposedly random) current age. My age is several decades, and human life expectancy is 2 × (several decades) with 50 percent probability (and this turns out to be true).

I could suggest many examples where the randomness of my properties can be used to make predictions - even measuring the size of the Earth based on my random distance from the equator. And in all the cases I have checked, the DA-style logic works.
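[Editor's note: the age/life-expectancy example above can be simulated directly. If the moment of observation is uniformly random within a lifespan, the DA-style estimate "total lifespan is at most twice my current age" is correct about half the time. A minimal sketch; the 80-year figure is illustrative:]

```python
import random

def estimate_coverage(lifespan=80.0, trials=100_000):
    """Fraction of uniformly random observation ages at which the
    DA-style estimate "lifespan is at most twice my age" is correct."""
    hits = sum(lifespan <= 2 * random.uniform(0, lifespan)
               for _ in range(trials))
    return hits / trials

rate = estimate_coverage()  # comes out close to 0.5
```

The same 1/2 coverage holds for the distance-from-the-equator example, since only the uniformity of the sampled position matters, not what is being measured.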