Are you Living in a Me-Simulation?

post by Michaël Trazzi (mtrazzi) · 2018-05-03T22:02:03.967Z · score: 6 (5 votes) · LW · GW · 8 comments

Contents

  The Simulation Argument
  Incentive and Probabilities
    Ancestor Simulations
    Physicalism and Consciousness
  Me-Simulations
    Virtual Reality and Complexity
    Efficiency
    Purpose
  Wrapping it up
8 comments

In Yesterday's Post [LW · GW] I tried to explain why I thought I was living in a computer simulation, and in particular a first-person simulation where I would be the only full-time-conscious being.

Unfortunately, I wanted to explain too many things at the same time, without taking the time to give precise arguments for any of them. I ended up spending only one sentence on why I believe I am living in such a simulation, and did not even try to give a precise estimate of the probability of being in one.

Luckily, Ikaxas [LW · GW] took the time to understand what the premises were, and gave precise counter-arguments. This forced me to be much more precise, and I updated my beliefs about why I think I am living in a first-person simulation. I am grateful he did so, and will be re-using arguments we exchanged in yesterday's discussion.

The Simulation Argument

In what follows, I will assume the reader is familiar with the simulation argument. I will be referring more precisely to this text: https://www.simulation-argument.com/simulation.html.

Ancestor-simulations are defined in the simulation argument as follows: "a single such computer could simulate the entire mental history of humankind (call this an ancestor-simulation) [...]."

The simulation argument estimates the probability of ancestor-simulations, not of first-person simulations. Therefore, I cannot infer the probability of a first-person simulation from the probability of being in case 3) of the simulation argument (in which we almost certainly live in a computer simulation), as I did in yesterday's post.

However, Bostrom mentions the case of what he calls selective simulations (Part VI., Paragraph 13), and in particular me-simulations:

In addition to ancestor-simulations, one may also consider the possibility of more selective simulations that include only a small group of humans or a single individual. The rest of humanity would then be zombies or “shadow-people” – humans simulated only at a level sufficient for the fully simulated people not to notice anything suspicious. It is not clear how much cheaper shadow-people would be to simulate than real people. It is not even obvious that it is possible for an entity to behave indistinguishably from a real human and yet lack conscious experience. Even if there are such selective simulations, you should not think that you are in one of them unless you think they are much more numerous than complete simulations. There would have to be about 100 billion times as many “me-simulations” (simulations of the life of only a single mind) as there are ancestor-simulations in order for most simulated persons to be in me-simulations.

In short, me-simulations are simulations containing only one conscious being, and one should only believe that one lives in a me-simulation if one thinks they are (at least) 100 billion times more numerous than ancestor-simulations.
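
To make the threshold explicit, here is the counting argument behind Bostrom's figure, restated as a back-of-the-envelope inequality (an ancestor-simulation contains roughly 10^11 minds, the entire mental history of humankind, while a me-simulation contains one):

```latex
% Condition for most simulated minds to live in me-simulations
% (each ancestor-simulation holds ~10^{11} minds; each me-simulation holds 1)
N_{\text{me}} \cdot 1 \;>\; N_{\text{ancestor}} \cdot 10^{11}
\quad\Longleftrightarrow\quad
\frac{N_{\text{me}}}{N_{\text{ancestor}}} \;>\; 10^{11}
```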

I briefly introduced the concept of first-person simulations at the beginning. I define a first-person simulation as a particular case of me-simulation, where only the perceived environment of the conscious being is rendered (as is the case in a first-person shooter).

I will try to explain why I think the probability of being in a me-simulation is comparable to that of being in an ancestor-simulation.

Incentive and Probabilities

In this part, I will suppose that we are not in case 1) of the simulation argument (where "the human species is very likely to go extinct before reaching a posthuman stage"), and try to understand what would motivate a posthuman civilization to run ancestor simulations and me-simulations.

Ancestor Simulations

In the next paragraphs, I will assume that the reader is familiar with the Fermi Paradox and the concept of Great Filters. If this is not the case, Kurzgesagt's introductory videos on both topics are a good starting point.

Now, my claim is that one of the following statements is true:

i) Posthumans will run ancestor-simulations because they do not observe other forms of (conscious) intelligence in the universe (Fermi Paradox), and will therefore try to understand whether there was indeed a Great Filter behind them.
ii) Posthumans will observe that other forms of conscious intelligence are abundant in the universe, and will have little interest in running ancestor-simulations.

The premise of this claim is that human-like civilizations able to invent computers/internet/spaceships are either extremely rare (Great Filter behind us), or they are not, in which case there exists a Great Filter ahead of us (because of the Fermi Paradox).

So posthumans will only have an interest in running ancestor-simulations if they believe their form of intelligence is extremely rare. In this case, they would want to run a large number of such simulations in order to obtain precise enough estimates of how likely each event in their past was (e.g. the Cambrian explosion), and to determine which events were extremely unlikely.
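
To illustrate the kind of estimation this would involve, here is a toy Monte Carlo sketch (all event names and probabilities below are invented placeholders, not anything from the post): run many simulated histories and estimate each candidate event's pass rate as the fraction of runs that get past it.

```python
# Toy Monte Carlo sketch of the estimation logic above (all events and numbers
# are invented placeholders): run many simulated histories and estimate how often
# each candidate event is passed; an event almost never passed is a Filter candidate.
import random

# Hypothetical per-event pass probabilities, unknown to the simulators a priori;
# in a real ancestor-simulation these would emerge from the physics, not be set here.
TRUE_PASS_PROB = {"abiogenesis": 0.5, "eukaryotes": 0.3, "cambrian_explosion": 0.01}

def run_one_history():
    """One simulated history: return the list of candidate events it gets past."""
    passed = []
    for event, p in TRUE_PASS_PROB.items():
        if random.random() < p:
            passed.append(event)
        else:
            break  # the history stalls at the first event it fails to pass
    return passed

def estimate_pass_rates(n_runs=100_000):
    counts = dict.fromkeys(TRUE_PASS_PROB, 0)
    for _ in range(n_runs):
        for event in run_one_history():
            counts[event] += 1
    return {event: count / n_runs for event, count in counts.items()}

print(estimate_pass_rates())  # the rarest step is the best Great Filter candidate
```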

As a consequence, we would be living in an ancestor-simulation if and only if all of the following statements are true:

1) There are no Great Filters between us and the posthumans running the simulation
2) There is at least one Great Filter behind us
3) The simulation passed this first Great Filter, and is now "playing" the digital-age phase (1990-2018)

Because of the Alignment problem, and other existential risks in general, I personally find statement 1) extremely unlikely (less than one in a thousand).

Now, assuming statement 2) is true, being in a simulation which has successfully passed at least one Great Filter and is now full of intelligent species capable of exploring space (in SpaceX I believe) is extremely unlikely. In fact, the ratio (number of simulations that passed a Great Filter F) / (number of simulations that arrive at the Great Filter F) is by definition small (smaller than 0.000001, given my intuition of what a Great Filter is).

Therefore, I believe that the probability of being in a 2018-Earth-like Ancestor-simulation consisting of 7 billion humans is extremely small (less than a one-in-a-billion chance).
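
Putting the two estimates above together, and assuming (roughly) that they are independent, the back-of-the-envelope multiplication goes as follows:

```latex
% Rough combination of the two estimates above (treated as independent)
P(\text{2018-like ancestor-simulation})
  \;\lesssim\;
  \underbrace{P(\text{statement 1})}_{<\,10^{-3}}
  \times
  \underbrace{P(\text{sim passes the Filter behind us})}_{<\,10^{-6}}
  \;<\; 10^{-9}
```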

Physicalism and Consciousness

Before going into more details with Me-Simulations, I need to make sure I am clear about how I see Ancestor-Simulations in contrast with Me-Simulations, and in particular how they differ in the consciousness of their inhabitants.

Physicalism essentially states that "everything is physical" and that there is nothing magical about consciousness. Hence, if posthumans want to simulate human civilizations with the Big Bang as a starting point, they must implement enough complexity in whatever makes the simulation work so that the humans in the simulation are complex enough to (for instance) build spaceships, and consciousness will naturally arise from this complexity.

I will not discuss here the probability of Physicalism being true in my model of reality. For this post, I assume Physicalism to be true in order to reason conveniently about the consciousness of simulated minds (I don't see how we could discuss consciousness in simulations under Dualism, for instance).

However, I will answer one issue Ikaxas raised in yesterday's post:

"[...] even in a first-person simulation, the people you were interacting with would be conscious as long as they were within your frame of awareness (otherwise the simulation couldn't be accurate), it's just that they would blink out of existence once they left your frame of awareness."

I claim that even though they would be conscious within the "frame of awareness", they would not be fully conscious. The reason is that if you give them sporadic consciousness, they would largely lack what constitutes a conscious human's subjective experience: identity and continuity of consciousness.

Me-Simulations

I have close to no personal empirical evidence about the existence of me-simulations. The closest thing to a first-person simulation (a particular case of me-simulation) in my personal experience is Virtual Reality (VR).

To estimate the probability of me-simulations, I will start by explaining why I am convinced that me-simulations can be made cost-efficient, and then discuss the uses I believe a posthuman civilization could have for them.

Virtual Reality and Complexity

I grew up playing video games where only a tiny fraction of the fictional universe is rendered at each moment. In 2017 I went to a VR conference where they explained how 2017 would be a VR winter: plenty of startups would be developing VR software, but there would be neither investment nor public interest. Moreover, the complexity of rendering 360° high-resolution images would be overwhelming given 2017 algorithms and computational power.

In particular, I went to see a startup developing an experience where humans could go to the movies... using VR. You would be virtually sitting on a chair, watching a screen included in the image from your VR headset. Even in this simple environment, the startup founders had to render only the chairs to the left and right of yours, because it would be too computationally expensive to fully render the whole movie theater.

Efficiency

I claim that me-simulations could be made at least 100 billion times less computationally expensive than full simulations. Here are my reasons to believe so:

1) Even though it would be necessary to generate consciousness to mimic human processes, it would only be necessary for the humans you directly interact with, so maybe ten hours of human consciousness other than yours per day.
2) The physical volume needed for a me-simulation would be at most the size of your room (about 20 square meters times the height of your room). If you are in a room this is trivially true, and if you are outdoors I believe you are less aware of the rest of the physical world, so the "complexity of reality" needed for you to believe the world is real is about the same as if you were in your room. Earth's surface, however, is about 500 million square kilometers, i.e. 2.5 * 10^13 times greater (see the worked arithmetic below). Hence a me-simulation would be at least 100 billion times less computationally intensive, assuming the ancestor-simulation would simulate at least the same height.
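
For reference, the arithmetic behind the 2.5 * 10^13 figure is just a unit conversion:

```latex
% Area ratio behind the 2.5 * 10^13 figure (unit conversion only)
\frac{\text{Earth's surface}}{\text{room floor area}}
  = \frac{5 \times 10^{8}\ \text{km}^2}{20\ \text{m}^2}
  = \frac{5 \times 10^{14}\ \text{m}^2}{20\ \text{m}^2}
  = 2.5 \times 10^{13}
```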

Furthermore, you would only need to run one ancestor-simulation to run an arbitrarily large number of me-simulations: if you knew the environment, and had stored how the conscious humans of the ancestor-simulation behaved, you could easily run a me-simulation where most of the other characters simply replay what they did in the past, while only one person (or a small number of people) is conscious. A bit like in Westworld, where some plot loops make the robots really convincing, but their non-consciousness becomes more apparent whenever they are out of place (e.g. not in the saloon).
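
Here is a minimal sketch of this record-and-replay idea, with entirely hypothetical names (NPCTrace, MeSimulation); it only illustrates how one recorded ancestor run could back many cheap me-simulations:

```python
# Minimal sketch of the record-and-replay idea above (all names are hypothetical).
# One expensive ancestor run is recorded once; each me-simulation then replays the
# stored trajectories for every non-player character and only simulates one mind.

from dataclasses import dataclass, field

@dataclass
class NPCTrace:
    """Recorded behaviour of one character from the ancestor run."""
    name: str
    actions: list  # e.g. [(0, "walks into saloon"), (1, "orders a drink"), ...]

    def replay(self, t):
        # Look up what this character did at time t in the recorded run; no
        # cognition is simulated, so the cost is a lookup, not a mind.
        return next((a for (tick, a) in self.actions if tick == t), "idles")

@dataclass
class MeSimulation:
    player: str                                  # the single fully conscious mind
    traces: dict = field(default_factory=dict)   # name -> NPCTrace from the ancestor run

    def step(self, t, player_action):
        # The player's mind is actually simulated (expensive); everyone else is a replay.
        visible_npcs = {name: trace.replay(t) for name, trace in self.traces.items()}
        return {"player": player_action, "npcs": visible_npcs}

# One recorded ancestor run can back arbitrarily many me-simulations:
traces = {"bartender": NPCTrace("bartender", [(0, "polishes a glass"), (1, "nods")])}
sims = [MeSimulation(player=f"mind_{i}", traces=traces) for i in range(1000)]
print(sims[0].step(0, "walks into saloon"))
```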

Purpose

"Agreed, me-simulations might be much more cost-efficient than an ancestor-simulation, but why would a posthuman civilization want to simulate me-simulations in the first place?" you might say.

Here is a list of scenarios that convince me of the usefulness of such simulations:

Wrapping it up

8 comments

Comments sorted by top scores.

comment by TAG · 2018-05-04T11:30:41.806Z · score: 8 (3 votes) · LW · GW

The reason is that if you give them sporadic consciousness, they would largely lack what constitutes a conscious human's subjective experience: identity and continuity of consciousness.

That's a very big claim. For one thing, humans don't seem to have continuity of consciousness, in that we sleep. Also, it seems plausible that you could fake the subjective continuity of consciousness in a sim.

comment by Michaël Trazzi (mtrazzi) · 2018-05-04T12:33:48.774Z · score: 2 (1 votes) · LW · GW
humans don't seem to have continuity of consciousness, in that we sleep

Yes, humans do sleep. Let's suppose that consciousness "pauses" or "vanishes" during sleep. Is that how you would define a discontinuity in consciousness? An interval of time delta_t without consciousness separating two conscious processes?

In Time and Free Will: An Essay on the Immediate Data of Consciousness, Bergson defines duration as inseparable from consciousness. What would it mean to change consciousness instantaneously with teleportation? Would we need a minimum delta_t for it to make sense (maybe we could infer it from physical constraints given by general relativity?).

Also, it seems plausible that you could fake the subjective continuity of consciousness in a sim.

The question is always how do you fake it. Assuming physicalism, there would be some kind of threshold of cerebral activity which would lead to consciousness. At what point in faking "the subjective continuity of consciousness" do we reach this threshold?

I think my intuition that (minimum cerebral activity + continuity of memories) is what leads to human-like "consciousness" comes from the first season of Westworld, where Maeve, a host, becomes progressively self-aware over the course of the season by trying to connect her memories (same with Dolores).

comment by TAG · 2018-05-04T12:54:38.250Z · score: 8 (3 votes) · LW · GW

Yes, humans do sleep. Let's suppose that consciousness "pauses" or "vanishes" during sleep. Is that how you would define a discontinuity in consciousness? An interval of time delta_t without consciousness separating two conscious processes?

Ok.

In Time and Free Will: An Essay on the Immediate Data of Consciousness, Bergson defines duration as inseparable from consciousness

So? Maybe he is wrong. Even if he is right, it only means that durée has to stop when consciousness stops, and so on.

What would it mean to change consciousness instantaneously with teleportation?

I don't see why that arises. If consciousness is a computational process, and you halt it, when you restart it, it restarts in an identical state, so no change has occurred.

The question is always how do you fake it.

Computationally, that is easy. The halted process has no way of knowing it is not running when it is not running, so there is nothing to be papered over.

Assuming physicalism, there would be some kind of threshold of cerebral activity which would lead to consciousness.

Assuming physicalism is true, and computationalism is false ... you are not in a simulation.

Aside from that, you seem to think that when I am talking about halting a sim, I am emulating some gradual process like falling asleep. I'm not.

I think my intuition that (minimum cerebral activity + continuity of memories) is what leads to human-like “consciousness”

I think you are just conflating consciousness (conscious experience) and sense-of-self. It is quite possible to have the one without the other, e.g. severe amnesiacs are not p-zombies.

comment by Michaël Trazzi (mtrazzi) · 2018-05-04T14:50:35.473Z · score: 2 (1 votes) · LW · GW
Aside from that, you seem to think that when I am talking about halting a sim, I am emulating some gradual process like falling asleep. I'm not.

I was not thinking that you were talking about a gradual process.

I think you are just conflating consciousness (conscious experience) and sense-of-self. It is quite possible to have the one without the other, e.g. severe amnesiacs are not p-zombies.

I agree that I am not being clear enough (with myself and with you) and appear to be conflating two concepts. With your example of amnesiacs and p-zombies, two things come to mind:

1) p-zombies: when talking about ethics (for instance in my Effective Egoist article) I was aiming at qualia, instead of just simulated conscious agents (with conscious experience, as you say). To come back to your first comment, I wanted to say that "identity and continuity of consciousness" contribute to qualia, and make p-zombies less probable.

2) amnesiacs: in my video game, I don't want to play in a world full of amnesiacs. If they are evasive whenever I ask questions about their past, it does not feel real enough. I want them to have some memories. Here is a claim:

(P) "For memories to be consistent, the complexity needed would be the same as the complexity needed to emulate the experience which would produce the memory"

I am really unsure about this claim (one could produce fake memories just good enough for people not to notice anything; we don't have great memories ourselves). However, I think it casts light on what I wanted to express with "The question is always how do you fake it": the memories must be real/complex enough for them not to notice anything (and for the guy in the me-sim too), but also not too complex (otherwise you could just run full simulations).

comment by TAG · 2018-05-04T16:04:09.500Z · score: 3 (1 votes) · LW · GW

“identity and continuity of consciousness” contribute to qualia, and make p-zombies less probable.

Why? To what extent?

amnesiacs: in my video game

I was trying to use evidence from (presumed) real life.

I don’t want to play in a world full of amnesiacs.

I don't know why you would "want" to be in a simulation at all. Most people would find the idea disturbing.

comment by Dacyn · 2018-05-05T13:46:21.864Z · score: 7 (2 votes) · LW · GW

If you play a game with a complete brainwash, then there is no meaningful connection between you and the entity playing the game, so it doesn't make sense to say that it would relieve boredom. If you just mean that the memories of Napoleon would be uploaded to the traveller, then there is no need to redo the simulation every time; it is enough to do it once and then copy the resulting memories.

In any case, intergalactic travel seems like a silly example of where it would be necessary to relieve boredom; in an advanced society presumably everyone has been uploaded and so the program could just be paused during travel. Or if people haven't been uploaded, there is probably some other technology to keep them in cryogenic sleep or something.

comment by Tiago de Vassal (tiago-de-vassal) · 2018-05-03T22:44:23.696Z · score: 2 (2 votes) · LW · GW

Thanks for your post, I appreciate the improvement to your argument!

1- It isn't clear to me why observing other forms of conscious intelligence leads to low interest in running non-me-simulations, and only non-me-simulations. To me, there are other reasons to make simulations than those you highlighted. I might concede that observing other forms of conscious intelligence might lessen the motivation for creating full-scale simulations, but I think it would also reduce the need for me-simulations, for the same reasons.

2- Since there is a non-zero chance that you are in a non-me-simulation or in reality, wouldn't it still compel you to act altruistically, at least to some extent? What probabilities would you assign to those events?

comment by Michaël Trazzi (mtrazzi) · 2018-05-04T06:01:54.043Z · score: 1 (1 votes) · LW · GW

Thank you for reading.

1- In this post I don't really mention "non-me-simulations". I try to compare the probability of a simulation with only one full-time conscious being (a me-simulation) to that of what Bostrom calls ancestor-simulations, i.e. those full-scale simulations where one could replay "the entire mental history of humankind".

For any simulation consisting of N individuals (e.g. N = 7 billion), there could in principle exist simulations where 0, 1, 2, ... or N of those individuals are conscious.

When the number k of conscious individuals satisfies k << N, I call the simulation selective.

I think your comment points to the following apparent conjunction fallacy: I am trying to estimate the probability of the event "simulation of only one conscious individual" instead of the more probable "simulation of a limited number of individuals k << N" (first problem).

The point I was trying to make is the following: 1) ancestor-simulations (i.e. full-scale, computationally intensive simulations meant to understand the ancestors' history) would be motivated by more and more evidence of a Great Filter behind the posthuman civilization; 2) the need for me-simulations (which would be the most probable type of selective simulation because they only need one player, e.g. a guy in his spaceship) does not appear to rely on the existence of a Great Filter behind the posthuman civilization. They could be, say, cost-efficient single-consciousness games played for fun, or something prisoners are condemned to.

I guess the second problem with my argument for the probability of me-simulations is that I don't give any probability of being in a me-simulation, whereas in the original simulation argument, the strength of Bostrom's argument is that whenever an ancestor-simulation is generated, 100 billion conscious lives are created, which greatly increases the probability of being in such a simulation. Here, I could only estimate the cost-effectiveness of me-simulations in comparison with ancestor-simulations.

2- I think you are assuming I believe in Utilitarianism. Yes, I agree that if I am a Utilitarian I may want to act altruistically, even with some very small non-zero probability of being in a non-me-simulation or in reality.

I already answered this question yesterday in the Effective Egoist post (cf. my comment to Ikaxas), and I am realizing that my answer was wrong because I didn't assume that other people could be full-time conscious.

My argument (supposing, for the sake of argument, that I am a Utilitarian) was essentially that if I had $10 in my pocket and wanted to buy myself an ice cream (a utility of 10 for me, let's say), I would need to provide a utility of 10 * 1000 to someone who is full-time conscious before considering giving them the ice cream (their utility would rise to 10,000, for instance). In the absence of some utility monster, I believe this case to be extremely unlikely, and I would end up eating ice cream all by myself.
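
If the factor of 1000 is read as a one-in-a-thousand credence that the other person is full-time conscious (my interpretation here, not something stated above), the comparison is just an expected-utility inequality:

```latex
% Assumed reading: p = 10^{-3} is the credence that the other person is conscious
p \cdot U_{\text{other}} \;>\; U_{\text{me}}
\quad\Longrightarrow\quad
U_{\text{other}} \;>\; \frac{10}{10^{-3}} \;=\; 10^{4}
```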

[copied from yesterday's answer to Ikaxas] In practice, I don't deeply share the Utilitarian view. To describe it briefly: I believe I am a Solipsist who values the perception of complexity. So I value my own survival (because I may not have any proof of the complexity of the Universe if I cease to exist), and also the survival of Humanity (because I believe humans are amazingly complex creatures), but I don't value the positive subjective perceptions of other conscious human beings. I value my own positive subjective perceptions because they serve my utility function of maximizing my perception of complexity.

Anyway, I don't want to enter the debate over the highly controversial Effective Egoism inside what I wanted to be a more scientific probability-estimation post about a particular kind of simulation.

Thank you for your comment. I hope I answered you well. Feel free to ask for further clarification or to point out other fallacies in my reasoning.