Post by [deleted]

This is a link post.

Comments sorted by top scores.

comment by TAG · 2018-05-04T11:30:41.806Z · LW(p) · GW(p)

The reason is that if you give them sporadic consciousness, they would greatly lack what constitutes a conscious human's subjective experience: identity and continuity of consciousness.

That's a very big claim. For one thing, humans don't seem to have continuity of consciousness, in that we sleep. Also, it seems plausible that you could fake the subjective continuity of consciousness in a sim.

Replies from: mtrazzi
comment by Michaël Trazzi (mtrazzi) · 2018-05-04T12:33:48.774Z · LW(p) · GW(p)
humans don't seem to have continuity of consciousness, in that we sleep

Yes, humans do sleep. Let's suppose that consciousness "pauses" or "vanishes" during sleep. Is that how you would define a discontinuity in consciousness: an interval of time delta_t without consciousness separating two conscious processes?

In Time and Free Will: An Essay on the Immediate Data of Consciousness, Bergson defines duration as inseparable from consciousness. What would it mean to change consciousness instantaneously through teleportation? Would we need a minimum delta_t for it to make sense (could we perhaps infer it from physical constraints given by general relativity)?

Also, it seems plausible that you could fake the subjective continuity of consciousness in a sim.

The question is always how you fake it. Assuming physicalism, there would be some kind of threshold of cerebral activity which would lead to consciousness. At what point in faking "the subjective continuity of consciousness" do we reach this threshold?

I think my intuition that (minimum cerebral activity + continuity of memories) is what leads to human-like "consciousness" comes from the first season of Westworld, where Maeve, a host, becomes progressively self-aware by trying to connect her memories throughout the season (same with Dolores).

Replies from: TAG
comment by TAG · 2018-05-04T12:54:38.250Z · LW(p) · GW(p)

Yes, humans do sleep. Let's suppose that consciousness "pauses" or "vanishes" during sleep. Is that how you would define a discontinuity in consciousness: an interval of time delta_t without consciousness separating two conscious processes?

Ok.

In Time and Free Will: An Essay on the Immediate Data of Consciousness, Bergson defines duration as inseparable from consciousness

So? Maybe he is wrong. Even if he is right, it only means that durée has to stop when consciousness stops, and so on.

What would it mean to change consciousness instantaneously with teleportation?

I don't see why that arises. If consciousness is a computational process, and you halt it, then when you restart it, it restarts in an identical state, so no change has occurred.

The question is always how you fake it.

Computationally, that is easy. The halted process has no way of knowing it is not running when it is not running, so there is nothing to be papered over.

Assuming physicalism, there would be some kind of threshold of cerebral activity which would lead to consciousness.

Assuming physicalism is true, and computationalism is false ... you are not in a simulation.

Aside from that, you seem to think that when I am talking about halting a sim, I am emulating some gradual process like falling asleep. I'm not.

I think my intuition that (minimum cerebral activity + continuity of memories) is what leads to human-like "consciousness"

I think you are just conflating consciousness (conscious experience) and sense-of-self. It is quite possible to have the one without the other, e.g. severe amnesiacs are not p-zombies.

Replies from: mtrazzi
comment by Michaël Trazzi (mtrazzi) · 2018-05-04T14:50:35.473Z · LW(p) · GW(p)
Aside from that, you seem to think that when I am talking about halting a sim, I am emulating some gradual process like falling asleep. I'm not.

I was not thinking that you were talking about a gradual process.

I think you are just conflating consciousness (conscious experience) and sense-of-self. It is quite possible to have the one without the other, e.g. severe amnesiacs are not p-zombies.

I agree that I am not being clear enough (with myself and with you) and appear to be conflating two concepts. With your example of amnesiacs and p-zombies, two things come to mind:

1) p-zombies: when talking about ethics (for instance in my Effective Egoist article), I was aiming at qualia, not just at simulated conscious agents (with conscious experience, as you say). To come back to your first comment, I wanted to say that "identity and continuity of consciousness" contribute to qualia, and make p-zombies less probable.

2) amnesiacs: in my video game, I don't want to play in a world full of amnesiacs. If they are evasive whenever I ask questions about their past, it does not feel real enough. I want them to have some memories. Here is a claim:

(P) "For memories to be consistent, the complexity needed would be the same as the complexity needed to emulate the experience which would produce the memory"

I am really unsure about this claim (one could produce fake memories just good enough for people not to notice anything; we don't have great memories ourselves). However, I think it casts light on what I wanted to express with "The question is always how you fake it": the memories must be real/complex enough for the simulated people not to notice anything (and for the guy in the me-simulation too), but also not too complex (otherwise you could just run full simulations).

Replies from: TAG
comment by TAG · 2018-05-04T16:04:09.500Z · LW(p) · GW(p)

“identity and continuity of consciousness” contribute to qualia, and make p-zombies less probable.

Why? To what extent?

amnesiacs: in my video game

I was trying to use evidence from (presumed) real life.

I don't want to play in a world full of amnesiacs.

I don't know why you would "want" to be in a simulation at all. Most people would find the idea disturbing.

comment by Dacyn · 2018-05-05T13:46:21.864Z · LW(p) · GW(p)

If you play a game with a complete brainwash, then there is no meaningful connection between you and the entity playing the game, so it doesn't make sense to say that it would relieve boredom. If you just mean that the memories of Napoleon would be uploaded to the traveller, then there is no need to redo the simulation every time; it is enough to do it once and then copy the resulting memories.

In any case, intergalactic travel seems like a silly example of where it would be necessary to relieve boredom; in an advanced society presumably everyone has been uploaded and so the program could just be paused during travel. Or if people haven't been uploaded, there is probably some other technology to keep them in cryogenic sleep or something.

comment by Tiago de Vassal (tiago-de-vassal) · 2018-05-03T22:44:23.696Z · LW(p) · GW(p)

Thanks for your post, I appreciate the improvement to your argument!

1- It isn't clear to me why observing other forms of conscious intelligence leads to low interest in running non-me-simulations and only non-me-simulations. To me, there are other reasons to make simulations than those you highlighted. I might concede that observing other forms of conscious intelligence might lessen the motivation for creating full-scale simulations, but I think it would also reduce the need for me-simulations, for the same reasons.

2- Since there is a non-zero chance of you being in a non-me-simulation or in reality, wouldn't it still compel you to act altruistically, at least to some extent? What probabilities would you assign to those events?

Replies from: mtrazzi
comment by Michaël Trazzi (mtrazzi) · 2018-05-04T06:01:54.043Z · LW(p) · GW(p)

Thank you for reading.

1- In this post I don't really mention "non-me-simulations". I try to compare the probability of a simulation with only one full-time conscious being (a me-simulation) to what Bostrom calls ancestor-simulations, i.e. those full-scale simulations where one could replay "the entire mental history of humankind".

For any simulation consisting of N individuals (e.g. N = 7 billion), there could in principle exist simulations where 0, 1, 2, ... or N of those individuals are conscious.

When the number k of conscious individuals satisfies k << N, I call the simulation selective.
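In this notation (just restating the terminology above in symbols, as I use the terms here):

$$\text{me-simulation}: k = 1, \qquad \text{selective simulation}: 1 \le k \ll N, \qquad \text{ancestor-simulation}: k = N.$$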

I think your comment points to the following apparent conjunction fallacy: I am trying to estimate the probability of the event "simulation of only one conscious individual" instead of the event "simulation of a limited number of individuals k << N", which has greater probability (first problem).

The point I was trying to make is the following: 1) ancestor-simulations (i.e. full-scale and computationally intensive simulations to understand our ancestors' history) would be motivated by more and more evidence of a Great Filter behind the posthuman civilization; 2) the need for me-simulations (which would be the most probable type of selective simulation because they only need one player, e.g. a guy in his spaceship) does not appear to rely on the existence of a Great Filter behind the posthuman civilization. They could be cost-efficient single-consciousness simulations played for fun, or ones that prisoners are condemned to.

I guess the second problem with my argument for the probability of me-simulations is that I don't give any probability of being in a me-simulation, whereas in the original simulation argument, the strength of Bostrom's reasoning is that whenever an ancestor-simulation is generated, 100 billion conscious lives are created, which greatly increases the probability of being in such a simulation. Here, I could only estimate the cost-effectiveness of me-simulations in comparison with ancestor-simulations.
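To make that concrete, here is a rough sketch of the observer-counting step (taking roughly $10^{11}$ as the number of humans who have ever lived; the exact figure is an assumption and doesn't matter much). If $n$ ancestor-simulations are run, the fraction of conscious observers who are simulated is

$$f_{\text{ancestor}} = \frac{n \cdot 10^{11}}{n \cdot 10^{11} + 10^{11}} = \frac{n}{n+1},$$

which quickly approaches 1. Each me-simulation, by contrast, adds only a single conscious observer, so $n$ me-simulations would give roughly $f_{\text{me}} \approx \frac{n}{n + 10^{11}}$. That is why I can only argue from cost-effectiveness here, not from observer counting.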

2- I think you are assuming I believe in Utilitarianism. Yes, I agree that if I am a Utilitarian I may want to act altruistically, even with some very small non-zero probability of being in a non-me-simulation or in reality.

I already answered this question yesterday in the Effective Egoist post (cf. my comment to Ikaxas), and I now realize that my answer was wrong because I didn't assume that other people could be full-time conscious.

My argument (supposing I am a Utilitarian, for the sake of argument) was essentially that if I had $10 in my pocket and wanted to buy myself an ice cream (utility of 10 for me, let's say), I would need to provide a utility of 10*1000 to someone who is full-time conscious to consider giving them the ice cream (their utility would rise to 10,000, for instance). In the absence of some utility monster, I believe this case to be extremely unlikely, so I would end up eating ice creams all by myself.
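To spell out the arithmetic (reading the factor of 1000 as the discount I apply to other people's utility relative to my own; that reading, and the exact factor, are just illustrative assumptions):

$$\frac{1}{1000} \times 10{,}000 = 10 = U_{\text{ice cream, me}},$$

so only above that 10,000 threshold would handing over the ice cream do as well, by my lights, as eating it myself.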

[copy-pasted from yesterday's answer to Ikaxas] In practice, I don't deeply share the Utilitarian view. To describe it briefly, I believe I am a Solipsist who values the perception of complexity. So I value my own survival, because I would not have any proof of the complexity of the Universe if I ceased to exist, and also the survival of Humanity (because I believe humans are amazingly complex creatures), but I don't value the positive subjective perceptions of other conscious human beings. I value my own positive subjective perceptions because they serve my utility function of maximizing my perception of complexity.

Anyway, I don't want to enter the debate about highly controversial Effective Egoism within what I intended to be a more scientific probability-estimation post about a particular kind of simulation.

Thank you for your comment. I hope I answered you well. Feel free to ask for any other clarification or to point out other fallacies in my reasoning.