comment by Shmi (shminux) · 2022-02-24T20:39:51.499Z · LW(p) · GW(p)
Have you read The Ghost in the Quantum Turing Machine? It deals with these questions and many others.
↑ comment by joshc (joshua-clymer) · 2022-02-26T01:26:57.410Z · LW(p) · GW(p)
This looks great. I'll check it out, thanks!
comment by Viliam · 2022-02-24T19:32:17.216Z · LW(p) · GW(p)
> Let's say that the engineers intended it to simulate New York City on a Sunday afternoon. They might see an old man walking his dog on their monitor and imagine that they brought him to life … but maybe they are 'really' simulating a fluctuating stick.
They are probably simulating both.
> I think it is clear that the stack isn't producing conscious experiences as it doesn't seem like time could be felt by the people represented in the pages.
That just means that our time isn't their time. (Related [LW · GW].)
↑ comment by joshc (joshua-clymer) · 2022-02-26T01:34:58.684Z · LW(p) · GW(p)
- If they are simulating both, then I think they are simulating a lot of things. Could they be simulating Miami on a Monday morning? If the simulation states are represented with N bits and states of Miami can be expressed with <= N bits, then I think they could be. You just need to define a one-to-one map from the sequence of bit arrays to a sensible sequence of states of Miami, and such a map is guaranteed to exist (every bit array is unique and every state of Miami is unique; see the sketch after this list). Extending this argument implies we are almost certainly in an accidental simulation.
- Then what determines what 'their time' is? Does the order of the pages matter? Why would their time need to correspond to our space?
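To make the first point concrete, here is a minimal Python sketch. The state labels and the particular map are made up for illustration; the only fact the sketch relies on is that two finite sets of equal size always admit a bijection:

```python
import itertools

N = 3  # toy state size in bits

# Every possible N-bit array, plus an arbitrarily labeled state space of
# the same cardinality. Any pairing of the two lists is a valid bijection.
bit_arrays = list(itertools.product([0, 1], repeat=N))
miami_states = [f"miami_state_{i}" for i in range(2 ** N)]

# One particular one-to-one map; it is guaranteed to exist because both
# sides contain the same number of distinct elements.
decode = dict(zip(bit_arrays, miami_states))

# A "run" of the New York simulation, as a sequence of bit arrays...
nyc_trajectory = [(0, 1, 1), (1, 0, 0), (1, 1, 1)]

# ...is, under the chosen map, also a sequence of states of "Miami".
miami_trajectory = [decode[s] for s in nyc_trajectory]
print(miami_trajectory)  # ['miami_state_3', 'miami_state_4', 'miami_state_7']
```

Nothing about this particular map is privileged, which is exactly why the argument seems to generalize so alarmingly.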
↑ comment by Viliam · 2022-02-26T18:44:45.155Z · LW(p) · GW(p)
Observe the flow of causality/information.
I suggest we stop treating this as a philosophical problem and approach it as an engineering problem. (Philosophers are rewarded for generating smart-sounding sequences of words. Engineers are rewarded for designing technical solutions that work.) Suppose you have 24th-century technology at your disposal: how exactly would you simulate "an old man walking his dog"? If it helps, imagine that this is your PhD project.
One possibility would be to get an atomic scanner and scan some old man with his dog. If that is not possible, e.g. because the GDPR still applies in the 24th century, just download some generic model of human and canine physiology and run it -- that is, simulate how the individual atoms move according to the laws of physics. This is how you get a simulation of "an old man walking his dog".
Is it simultaneously a simulation of "a stick that magically keeps changing its size so that the size represents a binary encoding of an old man walking his dog"? Yes. But the important difference -- and the reason why I called the stick "magical" -- is that the behavior of the stick is completely unlike the usual laws of physics.
Like, if you want to compute the "old man walking his dog" at the next moment of time, you need to look at the positions and momenta of all the atoms, and then calculate their positions and momenta a fraction of a second later. Ignoring the computing power necessary to do this, the algorithm is kinda simple. But if you want to compute the "magical stick" at the next moment of time... the only way to do this is to decode the information stored in the current length of the stick, update the information, and then encode it again. In other words, the simulation of the old man and his dog is in some sense direct, while the simulation of the stick is effectively a simulation of the old man and his dog... afterwards transformed into the length of the stick. You are simulating a stick that "contains" the old man and his dog. Not a normal stick. (Similarly, you could simulate a magical "Monday morning Miami" that "contains" the old man and his dog, e.g. encoded in the current positions of ants in the city. But you couldn't simulate a normal Monday-morning Miami like that. The ants would not move in the same pattern as normal ants do.)
tl;dr -- "how exactly do you calculate the next moment of your simulation?" is the key question here
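Here is a minimal sketch of that contrast in Python. `physics_step` is a toy stand-in for the real dynamics, and `pickle` is just one arbitrary injective encoding; both are illustrative assumptions, not a claim about how a 24th-century simulator would actually work:

```python
import pickle

def physics_step(atoms):
    # Direct simulation: the next state is a local function of the
    # current (position, velocity) pairs. Toy stand-in for real physics.
    return [(x + v, v) for (x, v) in atoms]

def encode(atoms):
    # The "length of the stick": the whole state packed into one number.
    return int.from_bytes(pickle.dumps(atoms), "big")

def decode(length):
    # Recover the state from the stick's length.
    return pickle.loads(length.to_bytes((length.bit_length() + 7) // 8, "big"))

def stick_step(length):
    # The only way to advance the magical stick: decode the old man and
    # his dog, run THEIR dynamics, and re-encode the result as a length.
    return encode(physics_step(decode(length)))

atoms = [(0.0, 1.0), (5.0, -0.5)]  # toy "old man and dog"
length = encode(atoms)
assert decode(stick_step(length)) == physics_step(atoms)
```

Both functions advance the same underlying system, but `stick_step` cannot move forward without routing through the man and the dog -- which is the sense in which the stick "contains" them.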
For your second question, observe in which direction of your simulated universe you can run computations. Imagine that, as a god outside that universe, you are allowed to make exactly one intervention: to insert into the universe, at some point, a computer that will (within the universe) compute e.g. the prime numbers. Ok, so... here is the place where you inserted the computer... and where exactly do you expect the results (the computed prime numbers) to appear? That is the in-universe direction of time.
If you inserted a computer into our universe at some place on Earth, the logical place to look for the results would be the same place on Earth, some moment later in the future. If you insert a computer into a realistic comic book, the place to look for the results is a few pages later to the right (or to the left if it is a Japanese manga).
↑ comment by TAG · 2022-02-26T19:31:23.636Z · LW(p) · GW(p)
> I suggest we stop treating this as a philosophical problem and approach it as an engineering problem. (Philosophers are rewarded for generating smart-sounding sequences of words. Engineers are rewarded for designing technical solutions that work.)
How do you tell what works? Did someone invent a consciousness detector?
↑ comment by Viliam · 2022-02-26T21:14:21.635Z · LW(p) · GW(p)
I suggest focusing on the technical difficulties of designing a stick whose length happens to encode the mental state of an old man walking his dog, without somehow simulating the old man at the same time.
Imagining such a stick is kinda easy, because imagination is not constrained by being logically consistent. I can imagine that the stick just happens to have the right length all the time, without asking myself by which mechanism such a thing might possibly happen. Even if we assume amazing sci-fi mechanisms of the 24th century, thinking about technology still makes us focus on the "how".
(A similar approach can be applied to other philosophical questions, such as: how would you design a robot that has free will? How would you detect which robots have free will, and which ones don't?)
comment by TAG · 2022-02-24T23:21:10.148Z · LW(p) · GW(p)
> Note that future states of the simulation are stored in memory instead of being computed on demand, so this cannot be a necessary condition for producing consciousness
How are you defining "state"? If a future state is a snapshot of the outer world, like a movie frame that hasn't been projected yet, and it is sent to the brain by something like normal sensory channels, then all the "producing consciousness" is happening in the brain, and the pre-arranged nature of the snapshots being fed in is irrelevant.
↑ comment by joshc (joshua-clymer) · 2022-02-26T01:47:38.248Z · LW(p) · GW(p)
I am defining it as you said. They are like movie frames that haven't been projected yet. I agree that the pre-arranged nature of the snapshots is irrelevant -- that was the point of the example (sorry that this wasn't clear).
The purpose of the example was to falsify the following hypothesis:
"In order for a simulation to produce conscious experiences, it must compute the next state based on the previous state. It can't just 'play the simulation from memory'"
Maybe what you are getting at is that this hypothesis doesn't do justice to the intuitions that inspired it. Something complex is happening inside the brain that is analogous to 'computation on-demand.' This differentiates the computer-brain system from the stack of papers that are being moved around.
This seems legit... I would just like to have a more precise understanding of what this 'computation on-demand' property is.
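As a first stab at making the property precise, here is a toy Python sketch (the update rule is an arbitrary placeholder, not a model of a brain). Both procedures emit the identical sequence of states, so whatever 'computation on-demand' adds has to live in the causal structure, not in the outputs:

```python
def simulate(state, steps, rule):
    # Computation on demand: each state is produced from its predecessor
    # at the moment it is needed.
    trajectory = [state]
    for _ in range(steps):
        state = rule(state)
        trajectory.append(state)
    return trajectory

def replay(stored):
    # Playback from memory: states are just read out in order; nothing
    # at playback time depends on the previous state.
    return list(stored)

rule = lambda s: (1103515245 * s + 12345) % 2**31  # toy update rule

live = simulate(42, 10, rule)
canned = replay(live)
assert live == canned  # extensionally identical trajectories
```

The question then becomes why the causal structure, and not the state sequence itself, would be the thing that matters.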
comment by feigned-trait · 2022-03-22T16:36:48.977Z · LW(p) · GW(p)
It is first worth noting that I think these questions directly assume a functionalist philosophy of mind; by loosening this assumption, I believe you do not hit as many of these issues.
Your first question is discussed in the Qualia Research Institute's Against Functionalism (specifically the section "Objection 6: Mapping to reality"), though perhaps a better place to explore it is the discussion of absent mental states in the philosophy of mind. I think there is a mapping from allowing non-bijective internal experiences in functionalism to allowing non-bijective mappings from computational substrate to simulations. Finally, on a grander scale related to your first question, Greg Egan's Permutation City explores simulations running eternally on anything; I would suggest it as an enjoyable way of exploring this topic.
The question of where the moral value lies in flipbooks is discussed in Tomasik's Eternalism and Its Ethical Implications (specifically the section "What is the ontological primitive?"). Viliam has also noted Yudkowsky's Timeless Physics [LW · GW].
comment by Signer · 2022-02-24T21:32:45.692Z · LW(p) · GW(p)
-
Experiences are real, so different third-person descriptions of experiences are as valid as different third-person descriptions of reality. Decoding fundamental reality as simulations of agents is constrained only by specific preferences for the binding between the language of the simulation and fundamental reality. So it depends: if you only care about fundamental reality, then only the real, non-simulated universe experiences things, because even the division into agents is arbitrary. If you care about simulations potentially recognized by someone simulating them, then you can probably construct some argument for an upper bound on the number of simulated agents using the dimensionality of physics. But ultimately it all depends on preferences, so in a sense all possible interpretations of reality as agents are valid.
-
There is no fundamental difference. The stack is producing conscious experiences, and time could be felt by the people represented in the pages. It could even be felt by people represented as something dependent on the stack's contents, if there are appropriate differential laws describing those contents.