On the Impossibility of Interaction with Simulated Agents
post by Walkabout · 2022-01-13T17:50:25.987Z · LW · GW · 8 comments
Is it possible to interact with simulated agents?
It is certainly possible from the perspective of the simulator. It could make changes to the state of the simulated environment, and run the simulation rules forward to see what would happen.
It can't be done
But consider the perspective of the simulated. Each simulated agent's experience is the sum of all possible ways that that agent's subjective experience could happen, including all possible ways it could exist in fundamental base reality, and all possible ways it could be simulated.
The total measure of the cases in which any particular simulator chooses to intervene in the simulation in any particular way is almost certainly so small as to be discountable for any agent with legitimate concerns of its own. It is far more likely, from the agent's perspective, that it will die of a sudden failure of a critical body component, in a way completely consistent with the physical laws it has observed up to that point, than that God will appear to it.
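To make the shape of this claim concrete, here is a toy sketch in Python. Every number in it (the relative measures, the hazard rate) is an invented placeholder, not anything derived from the argument; the point is only that a minuscule measure-fraction is swamped by mundane hazards.

```python
# Toy sketch of the measure argument. Every number here is an invented
# placeholder; only the shape of the comparison matters.

# Assumed un-normalized measure over the ways one fixed subjective
# experience could be instantiated.
ways = {
    "base reality": 1.0,
    "simulation, never intervened in": 0.5,
    "simulation, intervened in some other way": 0.01,
    "simulation, this particular intervention": 1e-9,
}

total = sum(ways.values())
p_intervention = ways["simulation, this particular intervention"] / total
print(f"P(this particular intervention) ~ {p_intervention:.1e}")

# Compare against a mundane, physics-consistent hazard: an assumed
# order-of-magnitude annual rate of sudden fatal component failure.
p_sudden_failure = 1e-4
print(f"mundane hazard / divine appearance ~ {p_sudden_failure / p_intervention:.1e}")
```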
But what if I do it anyway?
Suppose, however, that it is God's intention to appear, and He does so, by altering the simulation state appropriately. Consider the subsequent situation from the perspective of the simulated agent.
The agent perceives something in the world that it thinks is a communication from outside the world. It is, once again, a member of the set of all possible ways the agent could be having the same subjective experience. What measure-fraction of the agents who think this are actually in simulations receiving that intervention? Given that the simulated agents are, like ourselves, sufficiently shoddily constructed, there are a large number of ways an agent could become convinced that it had observed divine intervention that accord completely with the physical laws that have governed the simulation up to that point.
Even supposing the agent is particularly well-built, smart, and rational, and very little of its subjective experience is out of correspondence with its environment, the ways in which verifiably real, highly unlikely phenomena could be genuine signs from God the Simulator have to compete against the ways in which those phenomena could be merely highly unlikely mundane occurrences. Is it more likely that the agent is living in a simulation where someone outside it has elected to perform a miracle, or is it more likely that something within the agent's universe (perhaps another conspecific agent, a space alien, a quantum fluctuation, or a previously undiscovered physical effect) has caused the phenomenon? Even if one regards simulated universes as quite common, only the ways to have this subjective experience in simulated universes that have received this particular intervention count here; ways to have the same subjective experience in simulations without the intervention contribute to the other side of the question.
(Of course, all of these have caused the phenomenon, to some degree, because we are indexing on the composite of all agents undergoing this particular subjective experience.)
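One way to picture the competition just described is as an ordinary Bayesian update. The sketch below uses invented priors and likelihoods purely for illustration; the real measure over "ways" is exactly what the agent is missing.

```python
# Minimal Bayesian sketch of the competition described above. The
# priors and likelihoods are invented for illustration only.

hypotheses = {
    # name: (assumed prior measure, assumed P(apparent miracle | hypothesis))
    "shoddy construction / hallucination": (0.900, 1e-4),
    "rare mundane in-universe cause":      (0.099, 1e-6),
    "genuine simulator intervention":      (1e-9, 1.0),
}

unnormalized = {h: prior * like for h, (prior, like) in hypotheses.items()}
z = sum(unnormalized.values())
for h, w in unnormalized.items():
    print(f"{h:38s} posterior ~ {w / z:.1e}")
# Even granting the intervention hypothesis a likelihood of 1, its tiny
# prior measure leaves its posterior well below the mundane explanations.
```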
But what if I do it a lot?
Some thinkers have attempted to get around the problem of a particular simulation being unlikely from a simulated agent's perspective by multiplying the count of the simulated agents, such as in Simulation Capture [LW · GW] or The Dr. Evil Problem [LW · GW]. Variously, either you or Dr. Evil is simulated repeatedly and threatened with the torture of those simulations, to try to unify your subjective experience with one in which you ought to believe you will experience that torture with nontrivial probability, depending on how you act given that experience. This does not in fact work.
Thus far I've dealt informally with the mathematics of the relative measures of distinct ways to have a fixed subjective experience, as a way to model what additional elements might come into that subjective experience when it extends itself into its subjective future. In Bayesian terms, there is a missing prior over ways to have a subjective experience. Independent of the count of distinct ways matching a certain criterion, some ways to have a subjective experience might have more "realness" than others.
But any method of attributing a realness measure to ways to have subjective experiences would have to work from the perspective of the agent having the experiences in order to matter to it. Anything based on simulation count might be well-formed from the perspective of the simulator, but is uninterpretable from the perspective of the simulated, who is, remember, unified over all ways in which it could be having its given subjective experience.
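As a sketch of how much rides on this missing prior, the toy function below compares two assumed weighting rules: one where realness scales with simulation count, and one where it attaches to the way itself regardless of copies. Both rules, and all the numbers, are hypothetical.

```python
# Sketch of how much rides on the missing prior. Both weighting rules
# below, and all the numbers, are assumptions for illustration.

def p_threatened(n_copies: int, realness_per_way: float,
                 count_weighted: bool) -> float:
    """Subjective probability of being one of the threatened copies."""
    base = 1.0  # assumed realness of the un-simulated way
    sims = realness_per_way * (n_copies if count_weighted else 1)
    return sims / (base + sims)

for n in (1, 1_000_000):
    print(f"copies={n:>9,}  "
          f"count-weighted: {p_threatened(n, 1e-6, True):.1e}  "
          f"per-way: {p_threatened(n, 1e-6, False):.1e}")
# Under count-weighting a million copies dominate the calculation;
# under per-way weighting they change nothing. Nothing in the agent's
# subjective experience distinguishes the two rules.
```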
So what can I do?
A propensity for simulators to create many simulations might matter to a potentially-simulated agent, though it leaves open the problem of how realness is to be distributed over the simulators. An operationalized model of the Mathematical Universe theory [LW · GW] might allow a subjective experience to acquire reference to how realness is distributed among the types of ways it might be being had. Acausal trade [? · GW] with the ensemble of potential simulators, weighted according to their contributions of realness to a subjective experience, might be possible.
Overall, though, simulated agents are essentially subjectively immune to interaction with their simulators.
8 comments
comment by JBlack · 2022-01-14T00:10:04.904Z · LW(p) · GW(p)
I can't communicate with you, for exactly the same reason. This is a serious objection, though phrased frivolously. Your statement
Each simulated agent's experience is the sum of all possible ways that that agent's subjective experience could happen, including all possible ways it could exist in fundamental base reality, and all possible ways it could be simulated
(emphasis mine) is given no justification whatsoever. You appear to be conflating the ensemble of all possible agents with the individual agents having different experiences in that ensemble.
Nothing you have said applies only to simulated agents, and so it seems that you are proving too much.
That is, you seem to be saying that "I" can't communicate with "you" because some versions of "you" could be atmospheric life forms in a gas giant in a different universe where "I" am not even present.
comment by TAG · 2022-01-17T23:11:16.458Z · LW(p) · GW(p)
This is a serious objection, though phrased frivolously
I particularly enjoyed that phrase.
comment by Charlie Steiner · 2022-01-18T07:21:40.991Z · LW(p) · GW(p)
I can't communicate with you, for exactly the same reason.
Always good to start a comment by telling the author you think they won't listen :P (sarcasm!)
comment by Charlie Steiner · 2022-01-18T07:26:04.089Z · LW(p) · GW(p)
If you, the simulator, think you're in a simple universe, and you decide to intervene in the simulation... then you should think the simulated people would be perfectly right to infer that they are being intervened on, because it happens in such a simple universe!
comment by Pattern · 2022-01-19T18:17:36.447Z · LW(p) · GW(p)
Suppose a program p is playing chess (or go or checkers*) against someone or something, and models 'it' as a program, and tries to simulate what they will do in response to some moves p is considering.
*Solved under at least one rule set.
comment by Charlie Steiner · 2022-01-19T19:11:45.801Z · LW(p) · GW(p)
Then it would think it was in a universe much simpler than ours, and to convince it otherwise we would have to give it a number of bits ~ the difference in complexities.
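Read as a back-of-the-envelope calculation, the claim looks like this; the complexity gap used is entirely made up, and the counting is only order-of-magnitude:

```python
# Back-of-the-envelope reading of the comment above, with a made-up
# complexity gap. A hypothesis that is delta_k bits simpler gets a
# prior larger by roughly 2**delta_k; each bit of evidence that only
# the richer universe predicts halves that advantage.

delta_k = 50  # assumed gap, in bits, between "chess universe" and ours

prior_odds = 2.0 ** delta_k  # odds favoring the simple hypothesis
print(f"prior odds for the simple universe: ~{prior_odds:.1e}")

for bits in (10, 30, 50):
    print(f"after {bits:2d} bits of evidence: odds ~ {prior_odds / 2.0 ** bits:.1e}")
# The odds reach ~1 only when the evidence supplied matches the
# complexity difference -- "a number of bits ~ the difference".
```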