What's Wrong With the Simulation Argument?
post by AynonymousPrsn123 · 2025-01-18T02:32:44.655Z · LW · GW · 2 comments
This is a question post.
In his essay Epistemic Learned Helplessness, LessWrong contributor Scott Alexander wrote:
Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom’s simulation argument, the anthropic doomsday argument, Pascal’s Mugging – I’ve never heard anyone give a coherent argument against any of these, but I’ve also never met anyone who fully accepts them and lives life according to their implications.
I can't help but agree with Scott Alexander about the simulation argument. In my book, no one has ever refuted it. However, the argument carries a dramatic and, in my eyes, frightening implication for our existential situation.
Joe Carlsmith's essay, Simulation Arguments [LW · GW], clarified some nuances, but ultimately the argument's conclusion remains the same.
When I looked on Reddit for an answer, the attempted counterarguments were weak and disappointing.
It's just that the claims below feel so obvious to me:
- It is physically possible to simulate a conscious mind.
- The universe is very big, and there are many, many other aliens.
- Some aliens will run various simulations.
- The number of simulated minds that are "subjectively indistinguishable" from our own experience far outnumbers authentic evolved humans. (By "subjectively indistinguishable," I mean the simulated minds can't tell they're in a simulation; a rough sketch of the counting behind this claim is below.)
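To make that last claim slightly more concrete, here is a minimal sketch of the counting step, loosely following the formulation in Bostrom's 2003 paper "Are You Living in a Computer Simulation?". The symbols f_p and N̄ below are shorthand I'm introducing for this sketch, not something defined in the post.

```latex
% f_p    : fraction of civilizations that ever run "ancestor simulations"
% \bar{N}: average number of such simulations per simulating civilization,
%          each assumed to contain roughly as many observers as the real history
% f_sim  : expected fraction of observers with experiences like ours
%          who are simulated
\[
  f_{\mathrm{sim}} \;=\; \frac{f_p \,\bar{N}}{f_p \,\bar{N} + 1}
\]
% Example: if f_p \bar{N} = 10^6, then f_sim \approx 0.999999.
```

Unless the product f_p · N̄ is very small, nearly every observer in this count is simulated, which is what the last claim in the list is asserting.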
When someone challenges any of those claims, I'm immediately skeptical. I hope you can appreciate why those claims feel evident.
Thank you for reading all this. Now, I'll ask for your help.
Can anyone here provide a strong counter to Bostrom's simulation argument? If possible, I'd like to hear specifically from those who've engaged deeply and thoughtfully with this argument already.
Thank you again.
Comments
comment by quila · 2025-01-18T03:21:19.493Z · LW(p) · GW(p)
This isn't an argument against the idea that we have many instantiations[1] in simulations, which I believe we do. My view is that, still, the most impact to be had is (modulo this [LW(p) · GW(p)]) in the worlds where I'm not in a simulation (where I can improve a very long future by reducing s-risks and x-risks), so those are the contexts my decisions should be aimed at affecting from within.
IIUC, this might be a common belief, but I'm not sure; I know at least a few other x-risk-focused people who believe it.
It's also more relevant for the question of "what choice helps the most beings"; if you feel existential dread over having many simulated instantiations, this may not help with it.
[1] If there are many copies of one, the question is not "which one am I really?"; you basically become an abstract function choosing how to act for all of them at once [? · GW].
↑ comment by AynonymousPrsn123 · 2025-01-18T04:39:38.696Z · LW(p) · GW(p)
I have to say, quila, I'm pleasantly surprised that your response above is both plausible and logically coherent—qualities I couldn't find in any of the Reddit responses. Thank you.
However, I have concerns and questions for you.
Most importantly, I worry that if we're currently in a simulation, physics and even logic could be entirely different from what they appear to be. If all our senses are illusory, why should our false map align with the territory outside the simulation? A story like your "Mutual Anthropic Capture" offers hope: a logically sound hypothesis in which our understanding of physics is true. But why should it be? Believing that a simulation exactly matches outside reality sounds to me like the "privileging the hypothesis" fallacy.
By the way, I'm also somewhat skeptical of a couple of your assumptions in Mutual Anthropic Capture. Still, I think it's a good idea overall, and some subtle modifications would probably make it logically sound. I won't bother you with those small issues here, though; I'm more interested in your response to my concern above.