Posts

What's Wrong With the Simulation Argument? 2025-01-18T02:32:44.655Z

Comments

Comment by AynonymousPrsn123 on What's Wrong With the Simulation Argument? · 2025-01-20T14:49:15.528Z · LW · GW

Maybe. But what do you mean by "you can narrow nothing down other than pure logic"?

I interpret the first part—"you can narrow nothing down"—to mean that the simulation argument doesn't help us make sense of reality. But I don't understand the second part: "other than pure logic." Can you please clarify this statement?

Comment by AynonymousPrsn123 on What's Wrong With the Simulation Argument? · 2025-01-19T04:20:18.043Z · LW · GW

Thank you; I feel inclined to accept that for now.

But I'm still not sure, and I'll have to think more about this response at some point.

Edit: I'm still on board with what you're generally saying, but I feel skeptical of one claim:

It seems to me the main ones produce us via base physics, and then because there was an instance in base physics, we also get produced in neighboring civilizations' simulations of what other things base physics might have done in nearby galaxies so as to predict what kind of superintelligent aliens they might be negotiating with before they meet each other.

My intuition tells me there will probably be superior methods of gathering information about superintelligent aliens. To me, it seems like the most obvious reason to create sims would be to respect the past for some bizarre ethical reason, or for some weird kind of entertainment, or even to allow future aliens to temporarily live in a more primitive body. Or perhaps for a reason we have yet to understand.

I don't think any of these scenarios would really change the crux of your argument, but still, can you please justify your claim for my curiosity?

Comment by AynonymousPrsn123 on What's Wrong With the Simulation Argument? · 2025-01-18T13:29:29.392Z · LW · GW

I think I understand your point. I agree with you: the simulation argument relies on the assumption that physics and logic are the same inside and outside the simulation. In my eyes, that means we may either accept the argument's conclusion or discard that assumption. I'm open to either. You seem to be, too—at least at first. Yet, you immediately avoid discarding the assumption for practical reasons:

If we have no grasp on anything outside our virtualized reality, all is lost.

I agree with this statement, and that's my fear. However, you don't seem to be bothered by this fact. Why not? The strangest thing is that I think you agree with my claim: "The simulation argument should increase our credence that our entire understanding of everything is flawed." Yet somehow, that doesn't frighten you. What do you see that I don't see? Practical concerns don't change the territory outside our false world.

Second:

It seems to me the main reason is because we're near a point of high influence in original reality and they want to know what happened - the simulations then are effectively extremely high resolution memories.

That's surely possible, but I can imagine hundreds of other stories. In most of those stories, altruism from within the simulation has no effect on those outside it. Even worse, there are some stories in which inflicting pain within a simulation is rewarded outside of it. Here's a possible hypothetical:

Imagine humans in base reality create friendly AI. To respect their past, the humans ask the AI to create tons of sims living in different eras. Since some historical information was lost, the sims are slightly different from base reality. Therefore, in each sim, there's a chance AI never becomes aligned. Accounting for this possibility, base-reality humans decide to end sims in which AI becomes misaligned and replace those sims with paradise sims where everyone is happy.

In the above scenario, both total and average utilitarianism would recommend intentionally creating misaligned AI so that paradise ensues.
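To spell that out with toy quantities (the symbols $u_n$, $u_p$, $N$ are my own, just a sketch): let $u_n$ be the per-capita utility of an ordinary sim and $u_p \gg u_n$ the per-capita utility of a paradise sim, each with population $N$. If a misaligned sim is terminated and replaced by a paradise sim, then deliberately engineering misalignment yields

$$\text{total: } N u_p > N u_n, \qquad \text{average: } u_p > u_n,$$

so both flavors of utilitarianism endorse it.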

I'm sure you can craft even more plausible stories. 

My point is, even if our understanding of physics and logic is correct, I don't see why we ought to privilege the hypothesis that simulations are memories. I also don't see why we ought to privilege the idea that it's in our interest to increase utility within the simulation. Can you please clarify why you're so confident about these notions?

Thank you

Comment by AynonymousPrsn123 on What's Wrong With the Simulation Argument? · 2025-01-18T04:39:38.696Z · LW · GW

I have to say, quila, I'm pleasantly surprised that your response above is both plausible and logically coherent—qualities I couldn't find in any of the Reddit responses. Thank you.

However, I have concerns and questions for you.

Most importantly, I worry that if we're currently in a simulation, physics and even logic could be entirely different from what they appear to be. If all our senses are illusory, why should our false map align with the territory outside the simulation? A story like your "Mutual Anthropic Capture" offers hope: a logically sound hypothesis in which our understanding of physics is true. But why should it be? Believing that a simulation exactly matches reality sounds to me like a case of the privileging-the-hypothesis fallacy.

By the way, I'm also somewhat skeptical of a couple of your assumptions in Mutual Anthropic Capture. Still, I think it's a good idea overall, and some subtle modifications would probably make it logically sound. I won't bother you about those small issues here, though; I'm more interested in your response to my concern above.

Comment by AynonymousPrsn123 on [deleted post] 2025-01-08T00:37:17.914Z

This is an interesting observation which may well be true (I'm not sure), but the more intuitive difference is that SSA is about actually existing observers, while SIA is about potentially existing observers. In other words, if you are reasoning about possible realities in the so-called "multiverse of possibilities," then you are using SIA. Whereas if you are only considering a single reality (e.g., the non-simulated world) and you select a reference class from that reality (e.g., humans), you may choose to use SSA to say that you are a random observer from that class (e.g., a random human in human history).
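For reference, a rough formal version of that distinction (the standard Bayesian renderings, nothing original to me): writing $n_w$ for the number of observers in world $w$ whose evidence matches yours and $N_w$ for the total number of observers in $w$'s reference class,

$$\text{SIA: } P(w \mid E) \propto P(w)\, n_w, \qquad \text{SSA: } P(w \mid E) \propto P(w)\, \frac{n_w}{N_w}.$$

SIA up-weights worlds by how many observers they contain, merely possible ones included, while SSA only renormalizes within each world, which is the actual-vs-potential contrast above.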

Comment by AynonymousPrsn123 on [deleted post] 2025-01-08T00:11:30.360Z

You are describing the SIA assumption to a T.

Comment by AynonymousPrsn123 on [deleted post] 2025-01-07T23:51:10.612Z

This is what I was thinking:

If simulations exist, we are choosing between two potentially existing scenarios: either I'm the only real person in my simulation, or there are other real people in my simulation. Your argument prioritizes the latter scenario because it contains more observers, but these are potentially existing observers, not actual observers. SIA is for potentially existing observers.

I have a kind of intuition that something like my argument above is right, but tell me if that is unclear.

And note: one potential problem with your reasoning is that if we take it to its logical extreme, it would be 100% certain that we are living in a simulation with infinitely many invisible observers, because infinity dominates all the finite possibilities.
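Sketching that extreme case (my notation): if $H_N$ is "a simulation with $N$ invisible observers" and SIA weights hypotheses by observer count, then for any nonzero priors

$$\frac{P(H_M \mid E)}{P(H_N \mid E)} = \frac{P(H_M)}{P(H_N)} \cdot \frac{M}{N} \to \infty \quad \text{as } M \to \infty,$$

so a hypothesis with unboundedly many observers swallows all of the posterior.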

Comment by AynonymousPrsn123 on [deleted post] 2025-01-07T23:34:15.611Z

I think you are overlooking that your explanation requires BOTH SSA and SIA, but yes, I understand where you are coming from.

Comment by AynonymousPrsn123 on [deleted post] 2025-01-07T23:10:45.753Z

Other people here have responded in similar ways, but the problem with your argument is that my original argument could just as well consider only simulations in which I am the only observer. In that case, Pr(I'm distinct | I'm in a simulation) = 1, not 0.5. And since there's obviously some prior probability of this scenario being true, my argument still follows.
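To spell out that update (a sketch using the numbers already in this thread): let $S$ be the solo-simulation hypothesis with prior $p$, so that Pr(I'm distinct | $S$) = 1 and Pr(I'm distinct | not $S$) = 0.0001. Then

$$P(S \mid \text{distinct}) = \frac{p}{p + (1-p) \cdot 0.0001},$$

which is already about 0.99 for a prior as small as $p = 0.01$: a likelihood ratio of 10,000 overwhelms any modest prior.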

I now think my actual error is saying Pr(I'm distinct | I'm not in a simulation) = 0.0001, when in reality this probability should be 1, since I am not a random sample of all humans (i.e., SSA is wrong); I am me. Is that clear?

Lastly, your final paragraph is akin to the SSA + SIA response to the Doomsday argument, which I don't think is widely accepted, since both of those assumptions lead to a bunch of paradoxes.

Comment by AynonymousPrsn123 on [deleted post] 2025-01-07T19:51:52.324Z

True, but that wasn't my prior. My assumption was that if I'm in a simulation, there's quite a high likelihood that I would be made to be so 'lucky' as to be the highest on this specific dimension. Like a video game in which the only character has the most HP.

Comment by AynonymousPrsn123 on [deleted post] 2025-01-07T16:57:17.450Z

But, on second thought, why are you confident that the way I'd fill the bags is not "entangled with the actual causal process that filled these bags in a general case"? It seems likely that my sensibilities reflect at least in some manner the sensibilities of my creator, if such a creator exists.

Actually, in addition, my argument still works if we only consider simulations in which I'm the only human and I'm distinct (on my aforementioned axis) from other human-seeming entities. The 0.5 probability then becomes identically 1, and I sidestep your argument. So if I assign any non-zero prior to this theory whatsoever, the observation that I'm distinct makes this theory vastly more likely.

The only part of your comment I still agree with is that SIA and SSA may not be justified. Which means my actual error may have been to set Pr(I'm distinct | I'm not in a sim)=0.0001 instead of identically 1 — since 0.0001 assumes SSA. Does that make sense to you?

But thank you for responding to me; you are clearly an expert in anthropic reasoning, as I can see from your posts.

Comment by AynonymousPrsn123 on [deleted post] 2025-01-07T16:49:51.246Z

Thank you, Ape; this sounds right.

Comment by AynonymousPrsn123 on [deleted post] 2025-01-07T05:18:43.749Z

I don't understand. We should entertain the possibility because it is clearly possible (since it's unfalsifiable), because I care about it, because it can dictate my actions, etc. And the probability argument follows after specifying a reference class, such as "being distinct" or "being a presumptuous philosopher."

Comment by AynonymousPrsn123 on [deleted post] 2025-01-07T04:48:21.678Z

You are misinterpreting the Presumptuous Philosopher (PP) example. Consider the following two theories:

T1: I'm the only one that exists; everyone else is an NPC.

T2: Everything is as expected; I'm not simulated.

Suppose for simplicity that both theories are equally likely. (This assumption really doesn't matter.) If I define Presumptuous Philosopher = distinct human like myself = 1 in 10,000 humans, then I get that in most universes I am indeed the only one, but that, regardless, most copies of myself are not simulated.

Comment by AynonymousPrsn123 on [deleted post] 2025-01-07T04:04:38.167Z

I don't appreciate your tone, sir! Anyway, I've now realized that this is a variant on the standard Presumptuous Philosopher problem, which you can read about here if you are mathematically inclined: https://www.lesswrong.com/s/HFyami76kSs4vEHqy/p/LARmKTbpAkEYeG43u#1__Proportion_of_potential_observers__SIA

Comment by AynonymousPrsn123 on [deleted post] 2025-01-07T03:52:39.927Z

Thank you, Anon User. I thought a little more about the question and I now think it's basically the Presumptuous Philosopher problem in disguise. Consider the following two theories that are equally likely:

T1: I'm the only real observer

T2: I'm not the only real observer

For SIA, the ratio is 1 : (8 billion / 10,000) = 1 : 800,000, so indeed, as you said above, most copies of myself are not simulated.

For SSA, the ratio is instead 10,000 : 1, so in most universes in the "multiverse of possibilities," I am the only real observer.
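Spelling out where those ratios come from (assuming 8 billion humans and my 1-in-10,000 distinctness figure): under T2, about $8 \times 10^9 / 10^4 = 800{,}000$ humans share my evidence of being distinct, versus exactly one under T1, so

$$\text{SIA: } \frac{P(T_1 \mid E)}{P(T_2 \mid E)} = \frac{1}{800{,}000}, \qquad \text{SSA: } \frac{P(T_1 \mid E)}{P(T_2 \mid E)} = \frac{1}{1/10{,}000} = \frac{10{,}000}{1}.$$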

So it's just a typical Presumptuous Philosopher problem. Does this sound right to you?

Comment by AynonymousPrsn123 on [deleted post] 2025-01-07T03:06:54.757Z

Yes, okay, fair enough. I'm not certain about your claim in quotes, but neither am I certain about my claim, which you phrased well in your second paragraph. You have definitely answered this better than anyone else here.

But still, I feel like this problem is somehow similar to the Presumptuous Philosopher problem, and so there should be some anthropic reasoning to deduce which universe I'm likely in / how exactly to update my understanding.

Comment by AynonymousPrsn123 on [deleted post] 2025-01-07T02:10:44.738Z

I suspect it's quite possible to give a mathematical treatment of this question; I just don't know what that treatment is. I suspect it has to do with anthropics. Can't anthropics deal with different potential models of reality?

The second part of your answer isn't convincing to me, because I feel like it assumes we can understand the simulators and their motivations, when in reality we cannot (these may not be the future-human simulators philosophers typically think about, mind you; they could be so radically different that ordinary reasoning about their world doesn't apply). But anyway, this latter part of your argument, even if valid, only affects the quantitative part of the initial estimates, not the qualitative part, so I'm not particularly concerned with it.

Comment by AynonymousPrsn123 on [deleted post] 2025-01-07T01:21:35.457Z

That makes sense. But to be clear, it makes intuitive sense to me that the simulators would want to make their observers as 'lucky' as I am, so I assigned 0.5 probability to this hypothesis. Now I realize this is not the same as Pr(I'm distinct | I'm in a simulation), since there's some weird anthropic reasoning going on: only one side of this probability has billions of observers. But what would be the correct way of approaching this problem? Should I have divided 0.5 by 8 billion? That seems too much. What is the correct mathematical approach?

Comment by AynonymousPrsn123 on [deleted post] 2025-01-07T01:11:57.252Z

Good questions. Firstly, let's just take as an assumption that I'm very distinct — not just unique. In my calculation, I set Pr(I'm distinct | I'm not in a simulation) = 0.0001 to account for this (1 in 10,000 people), but honestly I think the real probability is much, much lower than this figure (maybe 1 in a million) — so I was even being generous to your point there.

To your second question, the reason why, in my simulator's earth, I imagine the chance of uniqueness to be larger is that if I'm in a simulation, then there could be what I will call "NPCs": people who seem to exist but are really just figments of my mind. (Whereas the probability of NPCs existing if I'm not in a simulation is basically 0.) At least that's my intuition. There might even be a way of formalizing that intuition; for example, saying that in a simulated world, the population of earth is only an upper bound on the number of "true observers" (the rest being NPCs), whereas in the real world, everyone is a "true observer." Is there something wrong with this intuition?

Comment by AynonymousPrsn123 on [deleted post] 2025-01-06T23:49:51.648Z

My argument didn't even make those assumptions. Nothing in my argument "falsified" reality, nor did I "prove" the existence of something outside my immediate senses. It was merely a probabilistic, anthropic argument. Are you familiar with anthropics? I want to hear from someone who knows anthropics well.

Indeed, your video game scenario is not even really qualitatively different from my own situation. Because if I were born with 1000 HP, you could still argue "data from within the 'simulation'...is not proof of something 'without'." And you could update your "scientific" understanding of the distribution of HP to account for the fact that precisely one character has 1000 HP.

The difference between my scenario and the video game one is merely quantitative: Pr(1000 HP | I'm not in a video game) < Pr(I'm a superlative | I'm not in a simulation), though both probabilities are very low.
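In update terms (a generic sketch, no specific numbers assumed): for evidence $E$,

$$\frac{P(\text{sim} \mid E)}{P(\lnot \text{sim} \mid E)} = \frac{P(\text{sim})}{P(\lnot \text{sim})} \cdot \frac{P(E \mid \text{sim})}{P(E \mid \lnot \text{sim})},$$

so the 1000 HP observation and the "I'm a superlative" observation generate the same kind of Bayes factor in favor of the simulation hypothesis; they differ only in how small $P(E \mid \lnot \text{sim})$ is.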