This is an interesting observation which may well be true, I'm not sure, but the more intuitive difference is that SSA is about actually existing observers, while SIA is about potentially existing observers. In other words, if you are reasoning about possible realities in the so-called "multiverse of possibilities," then you are using SIA. Whereas if you are only considering a single reality (e.g., the non-simulated world) and selecting a reference class from that reality (e.g., humans), you may choose to use SSA to say that you are a random observer from that class (e.g., a random human in human history).
You are describing the SIA assumption to a T.
This is what I was thinking:
If simulations exist, we are choosing between two potentially existing scenarios: either I'm the only real person in my simulation, or there are other real people in my simulation. Your argument prioritizes the latter scenario because it contains more observers, but these are potentially existing observers, not actual observers. SIA is for potentially existing observers.
I have a kind of intuition that something like my argument above is right, but tell me if that is unclear.
And note: one potential problem with your reasoning is that if we take it to its logical extreme, it would be 100% certain that we are living in a simulation with infinite invisible observers, because infinity dominates all the finite possibilities.
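To illustrate that "infinity dominates" worry with a toy calculation (a rough sketch only; the priors and observer counts below are made-up illustrative numbers, and the weighting is the usual SIA multiply-by-observer-count rule):

```python
# Toy illustration of SIA-style observer-weighting: multiply each
# hypothesis's prior by its observer count, then renormalize.
# All numbers here are made up for illustration.
def sia_posterior(priors, observer_counts):
    weights = [p * n for p, n in zip(priors, observer_counts)]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothesis A: ordinary world, ~8 billion observers, prior 0.999.
# Hypothesis B: simulation packed with invisible observers, prior 0.001.
for n_b in (1e12, 1e18, 1e24):
    p_a, p_b = sia_posterior([0.999, 0.001], [8e9, n_b])
    print(f"observers in B = {n_b:.0e}  =>  P(B) = {p_b:.6f}")
# As B's observer count grows without bound, P(B) tends to 1,
# no matter how small its prior.
```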
I think you are overlooking that your explanation requires BOTH SSA and SIA, but yes, I understand where you are coming from.
Other people here have responded in similar ways to you; but the problem with your argument is that my original argument could instead consider only simulations in which I am the only observer. In which case Pr(I'm distinct | I'm in a simulation)=1, not 0.5. And since there's obviously some prior probability of such a simulation being true, my argument still follows.
I now think my actual error is saying Pr(I'm distinct | I'm not in a simulation)=0.0001, when in reality this probability should be 1, since I am not a random sample of all humans (i.e., SSA is wrong); I am me. Is that clear?
Lastly, your final paragraph is akin to the SSA + SIA response to the Doomsday argument, which I don't think is widely accepted, since both of those assumptions lead to a bunch of paradoxes.
True, but that wasn't my prior. My assumption was that if I'm in a simulation, there's quite a high likelihood that I would be made so 'lucky' as to be the highest on this specific dimension. Like a video game in which the only character has the most HP.
But, on second thought, why are you confident that the way I'd fill the bags is not "entangled with the actual causal process that filled these bags in a general case?" It seems likely that my sensibilities reflect at least in some manner the sensibilities of my creator, if such a creator exists.
Actually, in addition, my argument still works if we only consider simulations in which I'm the only human and I'm distinct (on my aforementioned axis) from other human-seeming entities. In that case the 0.5 probability becomes identically 1, and I sidestep your argument. So if I assign any non-zero prior to this theory whatsoever, the observation that I'm distinct makes this theory way way way more likely.
The only part of your comment I still agree with is that SIA and SSA may not be justified. Which means my actual error may have been to set Pr(I'm distinct | I'm not in a sim)=0.0001 instead of identically 1 — since 0.0001 assumes SSA. Does that make sense to you?
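To make that last point concrete, here is a rough sketch of the update I have in mind (the prior on the simulation theory is purely an illustrative placeholder; Pr(I'm distinct | I'm in a sim)=1 comes from restricting to the only-real-observer simulations as above, and the two values for Pr(I'm distinct | I'm not in a sim) are the ones discussed in this thread):

```python
# Bayes' rule for Pr(simulated | I'm distinct), sketched with the
# numbers discussed above. The prior p_sim is purely illustrative.
def posterior_sim(p_sim, p_distinct_given_sim, p_distinct_given_not_sim):
    numerator = p_distinct_given_sim * p_sim
    denominator = numerator + p_distinct_given_not_sim * (1 - p_sim)
    return numerator / denominator

p_sim = 0.01  # illustrative prior on "I'm in a simulation"

# With SSA: Pr(distinct | not sim) = 0.0001, so the update is enormous.
print(posterior_sim(p_sim, 1.0, 0.0001))  # ~0.99

# Without SSA (I am just me): Pr(distinct | not sim) = 1, so no update.
print(posterior_sim(p_sim, 1.0, 1.0))     # 0.01, unchanged from the prior
```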
But thank you for responding to me; you are clearly an expert in anthropic reasoning, as I can see from your posts.
Thank you Ape, this sounds right.
I don't understand. We should entertain the possibility because it is clearly possible (since it's unfalsifiable), because I care about it, because it can dictate my actions, etc. And the probability argument follows after specifying a reference class, such as "being distinct" or "being a presumptuous philosopher."
You are misinterpreting the PP example. Consider the following two theories:
T1: I'm the only one that exists; everyone else is an NPC
T2: Everything is as expected; I'm not simulated.
Suppose for simplicity that both theories are equally likely. (This assumption really doesn't matter.) If I define Presumptuous Philosopher = distinct human like myself = 1 in 10,000 humans, then I get that in most universes I am indeed the only one, but regardless, most copies of myself are not simulated.
I don't appreciate your tone, sir! Anyway, I've now realized that this is a variant on the standard Presumptuous Philosopher problem, which you can read about here if you are mathematically inclined: https://www.lesswrong.com/s/HFyami76kSs4vEHqy/p/LARmKTbpAkEYeG43u#1__Proportion_of_potential_observers__SIA
Thank you Anon User. I thought a little more about the question and I now think it's basically the Presumptuous Philosopher problem in disguise. Consider the following two theories that are equally likely:
T1: I'm the only real observer
T2: I'm not the only real observer
For SIA, the ratio is 1 : (8 billion / 10,000) = 1 : 800,000, so indeed, as you said above, most copies of myself are not simulated.
For SSA, the ratio is instead 10,000 : 1, so in most universes in the "multiverse of possibilities", I am the only real observer.
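To spell out the arithmetic (a rough sketch only, using the same 8 billion and 1-in-10,000 figures as above, with the usual SIA and SSA weightings):

```python
# The two odds ratios above, computed from the same figures:
# 8 billion humans, "distinct" = 1 in 10,000.
population = 8_000_000_000
distinct_rate = 1 / 10_000

# SIA: weight each theory by how many observers in it share my evidence
# ("I'm distinct"): just me under T1, the distinct fraction of everyone under T2.
sia_t1 = 1
sia_t2 = population * distinct_rate
print(f"SIA odds, T1:T2 = 1:{sia_t2 / sia_t1:,.0f}")   # 1:800,000

# SSA: within each theory, ask how likely a random real observer is to be
# distinct: certain under T1, 1 in 10,000 under T2.
ssa_t1 = 1.0
ssa_t2 = distinct_rate
print(f"SSA odds, T1:T2 = {ssa_t1 / ssa_t2:,.0f}:1")   # 10,000:1
```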
So it's just a typical Presumptuous Philosopher problem. Does this sound right to you?
Yes, okay, fair enough. I'm not certain about your claim in quotes, but neither am I certain about my claim, which you phrased well in your second paragraph. You have definitely answered this better than anyone else here.
But still, I feel like this problem is somehow similar to the Presumptuous Philosopher problem, and so there should be some anthropic reasoning to deduce which universe I'm likely in / how exactly to update my understanding.
I suspect it's quite possible to give a mathematical treatment of this question; I just don't know what that treatment is. I suspect it has to do with anthropics. Can't anthropics deal with different potential models of reality?
The second part of your answer isn't convincing to me, because I feel like it assumes we can understand the simulators and their motivations, when in reality we cannot (these may not be the future-human simulators philosophers typically think about, mind you; they could be so radically different that ordinary reasoning about their world doesn't apply). But anyway, this latter part of your argument, even if valid, only affects the quantitative part of the initial estimates, not the qualitative part, so I'm not particularly concerned with it.
That makes sense. But to be clear, it makes intuitive sense to me that the simulators would want to make their observers as 'lucky' as I am, so I assigned 0.5 probability to this hypothesis. Now I realize this is not the same as Pr(I'm distinct | I'm in a simulation), since there's some weird anthropic reasoning going on, given that only one side of this probability involves billions of observers. But what would be the correct way of approaching this problem? Should I have divided 0.5 by 8 billion? That seems too much. What is the correct mathematical approach?
Good questions. Firstly, let's just take as an assumption that I'm very distinct, not just unique. In my calculation, I set Pr(I'm distinct | I'm not in a simulation)=0.0001 to account for this (1 in 10,000 people), but honestly I think the real probability is much, much lower than this figure (maybe 1 in a million), so I was even being generous to your point there.
To your second question, the reason why, in my simulator's earth, I imagine the chance of uniqueness to be larger is that if I'm in a simulation, then there could be what I will call "NPCs": people who seem to exist but are really just figments of my mind. (Whereas the probability of NPCs existing if I'm not in a simulation is basically 0.) At least that's my intuition. There might even be a way of formalizing that intuition; for example, saying that in a simulated world, the population of earth is only an upper bound on the number of "true observers" (the rest being NPCs), whereas in the real world, everyone is a "true observer." Is there something wrong with this intuition?
My argument didn't even make those assumptions. Nothing in my argument "falsified" reality, nor did I "prove" the existence of something outside my immediate senses. It was merely a probabilistic, anthropic argument. Are you familiar with anthropics? I want to hear from someone who knows anthropics well.
Indeed, your video game scenario is not even really qualitatively different from my own situation. Because if I were born with 1000 HP, you could still argue "data from within the 'simulation'...is not proof of something 'without'." And you could update your "scientific" understanding of the distribution of HP to account for the fact that precisely one character has 1000 HP.
The difference between my scenario and the video game one is merely quantitative: Pr(1000 HP | I'm not in a video game) < Pr(I'm a superlative | I'm not in a simulation), though both probabilities are very low.