post by [deleted] · GW

comment by Vladimir_Nesov · 2023-04-05T22:03:25.604Z · LW(p) · GW(p)

In situations with multiple copies of an agent, or under some sort of anthropic uncertainty [LW · GW], asking for the probability of being in a certain situation is misleading; the morally real thing is the probability of those situations/worlds themselves (which acts as a degree of caring [LW · GW]), not the probability of being in them. And even that probability depends on your decisions [LW · GW], to the point where your decisions in some situations can make those situations impossible.
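
For concreteness, here is a minimal sketch of the distinction, using the standard Sleeping Beauty numbers (one awakening on heads, two on tails); this setup is an illustration I'm supplying, not something in the comment itself:

```python
# Sleeping Beauty as a minimal illustration (assumed setup, not from the comment).
# A fair coin fixes the probability of each world: HEADS -> 1 awakening,
# TAILS -> 2 awakenings. "Probability of the world" and "probability of being
# in a given awakening" are then two different quantities.

worlds = {
    "HEADS": {"prior": 0.5, "awakenings": 1},
    "TAILS": {"prior": 0.5, "awakenings": 2},
}

# Probability of the worlds themselves (the "degree of caring" quantity):
world_prob = {w: d["prior"] for w, d in worlds.items()}

# Indexical probability of *being in* an awakening of a given world
# (self-sampling over all awakenings across worlds, the "thirder" count):
total_weight = sum(d["prior"] * d["awakenings"] for d in worlds.values())
being_prob = {
    w: d["prior"] * d["awakenings"] / total_weight for w, d in worlds.items()
}

print(world_prob)  # {'HEADS': 0.5, 'TAILS': 0.5}
print(being_prob)  # {'HEADS': 0.333..., 'TAILS': 0.666...}
```

The two distributions answer different questions: the first is about which world is real, the second about where a copy happens to find itself. The comment's point is that the first is the decision-relevant one, and that even it can depend on your decisions.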

comment by amelia (314159) · 2023-04-06T06:37:16.480Z · LW(p) · GW(p)

That's a really useful distinction. Thank you for taking the time to make it! I also think I made it sound as though "simulator" worlds allow for objective morality. In actuality, I think a supra-universal reality might allow for simulator worlds, and it might also allow for objective morality (by some definitions of it), but the simulator worlds and the objective morality aren't directly related in their own right.

comment by Jeff White (jeff-white) · 2023-04-06T10:56:50.790Z · LW(p) · GW(p)

I think the best argument that we are in a simulation is mine: https://link.springer.com/article/10.1007/s00146-015-0620-9