Comments sorted by top scores.
comment by avturchin · 2025-03-07T06:32:52.896Z · LW(p) · GW(p)
I explored similar ideas in these two posts:
Quantum Immortality: A Perspective if AI Doomers are Probably Right [LW · GW] - the idea that only good outcomes with a large number of observers matter, and that, under some interpretation of SIA, I am more likely to find myself now in a timeline that will carry me into a future with a large number of observers.
and Preventing s-risks via indexical uncertainty, acausal trade and domination in the multiverse [LW · GW] - here I explored the idea that benevolent superintelligences will try to win the measure war and aggregate as much measure as possible, thereby making bad outcomes anthropically irrelevant.
Replies from: Will_Pearson
↑ comment by Will_Pearson · 2025-03-07T13:30:06.796Z · LW(p) · GW(p)
Simulation makes things interesting too. Bad situations might be simulated for learning purposes.
comment by Mitchell_Porter · 2025-03-07T01:41:30.239Z · LW(p) · GW(p)
Does it have an argument in favor of SIA?