A thought experiment
post by sisyphus (benj) · 2022-12-10T05:23:19.868Z · LW · GW · 10 comments
This is a question post.
Say you are about to flip a quantum coin, and the coin has an equal probability of coming up heads or tails.
If it comes up heads, a machine creates 1,000 simulations of you before flipping the coin (with the same subjective experience, so that you cannot tell whether you yourself are a simulation) and gives each of these simulations a lollipop after they flip the coin.
If it comes up tails, you get nothing.
Now, before you flip the coin, what is the probability that you will receive a lollipop?
Answers
There are multiple consistent answers here, because there are multiple ways to map measures onto events in ways that obey the probability axioms and are consistent with some reasonable meanings of the words in English. There are many points of ambiguity!
Most importantly, what exactly does the term "you" refer to in every instance where it is used in this scenario description?
Does it always refer to the same entity? For example, if the term "you" always means a physical entity, then the probability is zero, because in this scenario no physical entity ever receives a lollipop. (There are some simulated entities that falsely believe they are you and who experience receiving a lollipop, but they're irrelevant.)
Maybe it refers to any entity in an epistemic state immediately prior to flipping the coin in such a scenario? Then there may be 1 of these, 1001, or any other number depending upon how you count and what the rest of the universe or multiverse contains. For example, there are infinitely many Turing machines that simulate an entity in this epistemic state having subsequent experiences. I would expect that most of them by some complexity metric do not simulate the entity subsequently experiencing receipt of a lollipop. Should they be included in the probability calculation?
If the "you" can include entities being simulated, can the term "coin" include simulated coins? How are we to interpret "the coin has an equal probability of coming up heads or tails"? Is this true for simulated coins? Are they independent of one another and of whichever "you" is being discussed (some of which might not have any corresponding coins)?
So in short, what sample space are you using? Does it satisfy any particular symmetry properties that we can use to fill in the blanks "nicely"? Note that you can't just have all the nice properties, since some are inconsistent with the others.
I'm gonna be lazy and say:
If it comes up tails, you get nothing.
If that ^ is a given premise in this hypothetical, then we know for certain that this is not a simulation (because in a simulation, you'd still get something after tails). Therefore the probability of receiving a lollipop here is 0 (unless you receive one for a completely unrelated reason).
↑ comment by sisyphus (benj) · 2022-12-11T05:27:56.253Z · LW(p) · GW(p)
Sorry, but I think you may have misunderstood the question, since your answer doesn't make sense to me. The main problem I was puzzled about was whether the odds of getting a lollipop are 1:1 (as the fair coin's 50/50 odds would suggest) or 1001:1 (i.e. whether the simulations shift the self-location uncertainty). As shminux said, it is similar to the Sleeping Beauty problem, where self-location uncertainty is at play.
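To make the two counting conventions concrete, here is a minimal Monte Carlo sketch (hypothetical Python, assuming only the simulations receive lollipops, and that "you" is sampled either per run or from the pool of all instances across runs):

```python
import random

N = 100_000
per_run_hits = 0        # pick a run, then one instance uniformly within that run
pooled_instances = 0    # pool every instance across all runs...
pooled_hits = 0         # ...and count how many hold a lollipop

for _ in range(N):
    heads = random.random() < 0.5
    if heads:
        # 1 original (no lollipop) + 1000 simulations (one lollipop each)
        pooled_instances += 1001
        pooled_hits += 1000
        per_run_hits += random.random() < 1000 / 1001
    else:
        # only the original exists, and it gets nothing
        pooled_instances += 1

print("per-run sampling:     ", per_run_hits / N)                # ~0.4995, close to 1:1 odds
print("per-instance sampling:", pooled_hits / pooled_instances)  # ~0.998, heavily lollipop-favoured
```

Note that under full pooling the lone tails-world instance is also counted, so the lopsided answer lands nearer 500:1 than exactly 1001:1; dropping the coin entirely, as suggested in the comments below, gives 1000:1.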
10 comments
comment by Shmi (shminux) · 2022-12-10T19:08:53.319Z · LW(p) · GW(p)
Look up the Sleeping Beauty problem.
↑ comment by sisyphus (benj) · 2022-12-11T05:30:01.307Z · LW(p) · GW(p)
I see, thanks. IMO the two are indeed quite similar, but I think my example illustrates the problem of self-location uncertainty more clearly. That being said, what are your thoughts on the probability of getting a lollipop if you're in such a scenario? Are the odds 1:1 or 1001:1?
↑ comment by Shmi (shminux) · 2022-12-11T05:56:30.714Z · LW(p) · GW(p)
I don't see a difference between your scenario and 1000 of 1001 people randomly getting a lollipop, no coin flip needed, no simulation and no cloning.
↑ comment by sisyphus (benj) · 2022-12-11T06:11:57.681Z · LW(p) · GW(p)
Right, so your perspective is that, due to the multiple embeddings of yourself in the heads scenario, it is the 1001:1 option. That line of reasoning is roughly what I thought as well, but it goes against the 1:1 odds my intuition would suggest. I guess this is the same as the halfer vs. thirder debate, where 1:1 is the halfer position and 1001:1 is the thirder position.
↑ comment by sisyphus (benj) · 2022-12-11T06:13:57.972Z · LW(p) · GW(p)
I suppose the lollipops are indeed an unnecessary addition, so the final question can really be reframed as "what is the probability that you will see heads?"
↑ comment by Shmi (shminux) · 2022-12-11T07:01:23.925Z · LW(p) · GW(p)
You don't need a coin flip; I'm fine with lollipops randomly given to 1000 out of 1001 participants. This is not about "being in the heads scenario", this is an experimental result: assume you run a large number of experiments like that. The stipulation is that it is impossible to tell from the inside whether it is a simulation or the original, so one has to use the uniform prior.
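Under that reframing the arithmetic is just a uniform draw over indistinguishable participants (a trivial sketch under the same hypothetical setup):

```python
participants = 1001   # 1 original + 1000 copies, indistinguishable from the inside
lollipops = 1000      # 1000 of the 1001 participants receive a lollipop
print(lollipops / participants)   # ~0.999, i.e. odds of about 1000:1
```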
↑ comment by sisyphus (benj) · 2022-12-11T11:23:43.209Z · LW(p) · GW(p)
Ah I see. Sorry for not being too familiar with the lingo but does uniform prior just mean equal probability assigned to each possible embedding?
↑ comment by Shmi (shminux) · 2022-12-11T20:01:47.201Z · LW(p) · GW(p)
Not an expert either, hah. But yeah, what I meant is that the distribution is uniform over all instances, whether originals or copies, since there is no way to distinguish between them internally.
comment by Slider · 2022-12-11T11:56:27.495Z · LW(p) · GW(p)
The phrasing of the question can be taken to fight the hypothetical pretty hard.
If the mechanism is that the "machine looks at you and boots 1000 new instances with your structure", then one approach is to say that those newly booted instances are not you, so the probability of getting a lollipop is 0. "If it comes up heads... before flipping the coin" also makes for weird retrocausality.
If you are supposed to be uncertain who you are, then "machine creates 1000 instances of the source code" can't simply happen for each instance: you would get 1000×1000, except the new instances would also get the trigger, so 1000×1000×1000, and it would just keep recursing. In that reading the answer would be 1 except on a set of measure 0 (that is, "almost surely"; getting to be the original root is infinitesimally lucky).
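A quick back-of-the-envelope count (hypothetical Python, ignoring the coin as in the rough 1000×1000×1000 figure above) shows how fast the recursive reading swamps the original root:

```python
# Each level, every existing instance triggers the machine and spawns 1000 more.
counts = [1]                       # level 0: the original root
for _ in range(6):
    counts.append(counts[-1] * 1000)

total = sum(counts)
print("instances per level:", counts)
print("share that is the root: %.1e" % (1 / total))   # shrinks toward 0 as levels grow
```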
If it is not your instance's coin, then the procedure for which coin the machine reads is a bit murky. If only one coin is ever read, it will slightly break the situation's symmetry (which might be undetectable). The coin reading does not happen in the same timestream in which the instances experience it.