K-Bounded Sleeping Beauty Problem
post by Ape in the coat · 2024-11-26T17:53:26.364Z · LW · GW · 8 comments
UPD: As mentioned in the comments I've misrepresented thirders reasoning, due to a mathematical mistake. So this post isn't valid. Shame on me! Will keep it till tomorrow as motivation to double check my reasoning in the future
Introduction
I've spent quite some time arguing that Thirdism in Sleeping Beauty is incorrect, appealing to theoretical properties of probability functions and sample spaces. While this has successfully dissolved the confusion for some people, for many others it hasn't. So in this post I'm taking a different approach.
I've used some betting arguments before, but even they turned out to be too convoluted. Thankfully, I've finally come up with the clearest case: a betting argument that demonstrates that trying to apply Thirder reasoning, according to which one should assume that they are experiencing a random awakening among all possible awakenings - also known as the Self Indexing Assumption - leads to bizarre behavior.
Generalizing Sleeping Beauty
Consider the problem that I'm calling K-Bounded Sleeping Beauty:
You are put to sleep. A fair coin is tossed either until it comes up Heads, or until k Tails in a row are produced. You receive 2^n indistinguishable awakenings, where n is the number of times the coin came up Tails. After each awakening you are put back to sleep and receive an amnesia drug which makes you completely forget that you had this awakening. You are awakened during such an experiment. What is your credence that in this experiment the coin was tossed k times and the outcome of the k-th toss is Tails?
While the formulation may initially appear complex, it's nothing more than a certain generalization of Sleeping Beauty.
Specifically, when k=1, K-Bounded Sleeping Beauty reduces to the regular Sleeping Beauty problem, where the coin is tossed only once and there are either 1 or 2 awakenings.
As we know, for it Thirders claim that:

P(Heads) = 1/3
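For the k=1 case this is easy to sanity-check in code. A minimal simulation (my sketch, not from the post) of the long-run proportion of Heads-awakenings among all awakenings:

```python
from random import Random

rng = Random(0)  # fixed seed for reproducibility
heads_awakenings = tails_awakenings = 0
for _ in range(100_000):
    if rng.random() < 0.5:
        heads_awakenings += 1   # Heads: one awakening
    else:
        tails_awakenings += 2   # Tails: two awakenings
fraction = heads_awakenings / (heads_awakenings + tails_awakenings)
print(fraction)  # close to 1/3
```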
Now let's look at more interesting cases.
When k=2, the possible outcomes of the coin tosses are H, TH and TT, leading to 1, 2 or 4 awakenings correspondingly.
A Thirder, reasoning about such a problem, would think that the sample space consists of 7 equiprobable outcomes, corresponding to the awakening states: H1, TH1, TH2, TT1, TT2, TT3, TT4.
Therefore:

P(TT) = 4/7
When k=3, the possible outcomes of the coin tosses are H, TH, TTH and TTT, leading to either 1, 2, 4 or 8 awakenings.
By the same logic as previously, a Thirder would think that:

P(TTT) = 8/15
And so on.
In the general case, according to Thirder reasoning:

P(k-th toss happened and is Tails) = 2^k / (2^(k+1) - 1)
Which, as we may notice, is more than 1/2 for every natural k.
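The formula above is easy to tabulate. A quick sketch (my addition, following the post's calculation):

```python
# Thirder credence as computed in this post: 2^k awakening states where the
# k-th toss is Tails, out of 1 + 2 + ... + 2^k = 2^(k+1) - 1 equiprobable states.
def thirder_credence(k):
    return 2**k / (2**(k + 1) - 1)

for k in range(1, 6):
    print(k, thirder_credence(k))  # 2/3 for k=1, 4/7 for k=2, 8/15 for k=3, ...
```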
This means that for any value of k, a Thirder participating in K-Bounded Sleeping Beauty would gladly accept a bet on every awakening at 1:1 odds that the k-th coin toss happened and is Tails.
Which is a terrible idea.
Demonstration
Here is the implementation of K-Bounded Sleeping Beauty in python:
from random import random

def k_bounded_sleeping_beauty(k):
    coin = None
    coins = []
    num_tails = 0
    # Toss until Heads comes up, or until k Tails in a row are produced
    while coin != 'Heads' and num_tails < k:
        coin = 'Heads' if random() < 0.5 else 'Tails'
        coins.append(coin)
        if coin == 'Tails':
            num_tails = num_tails + 1
    # Every Tails doubles the number of awakenings
    num_awakenings = pow(2, num_tails)
    return coins, num_tails, num_awakenings
And here is an iterated implementation of such betting for k=10:
score = 0
k = 10
n = 10000
for i in range(n):
    coins, num_tails, num_awakenings = k_bounded_sleeping_beauty(k)
    if num_tails == k:
        score = score + num_awakenings
    else:
        score = score - num_awakenings
print(score/n) # -4.01031
As we can see, betting $1 per awakening at 1:1 odds results, on average, in about $4 lost per experiment.
The actual odds for this kind of bet are 1:5. They can be calculated using the correct Halfer model [LW · GW] for the Sleeping Beauty problem, in a similar manner to what I do here [LW · GW].
In the general case, the expected number of winning awakenings per experiment is 2^(-k) * 2^k = 1, while the expected number of losing awakenings is the sum over n from 0 to k-1 of 2^(-(n+1)) * 2^n = k/2.
Which corresponds to 2:k odds.
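The 2:k figure can be checked analytically. A sketch (my addition) computing, for $1 bets at 1:1 odds, the expected gain per experiment, using exact arithmetic:

```python
from fractions import Fraction

def expected_gain_per_experiment(k):
    """Expected dollars gained per experiment, betting $1 per awakening
    at 1:1 odds that the k-th toss happened and came up Tails."""
    # all k tosses Tails (prob 2^-k): win on all 2^k awakenings
    win = Fraction(1, 2**k) * 2**k
    # n Tails then Heads (prob 2^-(n+1)): lose on all 2^n awakenings
    lose = sum(Fraction(1, 2**(n + 1)) * 2**n for n in range(k))
    return win - lose  # = 1 - k/2

print(expected_gain_per_experiment(10))  # -4, matching the simulation above
```

Setting the payout to k/2 dollars per $1 staked makes this expectation zero, which is exactly the 2:k odds.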
Unbounded Sleeping Beauty
Even more bizarre behavior happens when we remove the restriction on the number of Tails in the row.
You are put to sleep. A fair coin is tossed until it comes up Heads. You receive 2^n indistinguishable awakenings, where n is the number of times the coin came up Tails. After each awakening you are put back to sleep and receive an amnesia drug which makes you completely forget that you had this awakening. You are awakened during such an experiment. What is your credence that in this experiment the coin was tossed k times and the outcome of the k-th toss is Tails?
Under such conditions, a thirder would believe, with certainty approaching 100%, that for any arbitrarily large number, that many coin tosses have happened and the last one came up Tails, therefore predictably losing all their money in a single probability experiment.
I was initially going to talk about this case, but it allows too much wiggle room for Thirders to rationalize the absurdity of their behavior by appealing to the general weirdness of infinities. Thankfully, K-Bounded Sleeping Beauty doesn't present such an opportunity.
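One way to make the infinity problem concrete (a sketch of my own, not from the post): under awakening-weighting, every outcome "n Tails then Heads" gets the same unnormalized weight, so the total weight grows without bound and no proper distribution exists:

```python
from fractions import Fraction

# Unnormalized Thirder weight for the outcome "n Tails then Heads":
# prior 2^-(n+1) times 2^n awakenings = 1/2, independently of n.
weights = [Fraction(1, 2**(n + 1)) * 2**n for n in range(20)]
print(weights[0], weights[19])  # both 1/2
print(sum(weights))             # 10; grows without bound as more outcomes are included
```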
Conclusion
I believe this presents a quite devastating case against Thirdism in Sleeping Beauty in particular, and against the Self Indexing Assumption, as applied to problems involving amnesia, in general. This kind of reasoning simply doesn't generalize, leading to clearly wrong credence estimates in similar problems.
8 comments
Comments sorted by top scores.
comment by WilliamKiely · 2024-11-26T19:15:16.101Z · LW(p) · GW(p)
I'm a halfer, but think you did your math wrong when calculating the thirder view.
The thirder view is that the probability of an event happening is the experimenter's expectation of the proportion of awakenings where the event happened.
So for your setup, with k=2:
There are three possible outcomes: H, HT, and TT.
H happens in 50% of experiments, HT happens in 25% and TT happens in 25%.
When H happens there is 1 awakening, when HT happens there are 2 awakenings, and when TT happens there are 4 awakenings.
We'll imagine that the experiment is run 4 times, and that H happened in 2 of them, HT happened once, and TT happened once. This results in 2*1=2 H awakenings, 1*2=2 HT awakenings, and 1*4=4 TT awakenings.
Therefore, H happens in 2/(2+2+4)=25% of awakenings, HT happens in 25% of awakenings, and TT happens in 50% of awakenings.
The thirder view is thus that upon awakening Beauty's credence that the coin came up heads should be 25%.
What is you [sic] credence that in this experiment the coin was tossed k times and the outcome of the k-th toss is Tails?
Answering your question, the thirder view is that there was a 6/8=75% chance the coin was tossed twice, and a 4/6 chance that the second toss was a tails conditional on it being the case that two tosses were made.
Unconditionally, the thirder's credence is 4/8=50% chance that it is both true that the coin was tossed two times and that the second toss was a tails.
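WilliamKiely's k=2 numbers can be reproduced exactly (a sketch using his definition of the thirder credence as the expected proportion of awakenings; outcome labels follow his comment):

```python
from fractions import Fraction

# prior probability and number of awakenings for each k=2 outcome
outcomes = {'H': (Fraction(1, 2), 1),
            'HT': (Fraction(1, 4), 2),
            'TT': (Fraction(1, 4), 4)}

# Thirder credence: weight each outcome by prior * awakenings, then normalize
weights = {o: p * a for o, (p, a) in outcomes.items()}
total = sum(weights.values())
credences = {o: w / total for o, w in weights.items()}
print(credences)  # H: 1/4, HT: 1/4, TT: 1/2
```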
↑ comment by Ape in the coat · 2024-11-26T19:38:11.262Z · LW(p) · GW(p)
Thank you!
It seems that I've been sloppy and therefore indeed misrepresented thirders reasoning here. Shame on me. Will keep this post available till tomorrow, as a punishment for myself and then back to the drawing board.
comment by simon · 2024-11-26T18:22:09.528Z · LW(p) · GW(p)
I've been trying to make this comment a bunch of times, no quotation from the post in case that's the issue:
No, a thirder would not treat those possibilities as equiprobable. A thirder would instead treat the coin toss outcome probabilities as a prior, and weight the possibilities accordingly. Thus H1 would be weighted twice as much as any of the individual TH or TT possibilities.
↑ comment by Ape in the coat · 2024-11-26T18:32:04.273Z · LW(p) · GW(p)
A thirder would instead treat the coin toss outcome probabilities as a prior, and weight the possibilities accordingly
But then they will "update on awakening" and therefore weight the probabilities of each event by the number of awakenings that happen in them.
Every next Tails outcome decreases the probability two-fold, but it's immediately compensated by the fact that twice as many awakenings happen when this outcome is Tails.
↑ comment by simon · 2024-11-26T18:51:26.960Z · LW(p) · GW(p)
Hmm, you're right. Your math is wrong for the reason in my above comment, but the general form of the conclusion would still hold with different, weaker numbers.
The actual, more important issue relates to the circumstances of the bet:
If each awakening has an equal probability of receiving the bet, then receiving it doesn't provide any evidence to Sleeping Beauty, but the thirder conclusion is actually rational in expectation, because the bet occurs more times in the high-awakening cases.
If the bet would not be provided equally to all awakenings, then a thirder would update on receiving the bet.
↑ comment by Ape in the coat · 2024-11-26T19:08:28.584Z · LW(p) · GW(p)
Your math is wrong for the reason in my above comment
What exactly is wrong? Could you explicitly show my mistake?
If each awakening has an equal probability of receiving the bet, then receiving it doesn't provide any evidence to Sleeping Beauty, but the thirder conclusion is actually rational in expectation, because the bet occurs more times in the high-awakening cases.
The bet is proposed on every actual awakening, so indeed no update upon its receiving. However this "rational in expectation" trick doesn't work anymore as shown by the betting argument. The bet does occur more times in high-awakening cases but you win the bet only when the maximum possible awakening happened. Until then you lose, and the closer the number of awakenings to the maximum, the higher the loss.
↑ comment by WilliamKiely · 2024-11-26T19:18:32.653Z · LW(p) · GW(p)
What exactly is wrong? Could you explicitly show my mistake?
See my top-level comment.
comment by Rafael Harth (sil-ver) · 2024-11-26T19:17:16.617Z · LW(p) · GW(p)
As simon has already said [LW(p) · GW(p)], your math is wrong because the cases aren't equiprobable. For k=2 you can fix this by doubling the cases of H since they're twice as likely as the others (so the proper state space has 8 equally weighted outcomes, with probability 4/8 = 1/2 of TT.) For k=3 you'd have to quadruple H and double HT, which would give 4x H and 4x HT and 4x HTT and 8x TTT I believe, leading to a probability of 8/20 = 2/5 of TTT. (Up from 1x H, 2x HT, 4x HTT, 8x TTT.) In general, I believe the probability of only Ts approaches 0, as does the probability of H.
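These claims can be checked with the same prior-times-awakenings weighting (my sketch; the all-Tails credence works out to 2/(k+2), which goes to 0 as k grows):

```python
from fractions import Fraction

def all_tails_credence(k):
    # n Tails then Heads (n < k): weight 2^-(n+1) * 2^n = 1/2 each;
    # k Tails in a row: weight 2^-k * 2^k = 1
    total = k * Fraction(1, 2) + 1
    return Fraction(1) / total  # = 2/(k+2)

print(all_tails_credence(2))  # 1/2
print(all_tails_credence(3))  # 2/5
```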
Regardless, are these the right betting odds? Yup! If we repeat this experiment for any k and you are making a bet every time you wake up, then these are the odds according to which you should take or reject bets to maximize profit. You can verify this by writing a simulation, if you want.
If you make the experiment non-repeating, then I think this is just a version of the presumptuous philosopher argument which (imo) shows that you have to treat logical uncertainty differently from randomness (I addressed this case here [LW(p) · GW(p)]).