Sleeping Julia: Empirical support for thirder argument in the Sleeping Beauty Problem
post by Alex V (alex-v) · 2020-11-03T00:21:48.306Z · LW · GW · 13 comments
I've created an emulation of the Sleeping Beauty Problem in the Julia programming language which supports the thirder solution.
For those unfamiliar with the problem, I recommend this explanation by Julia Galef: https://www.youtube.com/watch?v=zL52lG6aNIY
In this post, I'll briefly explain the current status of this problem in academia, how the emulation works, and how we can formalize the intuitions gleaned from this experiment. Let's start with the code.
Originally I wrote this in Julia (hence the name), and that code can be found on GitHub: https://github.com/seisvelas/SleepingJulia/blob/main/sleeping.jl.ipynb
Here I'll do the same thing, but in Python, as that language is likely grokked by a broader audience of LessWrong readers. First, I create a class to run the experiment and track the state of various sleeping beauty experiments:
import random

class SleepingBeautyExperiment:
    def __init__(self):
        self.wakeups = 0
        self.bets = {
            'heads': {'win': 0, 'loss': 0},
            'tails': {'win': 0, 'loss': 0},
        }

    def run(self, bet):
        coin = ('heads', 'tails')
        coin_toss = random.choice(coin)
        win_or_loss = 'win' if coin_toss == bet else 'loss'
        # Monday wakeup (happens whatever the toss)
        self.wakeups += 1
        self.bets[bet][win_or_loss] += 1
        # Tuesday wakeup, in case of tails
        if coin_toss == 'tails':
            self.wakeups += 1
            self.bets[bet][win_or_loss] += 1

    def repeat(self, bet, times):
        for _ in range(times):
            self.run(bet)

    def reset(self):
        self.__init__()
I apologize for the lack of code highlighting. I tried to write code that self-documents as much as possible, but if I failed, just leave a comment and I'll clarify to the best of my ability. The key observation is that in the case of tails, we record SB's bet twice, once for each wakeup. That is, for every 100 experiments there will be roughly 150 wakeups. We don't care how many whole experiments SB summarily wins (if we did, though, the halfer interpretation would be the correct one!).
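As a quick sanity check of that claim, we can count wakeups directly (a small added sketch; the exact total varies from run to run):
check = SleepingBeautyExperiment()
check.repeat('heads', 100)
print(check.wakeups)  # roughly 150: one wakeup per heads experiment, two per tails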
Let's see the code in action:
>>> Sb = SleepingBeautyExperiment()
>>> Sb.repeat('heads', 1_000_000)  # trial count chosen here for illustration
>>> heads_wins = Sb.bets['heads']['win']
>>> heads_losses = Sb.bets['heads']['loss']
>>> heads_wins / (heads_wins + heads_losses)
0.33378682085242317
>>> # As a percentage, truncated to two decimal places:
>>> int(heads_wins / (heads_wins + heads_losses) * 10000) / 100
33.37
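The mirror-image check for tails bets lands near two thirds (same arbitrary trial count as above):
>>> Sb.reset()
>>> Sb.repeat('tails', 1_000_000)
>>> tails_wins = Sb.bets['tails']['win']
>>> tails_losses = Sb.bets['tails']['loss']
>>> tails_wins / (tails_wins + tails_losses)  # roughly 0.667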
There ya go, a nice dose of thirderism. In the lay rationality community the SB problem is often treated as an open debate, with larger and smaller Menshevik and Bolshevik factions, but this has not been the case for some time. I originally made these emulations to prove halferism, before reading up on academic work from decision theorists such as Silvia Milano, whose Bayesian Beauty paper is wonderful. In academia, the SB problem is resoundingly considered solved.
For a lay-level overview of the situation, we can do some Bayesian mini-algebra to summarize what halfers get wrong.
A = heads
B = SB awoken
Halfers believe these priors:
P(B | A) = 1
P(A) = 1/2
P(B) = 1
Therefore, following Bayes' theorem:
P(A | B) = (P(B | A) * P(A)) / P(B) = (1 * 1/2) / 1 = 1/2
But for every tails flip, SB is awoken twice (once on Monday, then again on Tuesday), so the expected number of wakeups per experiment is 1.5, therefore P(B) = 1.5. If we run the math again with this new prior:
P(A | B) = (P(B | A) * P(A)) / P(B) = (1 * 1/2) / 1.5 = .5 / 1.5 = 1/3
QED
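The same bookkeeping can be checked without any randomness (a small sketch of the counts behind that 1.5):
N = 1000                  # imagine N heads experiments and N tails experiments
heads_wakeups = N         # one Monday wakeup per heads experiment
tails_wakeups = 2 * N     # Monday and Tuesday wakeups per tails experiment
print((heads_wakeups + tails_wakeups) / (2 * N))        # 1.5 wakeups per experiment
print(heads_wakeups / (heads_wakeups + tails_wakeups))  # 1/3 of wakeups follow heads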
Readers learning about this problem from the rationality community or Wikipedia are given an outdated sense of the problem's openness. Perhaps some enterprising spirit would like to review the academic literature and give Wikipedia's article a makeover. I'll do it one day, if no one else beats me to it.
13 comments
Comments sorted by top scores.
comment by Donald Hobson (donald-hobson) · 2020-11-03T15:21:25.915Z · LW(p) · GW(p)
The question was never about what that particular piece of code did. It is about whether that code is a good interpretation of the problem.
But for every tails flip, SB is awoken twice (once on Monday, then again on Tuesday), so the expected number of wakeups per experiment is 1.5, therefore P(B) = 1.5
A halfer would question a probability that is >1. They would deny that the number of wakeups is important. They would point out that the answer would be 1/2 if asked after the experiment is over.
They would claim that the outcomes you should assign probability to are "heads" and "tails". It is about whether we should assign probability to observer moments, or to worlds that contain many observers.
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2020-11-03T08:45:23.367Z · LW(p) · GW(p)
In academia, the SB problem is resoundingly considered solved.
I am extremely skeptical. For one thing, you haven't provided any evidence for this claim; you just asserted it. You link to one forthcoming paper that argues for thirderism but you haven't given any reason to think academia resoundingly agrees with it.
For another, your argument here seems to be saying nothing new? Your claim that we should set P(B) = 1.5 is begging the question, no? Wouldn't halfers object to that part of the argument? (And for now at least I'm inclined to agree with them on that point --- probabilities are supposed to be between 0 and 1, by definition.)
comment by interstice · 2020-11-03T00:52:10.255Z · LW(p) · GW(p)
"Empirical" evidence of thirder/halfer positions doesn't seem to prove much, since it's inevitably based around a choice of decision problem/utility function with no deeper justification than the positions themselves.
I think a more useful attitude towards problems like this is that anthropic 'probability' is meaningless except in the context of a specific decision problem, see e.g. https://arxiv.org/abs/1110.6437. tl;dr, agents maximizing different types of utility function will behave as if having different 'probabilities', this explains conflicting answers on problems like this and some others.
↑ comment by Charlie Steiner · 2020-11-03T06:20:39.985Z · LW(p) · GW(p)
I agree on the first bit, but on the second, anthropic uncertainty can actually be resolved into regular uncertainty just fine - just treat yourself as "known," and the rest of the universe as unknown. When I'm not sure what time it is, I look at a clock - we could call this "locating myself," but we could equally well call it "locating the universe." Similarly, we can cash out Sleeping Beauty's anthropic uncertainty into empirical questions about the universe - e.g. what will I see if I look at the calendar? Or in the version with copies, is there a copy of me next door that I can go talk to?
These are perfectly good empirical questions, and all the standard reasons for why probabilities are good things to have (Cox's theorem in particular here) apply.
↑ comment by interstice · 2020-11-03T13:15:29.845Z · LW(p) · GW(p)
I'm not so sure -- what about cases where your sense experiences are perfectly compatible with 2 different underlying embeddings? You might say "you will eventually find some evidence disambiguating, just use your distribution over that", but that's begging the question -- different distributions over the future copies will report seeing different things. Also, you might need to make decisions before getting more evidence.
As a particularly clear example, take the simulation argument. I think that any approach to epistemology based on counting -- even Solomonoff induction -- would probably conclude from our experiences that we are in a future simulation of some sort. But it's still correct for us to make decisions as if we are in the 'base' universe, because that's where we can have the most influence.
ETA: if you're arguing that we will end up having some sort of distribution over our future experiences by Cox, that might be true -- I'm not sure. (What would you say about Beauty having 'effective probabilities' for the coin flip that change before and after going to sleep?) What I'm saying is that, even if that's the case, our considerations in creating this effective distribution will mainly be about the consequences of our actions, not epistemics.
↑ comment by Charlie Steiner · 2020-11-03T13:45:42.912Z · LW(p) · GW(p)
We can distinguish two cases - one case where there is some physical difference out there that you could find if you looked for it, and another case (e.g. trying to put probability distributions over what the universe looks like outside our lightcone) where you have different theories that really don't have any empirical consequence.
In the first case, I don't think it's begging the question at all to say that you should have some probability distribution over those future empirical results, because the best part of probability distributions is how they capture what we expect about future empirical results. This should not stop working just because there might be someone next door who has the same memories as me. And absolutely this is about epistemics. We can phrase the Sleeping Beauty problem entirely in terms of ordinary empirical questions about the outside world - if you can give me a probability distribution over what time it is and just call it ordinary belief, Sleeping Beauty can give me a probability distribution over what day it is and just call it ordinary belief. You can use the same reasoning process.
In the case where there is no empirical difference, then yes, I think it's ultimately about Solomonoff induction, which is significantly more subjective (allowing a choice of "programming language" that can change what you think is likely, with no empirical evidence to ever change your mind). But again this isn't about practical consequences. If we're in a simulation (I'm somewhat doubtful on the ancestor simulation premise, myself), I don't think the right answer is "somehow fool ourselves into thinking we're not in a simulation so we can take good actions." I'd rather correctly guess whether I'm in a simulation and then take good actions anyhow.
↑ comment by interstice · 2020-11-03T18:36:42.740Z · LW(p) · GW(p)
Sleeping Beauty can give me a probability distribution over what day it is and just call it ordinary belief
But the whole question is about how Beauty should decide on her probabilities before seeing any evidence, right? What I'm saying is that she should do that with reference to her intended goals (or just decide probabilities aren't useful in this context).
I'm taking a behaviorist/decision-theoretic view on probability here -- I'm saying that we can define an agent's probability distribution over worlds in terms of its decision function and utility function. An agent definitionally believes an event will occur with probability p if it will sacrifice a resource worth <p utilons to get a certificate paying out 1 utilon if the event comes to pass.
I’d rather correctly guess whether I’m in a simulation and then take good actions anyhow.
But what does 'correctly' actually mean here? It can't mean that we'll eventually see clear signs of a simulation, as we're specifically positing there are no observable differences. Does it mean 'the Solomonoff prior puts most of the weight for our experiences inside a simulation'? But we would only say this means 'correctly' because S.I. seems like a good abstraction of our normal sense of reality. But 'UDT, with a utility function weighted by the complexity of the world' seems like just as good of an abstraction, so it's not clear why we should prefer one or the other. (Note the 'effective probability' derived from UDT is not the same as the complexity weighting)
I actually think there is an interesting duality here -- within this framework, as moral actors agents are supposed to use UDT, but as moral patients they are weighted by Solomonoff probabilities. I suspect there's an alternative theory of rationality that can better integrate these two aspects, but for now I feel like UDT is the more useful of the two, at least for answering anthropic/decision problems.
↑ comment by Alex V (alex-v) · 2020-11-03T02:52:06.356Z · LW(p) · GW(p)
agents maximizing different types of utility function will behave as if having different 'probabilities', this explains conflicting answers on problems like this and some others.
But there is a real, objective probability that can be proven, and it has nothing to do with SB's subjective, anthropic probability. Rather, the number of wakeups is greater than the number of experiments. If we go based on the number of experiments (i.e., how many experiments SB entirely wins or loses), halfers would be right. If we go per wakeup, then thirders are right.
In the original formulation of the problem, we assume we are on an unknown wakeup and SB should bet. In that case, tails is more likely than heads. That's what this experiment hopes to demonstrate - that the Sleeping Beauty problem is not as much of a 'de se' problem as is often assumed, and that most of the anthropic principle stuff can be discarded.
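A minimal sketch contrasting the two scoring rules (the trial count and names here are illustrative):
import random

N = 100_000
experiment_wins = 0   # whole experiments won by a 'heads' bettor
wakeup_wins = 0       # individual wakeups won by a 'heads' bettor
wakeups = 0
for _ in range(N):
    toss = random.choice(('heads', 'tails'))
    n_wake = 1 if toss == 'heads' else 2
    wakeups += n_wake
    if toss == 'heads':
        experiment_wins += 1
        wakeup_wins += n_wake
print(experiment_wins / N)    # about 1/2: per-experiment (halfer) scoring
print(wakeup_wins / wakeups)  # about 1/3: per-wakeup (thirder) scoring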
There aren't really conflicting answers, if you check out more recent papers you'll find the consensus has solidified to the point where the debate is largely between 'thirderism is provably and objectively the correct position' and 'well there are still philosophical issues with that' and then new papers come out to address those issues, and so on.
At least, that's the overview I've gotten of the situation in the brief process of making these simulations, so please do school me if I'm totally wrong!
↑ comment by interstice · 2020-11-03T05:59:56.056Z · LW(p) · GW(p)
I haven't really followed what the consensus in academia is, I've mostly picked up my views by osmosis from LessWrong/thinking on my own.
My view is that probabilities are ultimately used to make decisions. In particular, we can define an agent's 'effective probability' that an event has occurred by the price at which it would buy a coupon paying out $1 if the event occurs. If Beauty is trying to maximize her expected income, her policy should be to buy a coupon for Tails for <$0.50 before going to sleep, and <$0.66 after waking (because her decision will be duplicated in the Tails world). You can also get different 'effective probabilities' if you are a total/average utilitarian towards copies of yourself (as explained in the paper I linked).
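A minimal sketch of that per-wakeup coupon policy (the quoted price is an illustrative stand-in):
# Policy: buy a $1 tails coupon at the quoted price at every wakeup.
price = 0.60
# Heads world (probability 1/2): one wakeup, the coupon pays nothing.
# Tails world (probability 1/2): two wakeups, each coupon pays $1.
expected_income = 0.5 * (0 - price) + 0.5 * 2 * (1 - price)
print(expected_income)  # positive exactly when price < 2/3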
Once you've accepted that one's 'effective probabilities' can change like this, you go on to throw away the notion of objective probability altogether (as an ontological primitive), and instead just figure out the best policy to execute for a given utility function/world. See UDT [LW · GW] for a quasi-mathematical formalization of this, as applied to a level IV multiverse.
But there is a real, objective probability that can be proven, and it has nothing to do with SB’s subjective, anthropic probability
But why? In this view, there doesn't have to be an 'objective probability', rather there are only different decision problems and the best algorithms to solve them.
comment by FeepingCreature · 2020-11-03T07:05:41.950Z · LW(p) · GW(p)
Easily solved: The original experiment tries to make it about anthropic bias by talking about probability, which is ambiguous. But simply pay Sleeping Beauty money based on the correctness of her answer, and the anthropic bias will disappear as it becomes clear that the only relevant question is "are you paying per question or for the whole experiment."
Beliefs should pay rent!
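A sketch of that payment split for a Beauty who always answers 'tails' (dollar amounts and trial count are illustrative):
import random

N = 100_000
paid_per_question = 0    # $1 for each correct answer, asked at every wakeup
paid_per_experiment = 0  # $1 if her (repeated) answer was correct, once per experiment
for _ in range(N):
    toss = random.choice(('heads', 'tails'))
    correct = toss == 'tails'
    questions = 1 if toss == 'heads' else 2
    paid_per_question += questions * correct
    paid_per_experiment += correct
print(paid_per_question / N)    # about 1.0 per experiment (an always-'heads' Beauty gets ~0.5)
print(paid_per_experiment / N)  # about 0.5 per experiment, the same as answering 'heads'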
comment by Ericf · 2020-11-03T04:56:09.722Z · LW(p) · GW(p)
If you taboo "Probability" you are left with two options: "Good morning. If we ran this experiment 100 times, how many times would we flip Tails?" and "Good morning. If we ran this experiment 100 times, how many times would the correct answer to the question 'what was the flip?' be Tails?"
I can see how it would make sense for the convention to be that "Probability" means the second thing. Is there any deeper reasoning for picking that one, other than "it makes the math easier"?
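One way to make those two taboo'd questions concrete in code (the run count is arbitrary):
import random

N = 100_000
tails_flips = sum(random.choice(('heads', 'tails')) == 'tails' for _ in range(N))
heads_flips = N - tails_flips
# Option 1: out of N runs, how many flips came up Tails?
print(tails_flips / N)  # about 1/2
# Option 2: of all the times the question gets asked (twice per Tails run),
# how often is 'Tails' the correct answer?
print(2 * tails_flips / (heads_flips + 2 * tails_flips))  # about 2/3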
comment by JonCB · 2020-11-03T11:02:18.671Z · LW(p) · GW(p)
In case it helps... i created a gist for the python code that should give some code highlighting help.
https://gist.github.com/JonCB/d582582d43ad60242dc47355ed86125d
comment by alahonua · 2020-11-04T06:47:37.149Z · LW(p) · GW(p)
I think that OP is confusing expected value with probability.
The expected value formula is the probability of an event multiplied by the number of times the event happens: (P(x) * n).
This explains the P(B) = 1.5 the OP put above -- he means the expectation is 1.5, because P(waking with any one coin flip result) = 1/2 and the number of times it occurs is 3.
So the halfers believe the expectation is 1/2 for waking with heads and 1/2 for waking with tails.
The thirders have the expected values right: 1 for waking with heads, 2 for waking with tails.
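Spelled out as "probability times number of occurrences" (a small sketch of that reading):
p_heads, p_tails = 0.5, 0.5              # probability of each coin result
wakeups_if_heads, wakeups_if_tails = 1, 2
expected_wakeups = p_heads * wakeups_if_heads + p_tails * wakeups_if_tails
print(expected_wakeups)  # 1.5, the quantity the post labels P(B)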