The Closed Eyes Argument For Thirding

post by omnizoid · 2024-03-31T22:11:44.036Z · LW · GW · 22 comments

Don't you know when your eyes are closed
You see the world from the clouds along with everybody else?
Don't you know when your eyes are closed
You see the world from the clouds along with everybody else?

Close Your Eyes by The Midnight Club

"In her house at R'lyeh sleeping beauty waits dreaming."

Crossposted on my blog.  

[Image: Monstrum | How Cthulhu Transcended its Creator, H.P. Lovecraft | PBS]

The sleeping beauty problem is one of the most hotly debated topics in decision theory. It's one of those topics, like Newcomb's problem, where everyone seems to find their answer obvious, yet people don't agree about it. The first paper on it (which, in my view, settled the issue) was by Adam Elga, who described it as follows:

The Sleeping Beauty problem: Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

There are two main answers: 1/2 and 1/3. Halfers say that you should start out 50/50 before being put to sleep, and because you'll wake up either way, waking up tells you nothing, so you should remain 50/50 (this is false for a rather subtle reason). Thirders say that, because there are twice as many wakings if the coin comes up tails as if it comes up heads, upon waking up you should think tails is twice as likely as heads.
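
To make the two counts concrete, here is a minimal simulation sketch (illustrative Python; my toy example, not anything from the literature). It flips the coin many times, then reports the halfer's number (the per-experiment frequency of heads) and the thirder's number (the per-awakening frequency of heads):

    import random

    trials = 100_000
    heads_trials = 0          # experiments in which the coin lands heads
    heads_awakenings = 0      # awakenings occurring inside a heads experiment
    total_awakenings = 0

    for _ in range(trials):
        heads = random.random() < 0.5
        wakings = 1 if heads else 2    # Heads: once; Tails: twice
        total_awakenings += wakings
        if heads:
            heads_trials += 1
            heads_awakenings += wakings

    print(heads_trials / trials)                # ~0.50: frequency per experiment
    print(heads_awakenings / total_awakenings)  # ~0.33: frequency per awakening

Both frequencies are real; the dispute is over which of them your credence upon waking should track.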

Now imagine that we modify the scenario slightly. You're put to sleep and a fair coin is flipped. If it comes up heads, you'll be woken once in the laboratory, put back to sleep, moved to your own bed, have your memory erased, and then be woken a second time at home. If it comes up tails, you'll be woken in the laboratory, put back to sleep, have your memory erased, and then be woken a second time in the laboratory. Suppose that you wake up in the lab—what should your credence be that the coin came up tails?

I submit that halfers in the original sleeping beauty problem should say 1/2 here too. After all, both hypotheses predict with equal confidence that you'll wake up in the lab—you haven't learned anything new. Furthermore, in the original sleeping beauty problem, presumably once you're no longer woken in the lab, you're sent home. So halfers in the original problem already think that if you will be sent home on the second day, then after waking up in lab conditions you should give 50% credence to the coin having come up tails. The only difference between that case and this one is that here, after you're sent home and go to sleep, your memory is erased. But surely that shouldn't make a difference—whether you wake up in your bed with memories or without them on the second day, you still have experiences incompatible with the coin having come up tails. If heads means you'll wake up twice, one of those times in a way incompatible with tails, it shouldn't matter what the second wakeup looks like, so long as it remains incompatible with tails.

So from the halfer view it follows that, in the scenario where on heads you're put to sleep in your own room and woken there without memories on the second day, if you wake up in the lab you should think there's a 50% chance the coin came up tails. Now let me show why you shouldn't think that, and so why the halfer view must be false.

Imagine that when you wake up, before you know which room you're in, you think about anthropics with your eyes closed. You reason: both hypotheses predict that I'll wake up twice, so awakening gives me no evidence either way, and because my eyes are closed, I don't know whether I'm in my room or in the lab. Right now, then, I should think there's a 50% chance that the coin came up tails. However, if the coin came up tails, I must be in the lab room, while if it came up heads, there's only a 50% chance I'm in the lab room now. So if I am in the lab room, I should think there's a 2/3 chance that the coin came up tails.

Then you open your eyes and find yourself in the lab room. By the above reasoning, your credence in the coin having come up tails should be 2/3. Therefore, in the case where you're woken up twice in the lab room if the coin comes up tails, if you have time to think about anthropics before finding out which room you're in, you should think the odds that the coin came up tails are 2/3.
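
Spelled out as a Bayesian update, with exact fractions (a sketch of the arithmetic above):

    from fractions import Fraction

    p_tails = Fraction(1, 2)             # eyes-closed prior: waking is no evidence
    p_lab_given_tails = Fraction(1)      # tails: both wakings happen in the lab
    p_lab_given_heads = Fraction(1, 2)   # heads: one lab waking, one bedroom waking

    p_lab = p_tails * p_lab_given_tails + (1 - p_tails) * p_lab_given_heads
    p_tails_given_lab = p_tails * p_lab_given_tails / p_lab
    print(p_tails_given_lab)             # 2/3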

Here’s my last claim: the odds you should give to the coin having come up tails in this scenario shouldn’t depend on whether you think about anthropics with your eyes closed! Surely whether you happened to think about anthropics when your eyes were closed isn’t relevant to the rational credence in some event given some anthropic evidence. Anthropic data that you update on shouldn’t be sensitive to whether you actually thought about the anthropic situation before observing that data.

But from this it follows that in the scenario where you’re awoken twice in the lab if the coin came up tails and once in the lab and once in your room if the coin came up heads, if you wake up in the lab, you should think that there’s a 2/3 chance the coin came up tails. But that’s the claim that halfers in sleeping beauty should deny, for the reasons I gave before. So halfing is the wrong answer in sleeping beauty.

This chain of reasoning is a bit tricky to spell out. I'll model it with arrows, where A—>B means B follows from A:

  1. Halfing is right in sleeping beauty. —>
  2. Halfing is right in the scenario where you're awoken twice in the lab if the coin came up tails, and once in the lab and once in your room if it came up heads. —>
  3. Halfing is right in that same scenario when, before knowing which room you're in, you think about anthropics with your eyes closed.

However, halfing is not right in scenario 3, as shown above. Therefore, halfing in sleeping beauty is wrong.

22 comments

Comments sorted by top scores.

comment by JBlack · 2024-04-05T07:24:11.381Z · LW(p) · GW(p)

Generally I think the 1/3 argument is more appealing, just based on two principles:

  1. Credences should follow the axioms of a probability space (including conditionality);
  2. The conditional credence for heads given that it is Monday, is 1/2.

That immediately gives P(Heads & Monday) = P(Tails & Monday). The only way this can be compatible with P(Heads) = 1/2 is if P(Tails & Tuesday) = 0, and I don't think anybody supports that!

P(Tails & Tuesday) = P(Tails & Monday) isn't strictly required by these principles, but it certainly seems a highly reasonable assumption and yields P(Heads) = 1/3.
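
The algebra is short enough to check mechanically (a sketch; the variable names are mine):

    from fractions import Fraction

    h_mon = Fraction(1, 2)      # halfer stipulation: P(Heads) = P(Heads & Monday) = 1/2
    t_mon = h_mon               # principle (2): P(Tails & Monday) = P(Heads & Monday)
    t_tue = 1 - h_mon - t_mon   # the three awakening states must sum to 1
    print(t_tue)                # 0 -- the conclusion nobody supports

    # Thirder alternative: set t_tue = t_mon instead, so h_mon + h_mon + h_mon = 1
    # and P(Heads) = h_mon = 1/3.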

I don't think anybody disagrees with principle (2).

Principle (1) is somewhat more dubious though. Since all credences are conditional on epistemic state, and this experiment directly manipulates epistemic state (via amnesia), it is arguable that "rational" conditional credences might not necessarily obey probability space rules.

Replies from: omnizoid, Ape in the coat
comment by omnizoid · 2024-04-05T13:01:25.775Z · LW(p) · GW(p)

Lots of people disagree with 2.  

Replies from: JBlack
comment by JBlack · 2024-04-07T02:26:04.642Z · LW(p) · GW(p)

They ... what? I've never read anything suggesting that. Do you have any links or even a memory of an argument that you may have seen from such a person?

Edit: Just to clarify, conditional credence P(X|Y) is of the form "if I knew Y held, then my credence for X would be ...". Are you saying that lots of people believe that if they knew it was Monday, then they would hold something other than equal credence for heads and tails?

Replies from: omnizoid
comment by omnizoid · 2024-04-07T04:06:10.938Z · LW(p) · GW(p)

Yes--Lewis held this, for instance, in the most famous paper on the topic. 

Replies from: JBlack
comment by JBlack · 2024-04-07T23:15:17.167Z · LW(p) · GW(p)

Good point! Lewis' notation P_+(HEADS) does indeed refer to the conditional credence upon learning that it's Monday, and he sets it to 2/3 by reasoning backward from P(HEADS) = 1/2 and using my (1).

So yes, there are indeed people who believe that if Beauty is told that it's Monday, then she should update to believing that the coin was more likely heads than not. Which seems weird to me - I have a great deal more suspicion that (1) is unjustifiable than that (2) is.

Replies from: omnizoid
comment by omnizoid · 2024-04-10T02:02:43.664Z · LW(p) · GW(p)

If you half and don't think that your credence in heads should be 2/3 after finding out it's Monday, you violate conservation of expected evidence. If you're going to be told what day it is, your credence in tails might go up but has no chance of going down--if it's day 2, it will spike to 100%; if it's day 1, it won't change.
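
Concretely, with the standard halfer joint distribution P(Heads & Monday) = 1/2 and P(Tails & Monday) = P(Tails & Tuesday) = 1/4, the bookkeeping looks like this (a sketch):

    from fractions import Fraction

    p_h_mon = Fraction(1, 2)
    p_t_mon = Fraction(1, 4)
    p_t_tue = Fraction(1, 4)

    p_mon = p_h_mon + p_t_mon             # P(Monday) = 3/4
    p_tue = p_t_tue                       # P(Tuesday) = 1/4

    p_heads_given_mon = p_h_mon / p_mon   # Bayes: 2/3, Lewis's answer
    p_heads_given_tue = Fraction(0)       # a Tuesday awakening rules out heads

    # Expected posterior matches the 1/2 prior only with the 2/3 update:
    print(p_mon * p_heads_given_mon + p_tue * p_heads_given_tue)  # 1/2
    # Keeping P(Heads | Monday) = 1/2 instead gives 3/8, not 1/2: a violation.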

Replies from: JBlack, Ape in the coat
comment by JBlack · 2024-04-10T08:06:01.010Z · LW(p) · GW(p)

Is conservation of expected evidence a reasonably maintainable proposition across epistemically hazardous situations such as memory wipes (or false memories, self-duplicates and so on)? Arguably, in such situations it is impossible to be perfectly rational, since the thing you do your reasoning with is being externally manipulated.

comment by Ape in the coat · 2024-04-10T08:39:31.335Z · LW(p) · GW(p)

You would violate conservation of expected evidence if 

P(Monday) + P(Tuesday) = 1 

However, this is not the case, because P(Monday) = 1 and P(Tuesday) = 1/2.

comment by Ape in the coat · 2024-04-10T08:38:55.509Z · LW(p) · GW(p)

I'm a bit surprised that you think this way, considering that you've basically solved the problem yourself in this comment [LW(p) · GW(p)].

P(Heads & Monday) = P(Tails & Monday) = 1/2

P(Tails & Monday) = P(Tails&Tuesday) = 1/2

Because Tails&Monday and Tails&Tuesday are the exact same event.

The mistake that everyone seems to be making is thinking that Monday/Tuesday mean "This awakening is happening during Monday/Tuesday". But such events are ill-defined in the Sleeping Beauty setting. On Tails, both the Monday and Tuesday awakenings are supposed to happen in the same iteration of the probability experiment, and the Beauty is fully aware of that, so she can't treat them as individual mutually exclusive outcomes.

You can only lawfully talk about "In this iteration of probability experiment Monday/Tuesday awakening happens".

In this post [LW · GW] I explain it in more detail.

comment by Dagon · 2024-03-31T22:41:13.203Z · LW(p) · GW(p)

At any given point, you don't know whether it's the first or second wakening.  The betting argument depends on what the wager is (and more generally, what future experience is being predicted by the probability).  If it's "on Wednesday, you'll be paid $1 if your prediction(s) were correct, and lose $1 if they were incorrect (and voided if somehow there are two wakenings and you make different predictions)", you should be indifferent to heads or tails as your prediction.  If it's "for each wakening, you'll win $1 if it's correct, and lose $1 if incorrect", you should NOT be indifferent - you lose twice if you bet heads and are wrong, and the reverse if you bet tails.

Replies from: malentropicgizmo, omnizoid
comment by Malentropic Gizmo (malentropicgizmo) · 2024-04-02T18:37:13.685Z · LW(p) · GW(p)

If it's "on Wednesday, you'll be paid $1 if your prediction(s) were correct, and lose $1 if they were incorrect (and voided if somehow there are two wakenings and you make different predictions)", you should be indifferent to heads or tails as your prediction.

I recommend setting aside around an hour and studying this comment [LW(p) · GW(p)] closely.

In particular, you will see that even though the text I quoted from you is true, it is not an argument for believing that the probability of heads is 1/2. Halfers are actually the ones who are NOT indifferent between heads and tails when they are awakened in this setup; they will change their mind about their randomized strategy!

Consider randomized strategies: before the experiment you decide that you will bet heads with probability q and tails with probability 1-q. (Before the experiment, both halfers and thirders agree that all values of q are equally good.)

Thirder wakes up: 

Expected value of betting heads: P(Heads)*1$ + P(Tails&Monday)*P(you will bet heads on Tuesday)*(-1$) +  P(Tails&Tuesday)*P(you bet heads on Monday)*(-1$) = 1/3*1$ + 1/3*q*(-1$) +1/3*q*(-1$) = 1/3 - 2/3*q

Expected value of betting tails: P(Heads)*(-1$) + P(Tails&Monday)*P(you will bet tails on Tuesday)*1$ +  P(Tails&Tuesday)*P(you bet tails on Monday)*1$ = 1/3*(-1$) + 1/3*(1-q)*1$ +1/3*(1-q)*1$ = 1/3 - 2/3*q

Exactly equal for all q!!!!

Halfer wakes up: 

Expected value of betting heads: P(Heads)*1$ + P(Tails&Monday)*P(you will bet heads on Tuesday)*(-1$) +  P(Tails&Tuesday)*P(you bet heads on Monday)*(-1$) = 1/2*1$ + 1/4*q*(-1$) +1/4*q*(-1$) = 1/2 - 1/2*q

Expected value of betting tails: P(Heads)*(-1$) + P(Tails&Monday)*P(you will bet tails on Tuesday)*1$ +  P(Tails&Tuesday)*P(you bet tails on Monday)*1$ = 1/2*(-1$) + 1/4*(1-q)*1$ +1/4*(1-q)*1$ = -1/2*q

For all q, halfers believe betting heads has higher expected value, and so they are not indifferent between the two. (Because of your example's payoffs you can't get positive expected value even with randomized strategies, so halfers won't fare worse by departing from their randomized strategy than thirders do by staying with theirs, but that's just a coincidence. See the linked comment for an example where halfers' false beliefs DO lead them to make worse decisions! That example has the same structure as yours, but with different numbers.)
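
For anyone who wants to check the arithmetic, both cases reduce to one function of the credences (a sketch; the function and names are just for illustration):

    from fractions import Fraction

    def evs(p_heads, p_t_mon, p_t_tue, q):
        # Expected values of betting heads vs. tails at an awakening, given a
        # pre-committed probability q of betting heads at each awakening.
        ev_heads = p_heads - (p_t_mon + p_t_tue) * q
        ev_tails = -p_heads + (p_t_mon + p_t_tue) * (1 - q)
        return ev_heads, ev_tails

    for q in (Fraction(0), Fraction(1, 2), Fraction(1)):
        th = evs(Fraction(1, 3), Fraction(1, 3), Fraction(1, 3), q)  # thirder credences
        ha = evs(Fraction(1, 2), Fraction(1, 4), Fraction(1, 4), q)  # halfer credences
        print(f"q={q}: thirder {th[0]} vs {th[1]}, halfer {ha[0]} vs {ha[1]}")

The thirder's two expected values come out equal for every q; the halfer's always differ by 1/2 in favour of heads.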

Replies from: Ape in the coat, Dagon
comment by Ape in the coat · 2024-04-10T09:29:51.286Z · LW(p) · GW(p)

You are already aware of this, but for the benefit of other readers I'll mention it anyway.

In this post [LW · GW] I demonstrate that the narrative of betting arguments validating thirdism is generally wrong, and is just a result of the fact that the first and therefore most popular halfer model is wrong.

Both thirders and halfers, following the correct model [LW · GW], make the same bets in Sleeping Beauty, though for different reasons. The disagreement is about how to factorize the product of probability of event and utility of event.

And if we investigate a bit deeper, the halfer way of doing it makes more sense, because its utilities do not shift back and forth during the same iteration of the experiment.

Replies from: malentropicgizmo
comment by Malentropic Gizmo (malentropicgizmo) · 2024-04-10T12:20:58.819Z · LW(p) · GW(p)

Yes, I basically agree: My above comment is only an argument against the most popular halfer model. 

However, in the interest of sparing readers' time, I have to mention that your model doesn't have a probability for 'today is Monday' nor for 'today is Tuesday'. If they want to see your reasoning for this choice, they should start with the post you linked second instead of the post you linked first.

comment by Dagon · 2024-04-02T19:30:47.129Z · LW(p) · GW(p)

I'm probably not going to spend an hour on this, but at first glance, it appears that both that comment and yours are making very clear betting arguments.  I FULLY agree that the terms and resolution mechanism for the bets (aka the experience definition for the prediction) are the definition of probability, and control what probability Beauty should use.

Replies from: malentropicgizmo
comment by Malentropic Gizmo (malentropicgizmo) · 2024-04-02T19:37:54.188Z · LW(p) · GW(p)

But do you also agree that there isn't any kind of bet, with any terms or resolution mechanism, which supports the halfer probabilities? While you did not say it explicitly, your comment's structure seems to imply that one of the bet structures you gave (the one I've quoted) supports the halfer side. My comment is an analysis showing that that's not true (which was a priori pretty surprising to me).

Replies from: Dagon
comment by Dagon · 2024-04-02T20:41:36.937Z · LW(p) · GW(p)

I'm not sure where the error is, but I love that you've shown how thirders are the true halfers!   

Oh, it may also be because the randomization interacts with the "if they don't match, bets are off" stipulation, which was intended to acknowledge that the wakings on Monday and Tuesday are identical to Beauty.  It turns out that this disfavors tails, which is the only opportunity for a mismatch.  The fix is either to disallow randomization, or to say that "if there are two wagers which disagree, we'll randomize between them as to which is binding".

Amusingly, this means that both halfer and thirder are indifferent between heads and tails.  Making it very clear that it's an incomplete question.  In fact, I don't mean to "support the halfer side", I mean that having a side, without specifying precisely what future experience(s) are being predicted, is incorrect.

Thank you for making it further clear that the problem is deeply rooted in intuitions of identity and the confusion between there being one entity on Sunday, one OR two on Monday, and one again on Wednesday.  I do think that it's purely a modeling choice whether to consider heads&tuesday to be 0.25 probability that just doesn't happen to have Beauty awake in it, or whether to distribute that probability among the others.  

Replies from: malentropicgizmo
comment by Malentropic Gizmo (malentropicgizmo) · 2024-04-02T20:52:20.126Z · LW(p) · GW(p)

I'm not sure where the error is in your calculations (I suspect in double-counting Tuesday, or forgetting that Tuesday happens even if you're not woken up, so it still gets its "matches Monday bet" payout), but I love that you've shown how thirders are the true halfers!

To be precise, I've shown that in a given betting structure (which is commonly used as an argument for the halfer side, even if you didn't use it that way now) using thirder probabilities leads to correct behaviour. In fact, my belief is that in ANY kind of setup, using thirder probabilities leads to correct behaviour, while using the halfer probabilities leads to worse or equivalent results. I wouldn't characterize this as "thirders are the true halfers!". I disagree that there is a mistake. Is the only reason you think there is one that the result of the calculation disagrees with your prior belief?

I don't mean to "support the halfer side", I mean that having a side, without specifying precisely what future experience(s) are being predicted, is incorrect.

But if every reasonable way to specify precisely what future experiences are being predicted gives the same set of probabilities, couldn't we say that one side is correct?

comment by omnizoid · 2024-04-01T13:12:54.424Z · LW(p) · GW(p)

I didn't make a betting argument. 

Replies from: Dagon
comment by Dagon · 2024-04-01T14:59:46.332Z · LW(p) · GW(p)

Not directly, but all probability is betting.  Or at least the modeling part is the same, where you define what the prediction is that your probability assessment applies to.

Sleeping beauty problems are interesting because they mess with the number of agents making predictions, and this very much confuses our intuitions.  The confusion is in how to aggregate the two wakings (which are framed as independent, but I haven't seen anyone argue that they'll ever be different). 

I think we all agree that post-amnesia, on Wednesday, you should predict a 50% chance that the experimenter will reveal heads (and that you were awoken once) and a 50% chance of tails (and twice).  When woken, not knowing whether it's Monday or Tuesday, you should acknowledge that on Wednesday you'll predict 50%.  If right now you bet 1/3, it's because you're predicting something different than you will on Wednesday.

Replies from: JBlack
comment by JBlack · 2024-04-02T06:19:19.139Z · LW(p) · GW(p)

Of course you're predicting something different. In all cases you're making a conditional prediction of a state of the world given your epistemic state at the time. Your epistemic state on Wednesday is different from that on Monday or Tuesday. On Tuesday you have a 50% chance of not being asked anything at all due to being asleep, which breaks the symmetry between heads and tails.

By Wednesday the symmetry may have been restored due to the amnesia drug - you may not know whether the awakening you remember was Monday (which would imply heads) or Tuesday (which would imply tails). However, there may be other clues such as feeling extra hungry due to sleeping 30+ hours without eating.

Replies from: Dagon
comment by Dagon · 2024-04-02T15:51:08.592Z · LW(p) · GW(p)

you're making a conditional prediction of a state of the world given your epistemic state at the time.

I think this is a crux. IMO, you can't predict the state of the world, since you have no access to that except via your perceptions/experiences.  You're making a prediction of a future epistemic state (aka experience), given (of course) your current epistemic state, conditional on which prediction you make (what will happen if you guess either way, and if you're right/wrong). 

It's perfectly reasonable to bet 1/3 if the reveal/payout is instantaneous and multiple, and to bet 1/2 if the reveal/payout is post-merge and singular.  Each is correct, for predicting different future experiences.

comment by Malentropic Gizmo (malentropicgizmo) · 2024-04-05T09:48:41.478Z · LW(p) · GW(p)

B follows from B

Typo