Has anyone actually changed their mind regarding Sleeping Beauty problem?

post by Ape in the coat · 2024-01-30T08:34:43.904Z · LW · GW · No comments

This is a question post.


https://en.wikipedia.org/wiki/Sleeping_Beauty_problem

If that happened, what was the argument that did it for you?

I'm interested in situations where a person used to think that the correct answer was 1/2 and then, on reflection, decided that it's actually 1/3, or vice versa; not cases where the resulting belief is that the question is meaningless or that both answers are valid.

Answers

answer by Dagon · 2024-01-30T17:51:47.770Z · LW(p) · GW(p)

I flipped a few times between 1/2 and 1/3 before realizing that they are both valid answers to different questions.  

comment by JeffJo · 2024-02-05T16:04:16.561Z · LW(p) · GW(p)

Say I ask you to draw a card and then, without looking at it, show it to me. I tell you that it is an Ace, and ask you for the probability that you drew the Ace of Spades. Is the answer 1/52, 1/4, or (as you claim about the SB problem) ambiguous?

I think it is clear that I wanted the conditional probability, given the information you have received. Otherwise, what was the point of asking after giving the information?

The "true" halfer position is not that ambiguity; it is that the information SB has received is null, so the conditional probability is the same as the prior probability. The thirder position is that there are four possible observation opportunities of the coin, all equally likely, and one has been eliminated. To see this better, always wake SB on Tuesday, but instead of asking about the coin, take her on a shopping spree if the coin landed on Heads. If she is asked the question, she knows that one observation opportunity is eliminated, and the answer is clearly 1/3.

The difference between the halfer and the thirder is that the halfer thinks that sleeping thru Tuesday removes Tuesday from the sample space of the full experiment, while the thirder sees it as something that contradicts observation.

Replies from: SaidAchmiz, ben-lang
comment by Said Achmiz (SaidAchmiz) · 2024-02-05T16:33:12.342Z · LW(p) · GW(p)

Say I ask you to draw a card and then, without looking at it, show it to me. I tell you that it is an Ace, and ask you for the probability that you drew the Ace of Spades. Is the answer 1⁄52, 1⁄4, or (as you claim about the SB problem) ambiguous?

Correct answer depends on the reward structure. Absent a reward structure, there is no such thing as a correct answer. See this post [LW · GW].

In your card-drawing scenario, there is only one plausible reward structure (reward given for each correct answer). In the Sleeping Beauty problem, there are two plausible reward structures [LW(p) · GW(p)]. Of those two reward structures, one results in the correct answer being one-third, the other results in the correct answer being one-half.

Replies from: JeffJo
comment by JeffJo · 2024-02-06T12:21:50.581Z · LW(p) · GW(p)

If the context of the question includes a reward structure, then the correct solution has to be evaluated within that structure. This one does not. Artificially inserting one does not make it correct for a problem that does not include one.

The actual problem places the probability within a specific context. The competing solutions claim to evaluate that context, not a reward structure. One does so incorrectly. There are simple ways to show this.

Replies from: Radford Neal, SaidAchmiz
comment by Radford Neal · 2024-02-06T15:18:38.253Z · LW(p) · GW(p)

Actually, there is no answer to the problem as stated. The reason is that the evidence I (who drew the card) have is not "the card is an Ace", but rather "JeffJo said the card is an Ace". Even if I believe that JeffJo never lies, this is not enough to produce a probability for the card being the Ace of Spades. I would need to also consider my prior probability that JeffJo would say this conditional on it being the Ace of Spades, the Ace of Hearts, the Ace of Diamonds, or the Ace of Clubs. Perhaps I believe that JeffJo would never say the card is an Ace if it is a Spade. In that case, the right answer is 0.

However, I agree that a "reward structure" is not required, unless possible rewards are somehow related to my beliefs about what JeffJo might do.

For example, I can assess my probability that the store down the street has ice cream sundaes for sale when I want one, and decide that the probability is 3/4. If I then change my mind and decide that I don't want an ice cream sundae after all, that should not change my probability that one is available.

Replies from: JeffJo
comment by JeffJo · 2024-02-07T20:43:15.696Z · LW(p) · GW(p)

"I would need to also consider my prior probability that JeffJo would say this conditional on it being the Ace of Space, the Ace of Hearts, the Ace of Diamonds, or the Ace of Clubs. Perhaps I believe the JeffJo would never say the card is an Ace if it is a Space. In that case, the right answer is 0."

And in the SB problem, what if the lab tech is lazy, and doesn't want a repeat waking? So they keep re-flipping the "fair coin" until it finally lands on Heads? In that case, her answer should be 1.

The fact is that you have no reason to think that such a bias favors any one card value, or suit, or whatever, different than another.

Replies from: Radford Neal
comment by Radford Neal · 2024-02-07T21:39:10.076Z · LW(p) · GW(p)

You may think the difference between "the card is an Ace" and "JeffJo says the card is an Ace" is just a quibble.  But this is actually a very common source of error.  

Consider the infamous "Linda" problem, in which researchers claim that most people are irrational because they think "Linda is a bank teller" is less likely than "Linda is a bank teller and active in the feminist movement".  When you think most people are this blatantly wrong, you maybe need to consider that you might be the one who's confused...

Replies from: JeffJo
comment by JeffJo · 2024-02-09T16:22:08.987Z · LW(p) · GW(p)

Yes, the fact that someone had to choose the information is a common source of error, but that is not what you describe. I chose a single card and a single value to avoid that very issue, with very deliberate thought. Your example is a common misinterpretation of what probability means, not of how to use it correctly according to mathematics.

A better example, of what you imply, is the infamous Two Child Problem. And its variation, the Child Born on Tuesday Problem.

  1. I have exactly two children. At least one is a boy. What are the chances that I have two boys?
  2. I have exactly two children. At least one is a boy who was born on a Tuesday. What are the chances that I have two boys?

(BTW, both "exactly" and "at least" are necessary. If I had said "I have one" and asked about the possibility of two, it implies that any number I state carries an implicit "at least.")

Far too many "experts" will say that the answers are 1/3 and 13/27, respectively. Of the 4 (or 196) possible combinations of the implied information categories, there are 3 (or 27) that fit the information as specified, and of those 1 (or 13) have two boys.

Paradox: How did the added information change the probability from 1/3 to 13/27?

The resolution of this paradox is that you have to include the choice I made of what to tell you, between what most likely is two sets of equivalent information. If I have a Tuesday Boy and a Thursday Girl, couldn't I have used the girl's information in either question? Since you don't know how this choice is made, a rational belief can only be based on assuming I chose randomly.

So in 2 (or 26) of the 3 (or 27) combinations where the statement I made is true, there is another statement that is also true. And I'd only make this one in half of them. So the answers are 1/(3 - 2/2) = 1/2 and (13 - 12/2)/(27 - 26/2) = 7/14 = 1/2. And BTW, this is also how the Monty Hall Problem is solved correctly. That problem originated as Martin Gardner's Three Prisoners Problem, which he introduced in the same article where he explained why 1/3 is not correct for #1 above.
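
For readers who want to check these numbers, here is a small enumeration sketch (my own illustration, not part of the original exchange; it assumes the parent picks one of the two children uniformly at random and reports that child's description):

    from fractions import Fraction
    from itertools import product

    # Each child is a (gender, weekday) pair; all 14 values are equally likely.
    children = [(g, d) for g in "BG" for d in range(7)]
    families = list(product(children, children))   # 196 equally likely families
    target = ("B", 1)                               # "a boy born on a Tuesday"

    def two_boys(f):
        return f[0][0] == "B" and f[1][0] == "B"

    # Naive counting: condition only on "at least one child fits the statement".
    match = [f for f in families if target in f]
    print("naive:", Fraction(sum(two_boys(f) for f in match), len(match)))   # 13/27

    # Adjusted counting: the parent picks a child at random and reports its description.
    num = den = Fraction(0)
    for f in families:
        for child in f:                             # each child is chosen with probability 1/2
            if child == target:
                den += Fraction(1, 2)
                num += Fraction(1, 2) if two_boys(f) else 0
    print("adjusted:", num / den)                   # 1/2

Dropping the weekday and conditioning only on gender reproduces 1/3 versus 1/2 in the same way.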

In my card drawing problem, there is only one card rank I can report. If you choose to add information, as done with Linda the Bank Teller, you are not a rational solver.

Replies from: Radford Neal
comment by Radford Neal · 2024-02-11T22:49:47.155Z · LW(p) · GW(p)

Interesting.  I hadn't heard of the Child Born on Tuesday Problem.  I think it's actually quite relevant to Sleeping Beauty, but I won't go into that here...

Both problems (your 1 and 2) aren't well-defined, however. The problem is that in real life we do not magically acquire knowledge that the world is in some subset of states, with the single exception of the state of our direct sense perceptions. One could decide to assume a uniform distribution over possible ways in which the information we are supposedly given actually arrives by way of sense perceptions, but uniform distributions are rather arbitrary (and will often depend on arbitrary aspects of how the problem is formulated).

Here's a boys/girls puzzle I came up with to illustrate the issue:

 A couple you've just met invite you over to dinner, saying "come by around 5pm, and we can talk for a while before our three kids come home from school at 6pm".

You arrive at the appointed time, and are invited into the house. Walking down the hall, your host points to three closed doors and says, "those are the kids' bedrooms".  You stumble a bit when passing one of these doors, and accidentally push the door open.  There you see a dresser with a jewelry box, and a bed on which a dress has been laid out.  "Ah", you think to yourself, "I see that at least one of their three kids is a girl".

Your hosts sit you down in the kitchen, and leave you there while they go off to get goodies from the stores in the basement.  While they're away, you notice a letter from the principal of the local school tacked up on the refrigerator.  "Dear Parent", it begins, "Each year at this time, I write to all parents, such as yourself, who have a boy or boys in the school, asking you to volunteer your time to help the boys' hockey team..."  "Umm", you think, "I see that they have at least one boy as well".

That, of course, leaves only two possibilities:  Either they have two boys and one girl, or two girls and one boy.  What are the probabilities of these two possibilities?

 The symmetrical summaries of what is learned are intentionally misleading (it's supposed to be a puzzle, after all).  The way in which you learned they have at least one girl is not the same as the way you learned that they have at least one boy. And that matters.
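
For the curious, here is a quick Monte Carlo sketch of the puzzle (my own code, with the modelling assumptions stated in the comments: each child is independently a boy or girl with probability 1/2, the room you stumble into belongs to a uniformly random child, and the letter goes to every family with at least one boy):

    import random

    counts = {1: 0, 2: 0}   # number of boys, tallied over families consistent with the evidence
    for _ in range(200_000):
        kids = [random.choice("BG") for _ in range(3)]
        room_kid = random.choice(kids)       # the bedroom you accidentally see into
        letter = "B" in kids                 # the principal writes to every family with a boy
        if room_kid == "G" and letter:       # what you actually observed
            counts[kids.count("B")] += 1

    total = counts[1] + counts[2]
    print("P(two boys, one girl | evidence) ~", counts[2] / total)   # ~1/3
    print("P(one boy, two girls | evidence) ~", counts[1] / total)   # ~2/3

Under these assumptions the two possibilities are not equally likely, which is the point: the route by which each piece of information arrived matters.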

Replies from: JeffJo
comment by JeffJo · 2024-02-12T15:07:15.284Z · LW(p) · GW(p)

Your problem is both more, and less, well-posed than you think.

The defining feature of the "older child" version of the Two Child Problem has nothing to do with age. It is that you have ordered the children independently of gender, and identified the gender of a child in a position within that order. Age works well here, since it is easy to show why BB, BG, GB, and GG must be equiprobable by examining the event of the second birth.

But any gender-independent ordering works. It could be alphabetizing names, their seats around the dinner table (clockwise from Mother), or which bedroom each child has. You picked a specific child in an order by looking in a specific room, so the genders of the other two are independent of it and each other. So gBB, gBG, gGB, and gGG are equiprobable at that point in your acquisition of knowledge.

But your second acquisition depends on whether similar help is needed for other sports, on how many gender-specific sports there are, and on why there isn't a similar letter for girls' sports, since we know there is a girl.

My problems are well-posed for what I intended. You didn't "stumble upon" the information; a source with absolute knowledge told it to you, with no hint of any discrimination between genders. There is an established solution in such cases; it's called Bertrand's Box Paradox. That name did not, originally, refer to a problem; it referred to the following solution. It is best illustrated using a different probability than what I asked for:

  1. I know Mr. Abbot's two children. At least one is a boy.
  2. I know Mrs. Baker's two children. At least one is a girl.
  3. I know the Curry's two children. At least one has the gender that I have written inside this sealed envelope.

In each case, what is the probability that the family has a boy and a girl?

Clearly, the answers A1 and A2 must be the same. This is not using uniform distributions, although that is a valid justification. Probability is not about what is true in a specific instance of this disclosure of information - that's a naive mistake. It is about what we can deduce from the information alone. It is a property of our knowledge of a world where it happens, not the world itself. Since our information is equivalent in Q1 and Q2, that means A1=A2.

But you have no significant information about genders in Q3, so A3 must be 1/2. And that can be used to get A1 and A2. Bertrand argued simply that if the envelope were opened, A3 had to equal A1 and A2 regardless of what it said, so you didn't need to open it. Any change would be a  paradox. But there is a more rigorous solution.

If W represents what is written in the envelope, the Law of Total Probability says:

A3 = Pr(W="Boy")*A1 + Pr(W="Girl")*A2

A3 = Pr(W="Boy")*A1 + Pr(W="Girl")*A1

A3 = [Pr(W="Boy") + Pr(W="Girl")]*A1

A3 = A1 = A2 (which all equal 1/2).

This solution is also used for the famous Monty Hall Problem, even if those using it do not realize it. The most common solution uses the assertion that "your original probability of 1/3 can't change." So, since the open door is revealed to not have the car, the closed door that you didn't pick must now have a 2/3 probability.

The assertion is equivalent to my sealed envelope. You see the door that gets opened, which is equivalent to naming one gender in Q1 and Q2. Since your answer must be the same regardless of which door that is, it is the same as when you ignore which door is opened.

comment by Said Achmiz (SaidAchmiz) · 2024-02-06T12:58:22.397Z · LW(p) · GW(p)

If there is no reward structure, then neither answer is meaningfully more “correct” than the other. Beliefs are for actions.

comment by Ben (ben-lang) · 2024-02-05T17:31:55.792Z · LW(p) · GW(p)

Echoing Said's comment, what does it mean to be "correct" in this context? If we ask Beauty to pick between heads or tails, and she picks heads, then sometimes this will be correct, and sometimes not.

In order for Beauty to give a (correct) probabilistic answer to the question (1/3 or 1/2) we need to introduce the idea of some proportion of trials. We need to at least imagine running the situation many times, and talk about some proportion of those imagined repeats. These imagined trials don't need to actually happen, they are imaginary. But they are an indispensable fiction.

Now, we imagine 100 repeats: 50 heads, 50 tails. Beauty is awoken a total of 150 times. For 50 awakenings it was a head that was flipped; for 100 awakenings, a tail.

>For 1/3 of the awakenings the coin was heads. For 1/2 of the trials the coin was heads.

I don't think anyone (halfer or thirder) disputes the line directly above (with the >). There is agreement on what proportion of awakenings heads was tossed, and on what proportion of trials heads was tossed. We can all see that one of the two proportions is 1/3 and the other is 1/2. Which of the two proportions is picked out by the word "probability" is the entire argument.

The reward structure @Said Achmiz [LW · GW] is talking about is a nice way of making people either aim to be right in as many guesses as possible or in as many trials as possible, which demand different strategies.
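
A direct simulation of the two proportions discussed above (my own sketch, using the standard schedule in which heads means one awakening and tails means two):

    import random

    trials = 100_000
    awakenings = awakenings_heads = trials_heads = 0

    for _ in range(trials):
        heads = random.random() < 0.5
        wakes = 1 if heads else 2        # heads: one awakening; tails: two
        awakenings += wakes
        if heads:
            awakenings_heads += wakes
            trials_heads += 1

    print("proportion of awakenings with heads:", awakenings_heads / awakenings)   # ~1/3
    print("proportion of trials with heads:", trials_heads / trials)               # ~1/2

Both numbers come out as stated; the dispute is only over which of them the word "probability" should name.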

comment by Ape in the coat · 2024-01-30T17:53:25.482Z · LW(p) · GW(p)

What was the initial position? Do you remember the arguments that made you flip?

Replies from: Dagon
comment by Dagon · 2024-01-30T22:34:41.752Z · LW(p) · GW(p)

I don't remember which was my initial position.  I got to the point that I could be confident in 1/2 because there's no new information: waking and being asked is GUARANTEED to be experienced, subjectively, once (even if it's twice to an observer, Beauty experiences it for the first time both times).  And confident in 1/3 via bet-resolution framing and number of timelines that make predictions.

After a few iterations where the framing makes it insanely obvious in both directions, I deconstructed it to the point that I realized it depends on what actual question is being asked.  Probability is purely subjective (modulo quantum measures, perhaps) - the truth is 0 or 1; only Beauty's expectation/prediction changes, based on her framing.

answer by pathos_bot · 2024-01-30T23:08:10.221Z · LW(p) · GW(p)

The wording of the question is ambiguous. It asks for your determination on the likelihood it was heads when you were "first awakened", but by your perception any wakening is you being first awakened. If it is really asking about your determination given you have the information that the question is being asked on your first wakening regardless of your perception, then it's 1/2. If you know the question will be asked on your first or second wakening (though the second one will in the moment feel like the first), then it's 1/3.

comment by Radford Neal · 2024-02-01T01:02:52.728Z · LW(p) · GW(p)

The wording may be bad, but I think the second interpretation is what is intended. Otherwise the discussion often seen of "How might your beliefs change if after awakening you were told it is Monday?" would make no sense, since your actual first awakening is always on Monday (though you may experience what feels like a first awakening on Tuesday).

comment by Ape in the coat · 2024-02-01T07:39:04.478Z · LW(p) · GW(p)

You should probably use "last awakening" instead of "first awakening" in your attempt at disambiguation. See Radford Neal's comment for the reason why.

Replies from: JeffJo
comment by JeffJo · 2024-02-14T21:09:30.712Z · LW(p) · GW(p)

It is my contention that:

  1. The problem, as posed, is not ambiguous and so needs no "disambiguation."
  2. "When you are first awakened" refers to the first few moments after you are awakened. That is, before you/SB might learn information that is not provided to you/SB by the actual problem statement. It does not refer to the relative timing of (potentially) two awakenings.
  3. Any perceived ambiguity is caused by the misinterpretation of Elga's solution, which artificially introduces such information for the purpose of updating the probability space from the permissible form to a hypothetical one that should have a more obvious solution.
  4. Any argument that "when you are first awakened" refers to such relative timing, which is impossible for the subject to assess without impermissible information, is obfuscation with the intent to justify a solution that requires such information.

So any comment about first/last/relative awakenings is irrelevant.

Does this help? I know I can't prove that #2 is correct, but it can be. Nothing else can.

Replies from: JeffJo
comment by JeffJo · 2024-02-14T22:16:43.926Z · LW(p) · GW(p)

There are several valid solutions that do not always introduce the details that are misinterpreted as ambiguities. The two-coin version is one; it says the answer is 1/3.

Here's another, that I think also proves the answer is 1/3, but I'm sure halfers will disagree with that. But it does prove that 1/2 can't be right.

  • Instead of leaving SB asleep on Tuesday, after Heads, we wake her but do not interview her. We do something entirely different, like take her on a $5000 shopping spree on Rodeo Drive. (She can get maybe one nice dress.)

This way, when she is first wakened - which can only mean before she learns if it is for an interview or a shopping spree, since she can't know about any prior/subsequent waking - she is certain that the probability of Heads and Tails are each 50%. But when she is interviewed, she knows that something that only happens after a Heads has been "eliminated." So the probability of Heads must be reduced, and the probability of Tails must be increased. I think that she must add "Heads and it is Tuesday" to the sample space Elga used, and each observation has a probability of 25%. Which makes the conditional probability of Heads, given that she is interviewed, 1/3.

BUT IT DOES NOT MATTER WHAT HAPPENS ON "HEADS AND IT IS TUESDAY." The "ambiguity" is created by ignoring that "HEADS and it is Tuesday" happens even if SB sleeps through it.

OR, we could use four volunteers but only one coin. Let each one sleep through a different combination of "COIN and it is DAY." Ask each for the probability that the coin landed on the side where she might sleep through a day. On each day, three will be wakened. For two of them, the coin landed on the side that means waking twice. For one, it is the side for waking once.

All three will be brought into a room where they can discuss the answer, but not share their combination. Each has the same information that defines the correct answer. Each must give the same answer, but only one matches the condition. That answer is 1/3.

+++++

Yes, these all are just different ways of presenting the same information: that, in the popular version, Tuesday after Heads still happens, but cannot be observed. This is what is wrong in all the debates; they treat it as if Tuesday after Heads does not happen.

In my card example, the "prior" sample space includes all 52 cards. The probability distribution is 1/52 for each card. When I say that one is an Ace, it does not mean that it was impossible for a Seven of Clubs to have been drawn, it means that an observation was made, and in that observation the card wasn't the Seven of Clubs.

In the popular version of the SB problem, there are four possible states that can occur. Three can be observed; and because of the amnesia drug, they are all independent to SB as an observer. Regardless of whether she can know the day, it is part of the observation. Since she is awake, (Heads, Tuesday) is eliminated - as an observation, not from the experiment as a whole - and the updated probability - for this observation, not the experiment as a whole - is 1/3.

Now, you can use this problem to evaluate epistemic probability. It isn't really an epistemic problem, but I suppose you can apply it. The answer is 1/3, and the correct epistemic solution is the one that says so.

Replies from: Ape in the coat
comment by Ape in the coat · 2024-02-15T06:31:51.806Z · LW(p) · GW(p)

You keep repeating the same points, and they are all based on faulty assumptions, which you would have already seen if you had properly evaluated my example with balls in a box. Let me explicitly do it for you:

Two coins are tossed. Then, if it's not Heads Heads, one ball is put into a box. Then the second coin is turned to the other side and again, if it's not Heads Heads, a ball is put into the box. After this procedure is done, you are given a random ball from the box. What is the probability that the first coin is Heads after you've got the ball?

The correct answer here is unambiguously 1/2, which we can check by running the experiment multiple times. On every iteration you get only one ball, and on 1/2 of them the first coin is Heads. Getting a ball is not evidence in favor of anything, because you get it regardless of the outcome of the coin toss.

But if we reason about this problem the same way you try to reason about Sleeping Beauty, we inevitably arrive at the conclusion that it has to be 1/3. After all, there are four equiprobable possible states {HH, TT, HT, TH}. The ball you've just got couldn't have been put in the box on HH, so we have to update to three equiprobable states {HT, TH, TT}, and the only one of them where the first coin is Heads is HT. P(HT) = 1/3.

This shows that such a reasoning method can't generally produce correct answers. So when you applied it to the Two-Coin-Toss version of Sleeping Beauty, you didn't actually show that 1/3 is the correct answer.
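
A minimal simulation sketch of the balls-in-a-box setup (my own code, assuming the procedure exactly as described above):

    import random

    n = 200_000
    first_coin_heads = 0

    for _ in range(n):
        c1, c2 = random.choice("HT"), random.choice("HT")
        box = []
        if not (c1 == "H" and c2 == "H"):
            box.append("ball")
        c2 = "T" if c2 == "H" else "H"   # turn the second coin to its other side
        if not (c1 == "H" and c2 == "H"):
            box.append("ball")
        random.choice(box)               # the box is never empty, so you always get a ball
        if c1 == "H":
            first_coin_heads += 1

    print("P(first coin is Heads | you got a ball) ~", first_coin_heads / n)   # ~0.5, not 1/3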

Replies from: JeffJo, Throwaway2367
comment by JeffJo · 2024-02-16T21:03:19.092Z · LW(p) · GW(p)

I keep repeating, because you keep misunderstanding how my example is very different than yours.

In yours, there is one "sampling" of the balls (that is, a check on the outcome and a query about it). This one sampling is done only after two opportunities to put a ball into the box have occurred. The probability you ask about depends on what happened in both. Amnesia is irrelevant. She is asked just once.

In mine, there are two "samplings." The probability in each is completely independent of the other. Amnesia is important to maintain the independence.

SPECIFICALLY: SB's belief is based entirely on what happens in these steps:

  1. Two coins are randomly arranged so that each of the four combinations {HH, HT, TH, TT} has a 25% chance to be the outcome.
  2. If the random combination is HH, one option happens that does not involve asking for a probability. Otherwise, another option happens, and it does involve asking for a probability.
  3. SB has full knowledge of these three steps, and knows that the second option was chosen. She can assign a probability based ENTIRELY on these three steps.

This happens twice. What you seem to ignore is that the method used to arrange the coins is different in the first pass through these three steps and in the second. In the first, it is flipping the coins. In the second, it is a modification of those flips. But BECAUSE OF AMNESIA, this modification does not, in any way, affect SB's assessment that the sample space is {HH, HT, TH, TT}, or that each combination has a 25% chance to be the outcome.

Her answer is unambiguously 1/3 anytime she is asked.

Replies from: Ape in the coat
comment by Ape in the coat · 2024-02-17T06:31:09.992Z · LW(p) · GW(p)

you keep misunderstanding how my example is very different than yours.

I understand that they are different; that's the whole point. They are different in such a way that we can agree that the answer to my problem is clearly 1/2, while we can't agree on the answer to your problem.

But none of their differences actually affect the mathematical argument you have constructed, so the way you arrive at the answer 1/3 in your problem would produce the same answer in mine.

Amnesia is irrelevant

What amnesia does in Sleeping Beauty is ensure that the Beauty can't order the outcomes. So when she is awakened she doesn't know whether it's the first awakening or the second. She is unable to observe the event "I've been awakened twice in this experiment". A similar effect is achieved by the fact that she is given a random ball from the box: she doesn't know whether it's the first ball or the second, and she can't directly observe whether there are two balls in the box or only one.

In mine, there are two "samplings."

Which is completely irrelevant to your mathematical argument about four equiprobable states, because you've constructed it in such a manner that the same probabilities are assigned to all of them regardless of whether the Beauty is awake or not. Your whole argument is based on "there are four equiprobable states and one of them is incompatible with the observations"; it is not dependent on the number of observations.

Now, there is a different argument that you could've constructed that would take advantage of the two awakenings. You could've said that when the first coin is Tails there are twice as many awakenings as when it's Heads, and claim that we should interpret this as P(Heads) = 1/3, but it's very much not the argument you were talking about. In a couple of days, in my next post, I'm explicitly exploring both of them.

The probability in each is completely independent of the other.

This is wrong. And it's very easy to check. You may simulate your experiment a large number of times, writing down the states of the coins on every awakening, and notice that there is a clear way to predict the next token better than chance:

if i-th token == TH and (i-1)-th token != TT then
	(i+1)-th token = TT

else if i-th token == TT and (i-1)-th token != TH then
	(i+1)-th token = TH
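
Here is a runnable sketch of that check (my own code, not JeffJo's; it assumes the two-coin procedure exactly as described in the numbered steps above, records the coin states at each awakening over many iterations, and scores the prediction rule):

    import random

    tokens = []
    for _ in range(100_000):
        c1, c2 = random.choice("HT"), random.choice("HT")
        for _ in range(2):                  # two passes; C2 is turned over in between
            if not (c1 == "H" and c2 == "H"):
                tokens.append(c1 + c2)      # the coin states at this awakening
            c2 = "T" if c2 == "H" else "H"  # turn the second coin over

    fired = correct = 0
    for i in range(1, len(tokens) - 1):
        prediction = None
        if tokens[i] == "TH" and tokens[i - 1] != "TT":
            prediction = "TT"
        elif tokens[i] == "TT" and tokens[i - 1] != "TH":
            prediction = "TH"
        if prediction is not None:
            fired += 1
            correct += tokens[i + 1] == prediction

    # Each token value (HT, TH, TT) occurs about 1/3 of the time overall,
    # so blind guessing is right about 1/3 of the time.
    print("rule fired", fired, "times; accuracy when it fired:", correct / fired)

If the setup is as described, the rule is right essentially every time it fires, which is exactly the correlation being pointed out.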

This is absolutely not the case in a situation where your argument actually works:

Two coins are tossed; on Heads Heads the event doesn't happen, on every other outcome it does. The event has happened. What is the probability that the first coin came up Heads?

What you seem to ignore is that the method used to arrange the coins is different in the first pass through these three steps

Ignore? On the contrary. This is the exact reason why your argument doesn't work. You treat correlated events as independent. That's what I've been trying to explain to you the whole time, and why I brought up the problem with balls being put in the box, because this kind of mistake is more obvious there.

But BECAUSE OF AMNESIA, this modification does not, in any way, affect SB's assessment that the sample space is {HH, HT, TH, TT}, or that each has a 25% chance to be the outcome

I suppose this is our crux.

I see two possible avenues for disagreement: about the territory and about the map

First, about the territory: does having amnesia actually modify the statistical properties of the experiment you are participating in? Do we agree that it does not?

Second, a statement about the map, which (correct me if I'm wrong) you actually hold: that the Beauty should reason about her awakenings as independent because this represents her new state of knowledge due to amnesia.

This would be a correct statement if the Beauty were made to forget that the events are correlated, if on the awakenings she had different information about which experiment she is participating in than she had before she was put to sleep.

But in our case the Beauty remembers the design of the experiment. She is aware that the states of the coins are not independent between the two passes, and if she reasons as if they were, she makes a mistake.

Her answer is unambiguously 1/3 anytime she is asked.

Her answer is 1/3 to the question "What is the probability that the coin is Heads in a random awakening throughout multiple iterations of such an experiment".

Her answer is 1/2 to the question "What is the probability that the coin is Heads in this particular experiment".

But I don't think that ambiguity is really the problem here.

comment by Throwaway2367 · 2024-02-15T09:55:24.271Z · LW(p) · GW(p)

Why couldn't the ball I've just got have been put into the box on HH? On HH, after we turn the second coin we get HT, which is not HH, so a ball is put into the box, no?

Replies from: Ape in the coat
comment by Ape in the coat · 2024-02-15T10:10:03.058Z · LW(p) · GW(p)

Well, sure, but then it would mean that the ball wasn't put into the box on HH; it was put into the box on HT.

If this explanation still feels confusing, as if something unlawful is going on - it's because it is. It's the exact kind of sleight of hand that JeffJo uses to show that 1/3 is the correct answer to the Two-Coin-Toss version of Sleeping Beauty. If you are able to spot the mistake here, you should be able to spot it in his reasoning as well.

Replies from: Throwaway2367
comment by Throwaway2367 · 2024-02-15T10:18:25.747Z · LW(p) · GW(p)

I see; I haven't yet read that one. But yes, we should be clear what we denote with HH/HT/TT/TH: the coins before, or after, the turning of the second coin.

comment by JeffJo · 2024-02-05T16:32:41.067Z · LW(p) · GW(p)

The same problem statement does not mention Monday, Tuesday, or describe any timing difference between a "mandatory" waking and an "optional" one. (There is another element that is missing, that I will defer talking about until I finish this thought.) It just says you will be wakened once or twice. Elga added these elements as part of his solution. They are not part of the problem he asked us to solve.

But that solution added more than just the schedule of wakings. After you are "first awakened," what would change if you are told that the day is Monday? Or that the coin landed on Tails (and you consider what day it is)? This is how Elga avoided any consideration, given his other additions, of what significance to attach to Tuesday, after Heads. That was never used in his solution, yet could be the crux of the controversy.

I have no definitive proof, but I suspect that Elga was already thinking of his solution. He included two hints to the solution: one was "two days," although days were never mentioned again; the other was "when first awakened." Both apply to the solution, not the problem as posed. I think "first awakened" simply meant before you could learn information.

+++++

You point out that, as you are trying to interpret it, SB cannot make the determination whether this is a "first awakening." But the last element that is usually included in the problem, but was not in what Elga actually asked, is that the question is posed to you before you are first put to sleep. So the issue you raise - essentially, whether the question is asked on Tuesday, after Heads - is moot. The question already exists, as you wake up. It applies to that moment, regardless of how many times you are wakened.

answer by Rafael Harth · 2024-01-30T17:49:45.751Z · LW(p) · GW(p)

I exchanged a few PMs with a friend who moved my opinion from 1/2 to 1/3, but it was when I hadn't yet thought about the problem much. I'd be extremely surprised if I ever change my mind now (still on 1/3). I don't remember the arguments we made.

comment by Ape in the coat · 2024-01-30T18:30:15.369Z · LW(p) · GW(p)

Is your current certainty in the correctness of thirdism based on some specific arguments that you remember? I know that there are a lot of arguments for thirdism, but I'd like to find the strongest ones.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2024-01-30T18:52:02.218Z · LW(p) · GW(p)

After the conversation, I went on to think about anthropics a lot and worked out a model in great detail. It comes down to something like ASSA (absolute self-sampling assumption). It's not exactly the same and I think my justification was better, but that's the abbreviated version.

Replies from: Ape in the coat
comment by Ape in the coat · 2024-01-30T18:57:09.572Z · LW(p) · GW(p)

Thanks! I'll look more into that.

answer by Malentropic Gizmo · 2024-02-18T17:11:30.116Z · LW(p) · GW(p)

Initially, I had a strong feeling/intuition that the answer was 1/3, but felt that because you can also construct a betting situation for 1/2, the question was not decided. In general, I've always found betting arguments the strongest forms of arguments: I don't much care how philosophers feel about what the right way to assign probabilities is; I want to make good decisions in uncertain situations, for which betting arguments are a good abstraction. "Rationality is systematized winning" and all that.

Then I read this comment [LW(p) · GW(p)], which showed me that I made a mistake by accepting the halfer betting situation as an argument for 1/2. In retrospect, I could have avoided this by actually doing the math, but it's an understandable mistake; people have finite time. In particular, this sentence on the Sleeping Beauty Paradox tag page [? · GW] also makes the mistake: "If Beauty's bets about the coin get paid out once per experiment, she will do best by acting as if the probability is one half." No, as the linked comment shows, it is advantageous to bet 1:1 in some interpretations, but that's exactly because the actual probability is 1/3.

Note: there is no rule/axiom that a bet's odds should always correspond with the event's probability; that is something that can be derived in non-anthropic situations assuming rational expected-money-maximizing agents. It's more accurate to call what the above situation points to a scoring rule, and you can make up situations with other scoring rules too: "Sleeping Beauty, but Omega will kick you in nuts/vulva if you don't say your probability is 7/93." In this case it is also advantageous "to behave as if" the probability is 7/93 in some respect, but the probability in your mind should still be the correct one.

comment by Ape in the coat · 2024-02-19T15:36:15.226Z · LW(p) · GW(p)

Thank you for bringing this to my attention. As a matter of fact, in the linked comment Radford Neal is dealing with a weak-man while conveniently assuming that other alternatives "are beyond the bounds of rational discussion", which is very much not the case.

But it is indeed a decent argument that deserves a detailed rebuttal. And I'll make sure to provide it in the future.

Replies from: malentropicgizmo
comment by Malentropic Gizmo (malentropicgizmo) · 2024-02-19T15:43:17.795Z · LW(p) · GW(p)

Please do so in a post; I'm subscribed to those.

answer by the gears to ascension · 2024-02-16T21:42:53.329Z · LW(p) · GW(p)

I confidently switched from 1/3 to 1/2, and then back to 1/3, and then noticed the inconsistency. I am now not certain that the question makes sense as posed at all, and I'm not sure what would fix it, but maybe specifying better why one wants to know the answer, so that it can be answered by decision theory rather than by some objective "which one is true".

comment by Ape in the coat · 2024-02-17T03:49:55.048Z · LW(p) · GW(p)

Can you remember which argument switched you from thirdism to halfism and back?

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2024-02-17T18:36:01.446Z · LW(p) · GW(p)

from third to half: the betting argument that Greg D expresses, more or less. Mostly the first paragraph; I didn't expand it to his second paragraph.

from half back to third: @Tamsin Leake's sequentialized version: you go to sleep. you are woken once on monday and twice on tuesday. each time, your memory is reset. given that you observed yourself wake, is it monday or tuesday?

but wait, I'm not sure any of this makes sense: the anthropic decision theory paper.

except now I'm not sure, in retrospect, whether maybe I found the anthropic decision theory paper before hearing tamsin's argument, and so in fact never really switched back to third, just would have done so if I had still accepted the framing at all?

Replies from: Ape in the coat
comment by Ape in the coat · 2024-02-19T15:47:17.550Z · LW(p) · GW(p)

Thanks!

you go to sleep. you are woken once on monday and twice on tuesday. each time, your memory is reset. given that you observed yourself wake, is it monday or tuesday?

Oh, that's a good one! I think I see how it can prompt the thirder intuition, but do you by chance have a link to the argument as a whole?

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2024-02-19T23:03:15.130Z · LW(p) · GW(p)

No, it was in person. But I think that was more or less the extent of the argument.

answer by Greg D · 2024-02-02T00:25:02.910Z · LW(p) · GW(p)

I was an inveterate thirder until I read a series of articles on repeated betting, which pointed out that in many cases, maximizing expected utility leads to a “heavy tailed” situation in which a few realizations of you have enormous utility, but most realizations of you have gone bankrupt. The mean utility across all realizations is large, but that’s useless in the vast majority of cases because there’s no way to transfer utility from one realization to another. This got me thinking about SB again, and the extent to which Beauties can or can not share or transfer utility between them. I eventually convinced myself of the halfer position.

Here’s the line of reasoning I used. If the coin comes up H, we have one awakening (experience A). If the coin comes up T, we have two awakenings - either in series or in parallel depending on the variant, but in any case indistinguishable. By Bayes, Pr(H|A) = Pr(A|H)Pr(H)/Pr(A). The core insight is that Pr(A|H) = Pr(A|T) = Pr(A) = 1, since you have experience A no matter what the coin flip says. SB is akin to drawing a ball from one of two jars, one of which contains one red ball, and the other of which contains two red balls. Having drawn a red ball, you learn nothing about which jar you drew from.

What about making bets, though? Say that SB is offered a chance to buy a ticket worth $1 if the coin was T, and $0 if it was H. To maintain indistinguishability between the “three Beauties,” each time she is awakened, she must be offered the same ticket. In this case, SB should be willing to pay up to $2/3 for such a ticket. But this is not because the probability of T is really 2/3 - it is because the payoff for T is larger since the bet is made twice in sequence. In the “clones” variant, SB’s valuation of the ticket depends on how she values the welfare of her clone-sister: if she is perfectly selfish she values it at $1/2, whereas if she is perfectly altruistic she values it at $2/3. Again, this is because of variations in the payout - obviously SB’s estimate of the probability of a coin flip cannot depend on whether she is selfish or not!
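
To make the ticket arithmetic concrete, here is a small sketch (my own, under the stated assumptions: the ticket pays $1 on Tails, costs some price c, and is offered at every awakening; when bets settle only once per experiment, duplicate tickets within one experiment are assumed to collapse entirely, both cost and payout):

    from fractions import Fraction

    def ev_per_experiment(price, tickets_that_count_on_tails):
        # Heads (prob 1/2): one awakening, one ticket bought, and it loses.
        # Tails (prob 1/2): k tickets count, each paying $1 and costing `price`.
        k = tickets_that_count_on_tails
        return Fraction(1, 2) * (-price) + Fraction(1, 2) * k * (1 - price)

    for k, label in [(2, "settled at every awakening"), (1, "settled once per experiment")]:
        c = Fraction(k, k + 1)   # break-even price: c = k / (k + 1)
        print(label, "-> break-even price", c, "; EV at that price:", ev_per_experiment(c, k))

The two break-even prices, 2/3 and 1/2, fall directly out of the payout convention rather than out of anything Beauty learns on waking.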

A lot of anthropic arguments depend on simply "counting up the observers" and using that as a proxy for probability. This is illegitimate, because conditional probabilities must always be normalized to sum to one: Pr(Monday|T) + Pr(Tuesday|T) = 1/2 + 1/2 = 1. Any time you use conditional probability you have to be very careful: Pr(Monday|T) != Pr(Monday and T).

comment by JeffJo · 2024-02-05T16:54:43.484Z · LW(p) · GW(p)

I try to avoid any discussion of repeated betting, because of the issues you raise. Doing so addresses the unorthodox part of an unorthodox problem, and so can be used to get either solution you prefer.

But that unorthodox part is unnecessary. In my comment to pathos_bot, I pointed out that there are significant differences between the problem as Elga posed it and the problem as it is used in the controversy. In the posed problem, the probability question is asked before you are put to sleep, and there is no Monday/Tuesday schedule. In his solution, Elga never asked the question upon waking, and he used the Monday/Tuesday schedule to implement the problem but inadvertently created the unorthodox part.

There is a better implementation, that avoids the unorthodox part.

Before being put to sleep, you are told that two coins will be flipped after you are put to sleep, C1 and C2. And that, at any moment during the experiment, we want to know the degree to which you believe that coin C1 came up Heads. Then, if either coin is showing Tails (but not if both are showing Heads):

  1. You will be wakened.
  2. Remember what we wanted to know? Tell us your degree of belief.
  3. You will be put back to sleep with amnesia.

Once this is either skipped or completed, coin C2 is turned over to show its other side. And the process is repeated.

This implements Elga's problem exactly, and adds less to it than he did. But now you can consider just what has happened between looking at the coins to see if either is showing Tails, and now. When examined, there were four equiprobable combinations of the two coins: HH, HT, TH, and TT. Since you are awake, HH is eliminated. Of the three combinations that remain, C1 landed on Heads in only one.
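
For reference, here is a simulation sketch of this two-coin procedure (my own code), tallying how often C1 is Heads both per awakening and per experiment; which of those two tallies the question asks about is precisely what the rest of this thread disputes:

    import random

    experiments = 100_000
    awakenings = awakenings_c1_heads = experiments_c1_heads = 0

    for _ in range(experiments):
        c1, c2 = random.choice("HT"), random.choice("HT")
        experiments_c1_heads += c1 == "H"
        for _ in range(2):                      # run the procedure, turn C2 over, run it again
            if not (c1 == "H" and c2 == "H"):   # wake and ask unless both coins show Heads
                awakenings += 1
                awakenings_c1_heads += c1 == "H"
            c2 = "T" if c2 == "H" else "H"

    print("fraction of awakenings where C1 is Heads:", awakenings_c1_heads / awakenings)     # ~1/3
    print("fraction of experiments where C1 is Heads:", experiments_c1_heads / experiments)  # ~1/2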

Replies from: Ape in the coat, Ape in the coat
comment by Ape in the coat · 2024-02-05T17:12:18.152Z · LW(p) · GW(p)

So, could you answer the initial question?

Were you always a thirder? Or is this two-coin version of Sleeping Beauty what made you one? Would you change your mind if the two-coin case were found to be flawed?

Replies from: JeffJo
comment by JeffJo · 2024-02-05T18:59:13.954Z · LW(p) · GW(p)

I skipped answering the initial question because I've always been a thirder. I'm just trying to comment on the reasons people have given. Mostly how many will try to use fuzzy logic - like "isn't the question just asking about the coin flip?" - in order to make the answer that they find intuitive sound more reasonable. I find that people will tend to either not change their answer because they don't want to back down from their intuition, or oscillate back and forth, without recalling why they picked an answer a few weeks later. Many of those will end up with "it depends on what you think the question is."

comment by Ape in the coat · 2024-02-06T06:48:30.264Z · LW(p) · GW(p)

Suppose we have the same two-coin setting, but instead of steps 1, 2, 3, a ball is put into the box.

Then, after the procedure is done and there are either one or two balls in the box, you are to be given random balls from it as long as there are any. You've just gotten a random ball. Should you, by the same logic, assume that the probability of getting a second ball is 2/3?

Replies from: JeffJo
comment by JeffJo · 2024-02-06T12:31:53.152Z · LW(p) · GW(p)

You'll need to describe that better. If you replace (implied by "instead") step 1, you are never wakened. If you add "2.1 Put a ball into the box" and "2.2 Remove balls from the box, one by one, until there are no more", then there are never two balls in the box.

Replies from: Ape in the coat
comment by Ape in the coat · 2024-02-06T13:48:59.471Z · LW(p) · GW(p)

I mean that there is no sleeping and there are no awakenings; instead, there are balls in a box that follow the same logic:

Two coins are tossed; if both are Heads, nothing happens, otherwise a ball is put into a box. Then the second coin is turned to the other side and, once again, a ball is placed into the box unless both coins are Heads. Then you are randomly given a ball from the box.

Should you reason that there is another ball in the box with probability 2/3? After all, there are four equiprobable combinations: HH, TT, HT, TH. Since the ball you were given was put into the box, that couldn't have happened when the outcome was HH, so we are left with HT, TH and TT.

Replies from: JeffJo
comment by JeffJo · 2024-02-12T14:12:28.361Z · LW(p) · GW(p)

This variation of my two-coin version is just converting my version of the problem Elga posed back into the one Elga solved. And if you leave out the amnesia step (you didn't say), it is doing so incorrectly.

The entire point of the two-coin version was that it eliminated the obfuscating details that Elga added. So why put them back?

So please, before I address this attempt at diversion in more detail, address mine.

  1. Do you think my version accurately implements the problem as posed?
  2. Do you think my solution, yielding the unambiguous answer 1/3, is correct? If not, why not?
Replies from: Ape in the coat
comment by Ape in the coat · 2024-02-12T14:46:39.382Z · LW(p) · GW(p)

Your Two-Coin-Toss version is isomorphic to the classical Sleeping Beauty problem, with everything this entails.

The problem Elga solved in his paper isn't actually the Sleeping Beauty problem - more on this in my next post.

Likewise, the solution you propose to your Two Coin Toss problem is actually solving a different problem:

Two coins are tossed. If the outcome is HH you are not awakened; on every other outcome you are awakened. You are awakened. What is the probability that the first coin came up Heads?

Here your reasoning is correct. There are four equiprobable possible outcomes, and awakening eliminates one of them. A person who participates in the experiment couldn't be certain to experience an awakening, and that's why the awakening is evidence in favor of Tails. 1/3 is the unambiguously correct answer.

But in the Two-Coin-Toss version of Sleeping Beauty this logic doesn't apply. It would prove too much. And to see why that's the case, you may investigate my example with balls being put in the box instead of awakenings and memory erasure.

Replies from: JeffJo
comment by JeffJo · 2024-02-16T21:05:29.850Z · LW(p) · GW(p)

My problem setup is an exact implementation of the problem Elga asked. Elga's version adds some detail that does not affect the answer, but it has created more than two decades of controversy.

The answer is 1/3.

answer by Kristin Lindquist · 2024-02-18T19:00:16.881Z · LW(p) · GW(p)

I have weak intuitions for these problems, and on net they make me feel like my brain doesn't work very well. With that to disclaim my taste, FWIW I think your posts are some of the most interesting content on modern day LW.

It'd be fun to hear you debate anthropic reasoning with Robin Hanson, esp. since you invoke grabby aliens [LW · GW]. Maybe you could invite yourself onto Robin & Agnes' podcast.

comment by Ape in the coat · 2024-02-19T15:41:42.339Z · LW(p) · GW(p)

FWIW I think your posts are some of the most interesting content on modern day LW. 

Thank you for such high praise! It was unexpected and quite flattering.
