Posts

Comments

Comment by Radford Neal (radford-neal) on Sleeping Beauty Problem, Anthropic Principle and Perspective Reasoning · 2019-03-20T19:47:55.084Z · LW · GW

Sleeping Beauty with cookies is an almost-realistic situation. I could easily create an analogous situation that is fully realistic (e.g., by modifying my Sailor's Child problem). Beauty will decide somehow whether or not to eat a cookie. If Beauty has no rational basis for making her decision, then I think she has no rational basis for making any decision. Denial of the existence of rationality is of course a possible position to take, but it's a position that by its nature is one that it is not profitable to try to discuss rationally.

Comment by Radford Neal (radford-neal) on Sleeping Beauty Problem, Anthropic Principle and Perspective Reasoning · 2019-03-18T15:21:37.516Z · LW · GW

Regarding cloning, we have very good reason to think that good-enough memory erasure is possible, because this sort of thing happens in reality - we do forget things, and we forget all events after some traumas. Moreover, there are plausible paths to creating a suitable drug. For example, it could be that newly-created memories in the hippocampus are stored in molecular structures that do not have various side-chains that accumulate with time, so a drug that just destroyed the molecules without these side-chains would erase recent memories, but not older ones. Such a drug could very well exist even if consciousness has a quantum aspect to it that would rule out duplication.

I don't see how your argument that the first person "me" perspective renders probability statements "invalid" can apply to Sleeping Beauty problems without also rendering invalid all uses of probability in practice. When deciding whether or not to undergo some medical procedure, for example, all the information I have about its safety and efficacy is of "third-person" form. So it doesn't apply to me? That can't be right.

It also can't be right that Beauty has "no rational way of deciding to eat the cookie or not". The experiment is only slightly fantastical, requiring a memory-erasing drug that could well be invented tomorrow, without that being surprising. If your theory of rationality can't handle this situation, there is something seriously wrong with it.

Comment by Radford Neal (radford-neal) on Sleeping Beauty Problem, Anthropic Principle and Perspective Reasoning · 2019-03-17T02:23:24.742Z · LW · GW

I don't think the issue of whether "cloning" is possible is actually crucial for this discussion, but since this relates to a common lesswrong sort of assumption, I'll elaborate on it. I do think that making a sufficiently accurate copy is probably possible in principle (but obviously not now, and perhaps never, in practice). However, I don't think this has been established. It seems conceivable that quantum effects are crucial to consciousness - certainly physics gives us no fundamental reason to rule this out. If this is true, then "cloning" (not the usual use of the word) by measuring the state of someone's body and then constructing a duplicate will not work - the measurement will not be adequate to produce a good copy. This possibility is compatible with there being some very good memory-erasing drug, which need only act on the quantum state of the person in a suitable way, without "measuring" it in its entirety. So I don't agree with your statement that "if sleeping beauty problem is physically possible the cloning example must be as well". And even if true in principle, there is a vast difference in practice between developing a slightly better amnesia drug - I wouldn't be surprised if this was done tomorrow - and developing a way of measuring the state of someone's brain accurately enough to produce a replica, and then also developing a way of constructing such a replica from the measurement data - my best guess is that this will never be possible.

This practical difference relates to a different sense in which your cloning example is "fantastic". Even if we were sure that it was possible in principle to "clone" people, we should not be sure that the methods of reasoning that we have evolved (biologically and culturally) will be adequate in a situation where this sort of thing happens. It would be like asking a human from 100,000 years ago to speculate on the social effects of Twitter. With social experience confined to a tribe of a dozen closely-related people, with occasional interactions with a few other tribes, not to mention a total ignorance of the relevant technology, they would be utterly incompetent to reason about how billions of people will interact when reacting to online text and video postings.

In this discussion, I get the impression that considering fantastical things like cloning leads you to discard common-sense realism. Uncritically applying our current common-sense notions might indeed be invalid in a world where you can be duplicated - with the duplicate having perhaps first had its memories maliciously edited. There are lots of interesting, and difficult, issues here. But these are not issues that need to be settled in order to settle the Sleeping Beauty problem!

In your cloning example, you abandon common sense realism for no good reason. Since you talk about an original versus the clone, I take it that you see the experimenters as measuring the state of the original, without substantially disturbing it, and then creating a copy (as opposed to using a destructive measurement process, and then creating two copies, since then there is obviously no "original" left). In this situation, the distinction between the original and the copy is completely clear to any observer of the process. When they wake up, both the copy and the original do not know whether or not they are the original, but nevertheless one is the original and one is not. They can find out simply by asking the observer (and of course there are other possible ways - as is true for any fact about the world). Before they find out, they can assess the probability that they are the original, if that amuses them, or is necessary for some purpose. Nothing about this situation justifies abandoning the usual idea that probabilities regarding facts about the world are meaningful.

Regarding the cookies, you say "there is no strategy to maximize a "beauty in the moments" overall pleasure". So once again, I ask: How is Beauty supposed to decide whether to eat a cookie or not?

Comment by Radford Neal (radford-neal) on Sleeping Beauty Problem, Anthropic Principle and Perspective Reasoning · 2019-03-13T14:35:15.624Z · LW · GW

"The probability of me being a man" in the anthropic sense means the probability of me being born into this world as a human male. Or it can been seen as the probability of my soul getting embodied as a human male. ... even though "I'm a man" is a valid statement "the probability of me being a man" does not exist

Here, you have imported some highly questionable ideas, which would seem to be not at all necessary for analysing the Sleeping Beauty problem. This is my core objection to how Sleeping Beauty is used - it's an almost-non-fantastical problem that people take to have implications for these sorts of anthropic arguments, but when correctly analysed, it does not actually say anything about such issues, except in a negative sense of revealing some arguments to be invalid.

You should also note that your use of "probability" here does not correspond to any use of this word in normal reasoning. To see this, consider "the probability of my having blue eyes". I take this to be in the same class as "the probability of me being a man", but it allows for less-ridiculous thought experiments. Suppose you are a member of an isolated desert tribe. There are no mirrors, and no pools of water in which you could see your reflection. The tribe also has a strong taboo against telling anyone what colour their eyes are. So you don't know what colour your eyes are. Do you maintain that "the probability that my eyes are blue" does not exist? Can't you look at the other members of the tribe, see what fraction have blue eyes, and take that as the probability that you have blue eyes? Note that this may have practical implications regarding how much care you should take to avoid sun exposure, to reduce your chance of developing glaucoma.

I assume that you do think "the probability that my eyes are blue" is meaningful in this scenario. You seem to have in mind only something like prior probabilities, not conditional on any observations. But all actual practical uses of probability are conditional on observations, so your discussion is reminiscent of the proverbial question of "how many angels can dance on the head of a pin?".

I also agree maximizing someone's own earning would force the decisions to reflect the probability.

I'm not sure what exactly you're agreeing about here. Do you maintain that "the probability that it is Monday" does not exist, until Beauty happens to remember the other experiment, at which point it suddenly becomes meaningful? If so, why can't Beauty just imagine that there is some such practical reason to want to know whether it is Monday, calculate what the probability is, and then take that to be the probability of it being Monday even though she doesn't actually need to make a decision for which that probability would be needed? Seems better than claiming that the probability doesn't exist, even though this procedure gives it a well-defined value...

The whole reason sleeping beauty problem is related to anthropic reasoning is because it involves an observer duplication. ... So they should have distinct rewards. Monday beauty's correct decision should benefit Monday Beauty alone...

There's a methodological issue here. I've presented a variation on Sleeping Beauty that I claim shows that "the probability that it's Monday" has to be a meaningful concept for Beauty. You say, "but if I look at a different variation, that argument doesn't go through." Why should that matter, though? If my variation shows that the probability is meaningful, that should be enough. If this shows that Sleeping Beauty is not related to anthropic reasoning, so be it.

However, there's no problem making the reward be for "Beauty in the moment". Suppose that when Beauty wakes up, she sees a plate of cookies. She recognizes them as being freshly baked by a bakery she knows. She also knows that on Mondays, but not Tuesdays, they put an ingredient in the cookies to which she is mildly allergic, causing immediate, painful stomach cramps. She also knows that the cookies are quite delicious. Should she eat a cookie? Adjust the magnitudes of possible pleasure and pain as desired to make the question interesting. Shouldn't the probability of it being Monday be meaningful?
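For concreteness, the cookie decision is just ordinary expected utility, once "the probability that it's Monday" is admitted to be meaningful. Here is a minimal sketch; the particular pleasure and pain magnitudes are illustrative numbers of my own choosing, not part of the problem statement:

```python
def should_eat(p_monday, pleasure=1.0, pain=4.0):
    # Eating always yields the pleasure; on Monday it also incurs the
    # pain from the allergen. Eat iff expected utility is positive:
    #   pleasure - p_monday * pain > 0
    return pleasure > p_monday * pain

# With these magnitudes, anyone assigning P(Monday) above 1/4 should
# decline the cookie, and anyone assigning less should eat it.
print(should_eat(2/3))   # a Thirder declines
print(should_eat(0.2))   # a sufficiently low P(Monday) says eat
```

The point is only that the decision requires *some* value for P(Monday); adjusting the magnitudes moves the threshold, but never removes the dependence on the probability.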

Suppose when you go to sleep tonight a clone of you would be created and put into an identical room. The clone is highly accurate it retains the memory good enough so he fully believes he is the one fall asleep yesterday.

Note that this is now a completely fantastical thought experiment, in contrast to the usual Sleeping Beauty problem. It may be impossible in principle, given the quantum no-cloning theorem. I also don't know how this is supposed to work in conjunction with your previous reference to "souls". I don't think this extreme variation actually shows anything interesting, but if it did, you'd need to ask yourself whether the need to resort to this fantasy indicates that you're in "angels dancing on the head of a pin" territory.

Comment by Radford Neal (radford-neal) on Sleeping Beauty Problem, Anthropic Principle and Perspective Reasoning · 2019-03-11T17:33:15.939Z · LW · GW

I'm afraid this makes no sense to me. I think this comes from my not understanding how the concept of a "reference class" can possibly work. So I have no idea what it could mean to "observe the world from the perspective of any human that is male", if observing from that "perspective" is supposed to change the probability (or render the probability meaningless) of some statement that I would take to be about the actual, real, world.

As I've pointed out before, the Sleeping Beauty problem is only barely a thought experiment - with a slight improvement over current memory-affecting drugs, it would be possible to actually run the experiment. It's not like a thought experiment involving hypothetical computer simulation of people's brains, or some such, in which one might perhaps think that common sense reasoning is not applicable.

So consider an actual run of the experiment. Suppose that at the time Beauty agrees to take part in the experiment, she fails to remember that she had already agreed to participate in a different experiment on Monday afternoon. The Sleeping Beauty experimenters have promised to pay her $120 if she completes their experiment, while the other experimenters have promised to pay her $120+X, and her motivation is to maximize the expected amount of her earnings. On some awakening during the Sleeping Beauty experiment, Beauty realizes that she had forgotten about the other experiment, and considers leaving to go participate in it. Of course, she then wouldn't get the $120 for participating in the Sleeping Beauty experiment, but if it's Monday, she would get the $120+X for participating in the other experiment. Now if it's Tuesday, the other experiment has already been cancelled. So she needs to consider the probability that it's Monday in order to make a good decision.

It's not actually relevant to my point, but here is how it seems to me the probabilities work out. Suppose that Beauty has probability p of remembering the other experiment whenever she awakens, and suppose that this is independent for two awakenings (as is consistent with the assumption that her mental state is reset before a second awakening). To simplify matters, let's suppose (and suppose that Beauty also supposes) that p is quite small, so the probability of Beauty remembering the other experiment on both awakenings (if two happen) is negligible.

Since p is small, Beauty's probability for it being Monday given that she has woken and remembered the other experiment should be essentially as usual for this problem, with the answer depending on whether she is a Halfer or a Thirder. (If p were not small, she might need to downgrade the probability of Tuesday because there might be a non-negligible chance that she would have left the experiment on Monday, eliminating the Tuesday awakening.)

If she's a Thirder, when she wakes and remembers the other experiment, she will consider the probability that it is Monday to be 2/3, and will leave for the other experiment if (2/3)(120+X) is greater than 120, that is, if X is greater than 60. If she is a Halfer, it's harder to say, since Halferism is wrong, but let's suppose that she splits the 1/2 probability of two awakenings equally, and hence thinks the probability of it being Monday is 3/4. She will then leave if (3/4)(120+X) is greater than 120, that is, if X is greater than 40.

We can also look at things from a frequentist perspective, and ask what her expected payment is if she always decides to leave when she remembers the other experiment. It will be (1-p)120 + p(120+X) conditional on the coin landing Heads, and (1-p)(1-p)120 + p(120+X) conditional on the coin landing Tails, for a total expectation of (1-(3/2)p)120 + p(120+X), ignoring the p-squared term as being negligible. This simplifies to (1-p/2)120 + pX, which is greater than 120 if X is greater than 60, in agreement with the Thirder reasoning.
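These thresholds are easy to check numerically. The following is just a sketch of the arithmetic above, with the payoffs as described (the small value of p is an arbitrary illustrative choice):

```python
def thirder_leaves(X):
    # Thirder: P(Monday) = 2/3, so leave iff (2/3)(120 + X) > 120
    return (2/3) * (120 + X) > 120

def halfer_leaves(X):
    # Halfer (splitting the 1/2 for Tails equally): P(Monday) = 3/4,
    # so leave iff (3/4)(120 + X) > 120
    return (3/4) * (120 + X) > 120

def expected_if_always_leave(p, X):
    # Expected payment, averaging over the coin flip, if Beauty always
    # leaves on remembering the other experiment; the p-squared term is
    # kept exact here rather than dropped.
    heads = (1 - p) * 120 + p * (120 + X)
    tails = (1 - p) ** 2 * 120 + p * (120 + X)
    return 0.5 * heads + 0.5 * tails

# Break-even points: X = 60 for the Thirder, X = 40 for the Halfer, and
# the frequentist expectation exceeds 120 just when X exceeds (about) 60.
assert not thirder_leaves(59) and thirder_leaves(61)
assert not halfer_leaves(39) and halfer_leaves(41)
p = 0.001
assert expected_if_always_leave(p, 61) > 120 > expected_if_always_leave(p, 59)
```

Keeping the p-squared term shows the exact break-even is X = 60 - 60p, which converges to the Thirder's threshold of 60 as p shrinks.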

In any case, though, if I've understood you correctly, you deny that there is any meaning to "the probability that it's Monday" in this situation. So how is Beauty supposed to decide what to do?

Comment by Radford Neal (radford-neal) on Sleeping Beauty Problem, Anthropic Principle and Perspective Reasoning · 2019-03-10T18:10:36.068Z · LW · GW

This is because in the context of sleeping beauty problem the probabilities of “today being Monday/Tuesday” do not exist. In another word “what’s the probability of today being Monday/Tuesday” are invalid questions.

I think here you depart from common-sense realism, in favour of what I am not sure.

From a common-sense standpoint, it is meaningful for Beauty to consider the probability of it being Monday because she can decide to batter down the door to her room, go outside, and ask someone she encounters what day of the week it is. That she actually has no desire to do this does not render the question meaningless.

Comment by Radford Neal (radford-neal) on The Sleeping Beauty Problem and The Doomsday Argument Can Be Explained by Perspective Inconsistency · 2018-08-15T16:29:44.977Z · LW · GW

What I mean by "someone with those memories exists" is just that there exists a being who has those memories, not that I in particular have those memories. That's the "non-indexical" part of FNC. Of course, in ordinary life, as ordinarily thought of, there's no real difference, since no one but me has those memories.

I agree that one could imagine conditioning on the additional piece of "information" that it's me that has those memories, if one can actually make sense of what that means. But one of the points of my FNC paper is that this additional step is not necessary for any ordinary reasoning task, so to say it's necessary for something like evaluating cosmological theories is rather speculative. (In contrast, some people seem to think that SSA is just a simple extension of the need to account for sampling bias when reasoning about ordinary situations, which I think is not correct.)

Comment by Radford Neal (radford-neal) on The Sleeping Beauty Problem and The Doomsday Argument Can Be Explained by Perspective Inconsistency · 2018-08-13T20:57:23.075Z · LW · GW

I can sort of see what you're getting at here, but to me needing to ask "what question was being asked?" in order to do a correct analysis is really a special case of the need to condition on all information. When we know "the older child in that family is a boy", we shouldn't condition on just that fact when we actually know more, such as "I asked a neighbour whether the older child is a boy or girl, and they said 'a boy'", or "I encountered a boy in the family and asked if they were the older one, and they said 'yes'". Both these more detailed descriptions of what happened imply (assuming truthfulness) that the older child is a boy, but they contain more information than that statement alone, so it is necessary to condition on that information too.

For Technicolor Beauty, the statement (from Beauty's perspective) "I woke up and saw a blue piece of paper" is not the complete description. She actually knows something like "I woke up, felt a bit hungry, with an itch in my toe, opened my eyes, and saw a fly crawling down the wall over a blue piece of paper, which fluttered a bit because the air conditioning was running, and I remembered that the air duct is above that place, though I can't see it behind the light fixture that I can see there, etc.". I argue that she should then condition on the fact that somebody has those perceptions and memories, which can be seen as a third-person perspective fact, though in ordinary life (not strange thought experiments involving AIs, or vast cosmological theories) this is equivalent to a first-person perspective fact. So one doesn't get different answers from different perspectives, and one needn't somehow justify disagreeing with a friend's beliefs, despite having identical information.

Comment by Radford Neal (radford-neal) on The Sleeping Beauty Problem and The Doomsday Argument Can Be Explained by Perspective Inconsistency · 2018-08-10T12:56:40.347Z · LW · GW

I'm not sure what you're saying in this reply. I read your original post as using the island problem to try to demonstrate that there are situations in which using probabilities conditional on all the available information gives the wrong answer - that to get the right answer, you must instead ignore "ad hoc" information (though how you think you can tell which information is "ad hoc" isn't clear to me). My reply was pointing out that this example is not correct - that if you do the analysis correctly, you do get the right answer when you use all the information. Hence your island problem does not provide a reason not to use FNC, or to dismiss the Technicolor Beauty argument.

In the Technicolor Beauty variation, the red and blue pieces of paper on the wall aren't really necessary. Without any deliberate intervention, there will just naturally be numerous details of Beauty's perceptions (both of the external world and of her internal thoughts and feelings) which will distinguish the days. Beauty should of course reason correctly given all this information, but I don't see that there are any subtle aspects to "how" she obtains the information. She looks at the wall and sees a blue piece of paper. I assume she knows that the experimenter puts a red or blue piece of paper on the wall. What is supposed to be the issue that would make straightforward reasoning from this observation invalid?

Comment by Radford Neal (radford-neal) on The Sleeping Beauty Problem and The Doomsday Argument Can Be Explained by Perspective Inconsistency · 2018-08-08T17:38:46.693Z · LW · GW

As you may know, my Full Nonindexical Conditioning (FNC) approach (see http://www.cs.utoronto.ca/~radford/anth.abstract.html) uses the third-person perspective for all inference, while emphasizing the principle that all available information should be used when doing inference. In everyday problems, a third-person approach is not distinguishable from a first-person approach, since we all have an enormous amount of perceptions, both internal and external, that are with very, very high probability not the same as those of any other person. This approach leads one to dismiss the Doomsday Argument as invalid, and to adopt the Thirder position for Sleeping Beauty.

You argue against approaches like FNC by denying that one should always condition on all available information. You give an example purporting to show that doing so is sometimes wrong. But your example is simply mistaken - you make an error somewhat analogous to that made by many people in the Monty Hall problem.

Here is your example (with paragraph breaks added for clarity):

Imagine you are on an exotic island where all families have two children. The island is having their traditional festival of boys' day. On this day it is their custom for each family with a boy to raise a flag next to their door. Tradition also dictates in case someone knocks on the door then only a boy can answer. You notice about 3/4 of the families has raised a flag as expected. It should be obvious that if you randomly choose a family with flags then the probability of that family having two boys is 1/3. You also know if you knock on the door a boy would come to answer it so by seeing him there is no new information.

But not so fast. When you see the boy you can ask him "are you the older or the younger child?". Say he is the older one. Then it can be stated that the older child of the family is a boy. This is new information since I could not know that just by seeing the flag. If both children are boys then the conditional probability of the older kid being a boy is one. If only one child is a boy then the conditional probability of the older kid being a boy is only half. Therefore this new evidence favours the former. As a result the probability of this family having 2 boys can be calculated by bayesian updating to increase to be 1/2. If the child is the younger kid the same reason can still be applied due to symmetry. Therefore even before knocking on the door I should conclude the randomly chosen family's probability of having 2 boys is 1/2 instead of 1/3.

This is absurd. This shows specifying the child base on ad hoc details is clearly wrong. For the same reason I should not specify today or this awakening by ad hoc details observed after waking up, such as the color of the paper.

Your mistake here is in asking the boy "are you the older or younger child?" and then reasoning as if an "older" answer to this question is the same as a "yes" answer to the question "is the older child in this family a boy?".

If you actually ask a neighbor "is the older child in that family a boy?", and get the answer "yes", then it WOULD be correct to infer that the probability of the younger child also being a boy is 1/2. But you didn't do that, or anything equivalent to that, as can be seen from the fact that the question you actually asked cannot possibly tell you that the older child is a girl.

The correct analysis is as follows. Before knowing anything about the family, there are four equally likely possibilities, which we can write as BB, BG, GB, GG, where the first B or G is the sex of the younger child, and the second is the sex of the older child. When you see the flag on the family's house, the GG possibility is eliminated, leaving BB, BG, GB, all having probability 1/3. When a boy answers the door, the probabilities stay the same. After you ask whether the boy is the younger or older child, and get the answer "older", the likelihood function over these three possibilities is 1/2, 0, 1, which when multiplied by 1/3, 1/3, 1/3 and renormalized gives probability 1/3 to BB and probability 2/3 to GB, with zero probability for BG (and GG). If instead the answer is "younger", the result is probability 1/3 for BB and 2/3 for BG.
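This analysis can be verified by direct enumeration. The following sketch writes each family younger-child-first, exactly as in the BB/BG/GB notation above, and conditions on the answer "older":

```python
from fractions import Fraction
from itertools import product

posterior = {}
for younger, older in product("BG", repeat=2):  # each family 1/4 a priori
    if younger == "G" and older == "G":
        continue  # the flag eliminates GG
    # Likelihood of "a boy answered and said he is the older child":
    #   BB -> the answering boy is the older one with probability 1/2
    #   GB (older child a boy) -> 1;  BG (older child a girl) -> 0
    if (younger, older) == ("B", "B"):
        likelihood = Fraction(1, 2)
    else:
        likelihood = Fraction(1) if older == "B" else Fraction(0)
    posterior[younger + older] = Fraction(1, 4) * likelihood

# Renormalize over BB, BG, GB
total = sum(posterior.values())
posterior = {k: v / total for k, v in posterior.items()}
# Given the answer "older": P(BB) = 1/3, P(GB) = 2/3, P(BG) = 0
print(posterior)
```

Exact rationals make the point unambiguous: the "older" answer gives likelihoods 1/2, 0, 1, and the posterior for two boys stays at 1/3, not 1/2.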

There is nothing odd or absurd here. Conditioning on all available information is always the right thing to do (though one can ignore information if one knows that conditioning on it won't change the answer).

Comment by Radford Neal (radford-neal) on Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem. · 2018-07-13T00:23:41.230Z · LW · GW

But the thing is you can't call it "0.5 credence" and have your credence be anything like a normal probability. The Halfer will assign probability 1/2 for Heads and Monday, 1/4 for Tails and Monday, and 1/4 for Tails and Tuesday. Since only the guess on Monday is relevant to the payoff, we can ignore the Tuesday possibility (in which the action taken has no effect on the payoff), and see that a halfer would have a 2:1 preference for Heads. In contrast, a Thirder would give 1/3 probability to Heads and Monday, 1/3 to Tails and Monday, and 1/3 to Tails and Tuesday. Ignoring Tuesday, they're indifferent between guessing Heads or Tails.

With a slight tweak to payoffs so that Tails are slightly more rewarding, the Halfer will make a definitely wrong decision, while the Thirder will make the right decision.

Comment by Radford Neal (radford-neal) on Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem. · 2018-07-13T00:16:22.229Z · LW · GW

Well of course. If we know the right action from other reasoning, then the correct probabilities better lead us to the same action. That was my point about working backwards from actions to see what the correct probabilities are. One of the nice features about probabilities in "normal" situations is that the probabilities do not depend on the reward structure. Instead we have a decision theory that takes the reward structure and probabilities as input and produces actions. It would be nice if the same nice property held in SB-type problems, and so far it seems to me that it does.

I don't think there has ever been much dispute about the right actions for Beauty to take in the SB problem (i.e., everyone agrees about the right bets for Beauty to make, for whatever payoff structure is defined). So if just getting the right answer for the actions was the goal, SB would never have been considered of much interest.

Comment by Radford Neal (radford-neal) on Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem. · 2018-07-12T23:43:46.436Z · LW · GW

A big reason why probability (and belief in general) is useful is that it separates our observations of the world from our decisions. Rather than somehow relating every observation to every decision we might sometime need to make, we instead relate observations to our beliefs, and then use our beliefs when deciding on actions. That's the cognitive architecture that evolution has selected for (excepting some more ancient reflexes), and it seems like a good one.

Comment by Radford Neal (radford-neal) on Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem. · 2018-07-12T23:34:36.111Z · LW · GW

The linked post by ata is simply wrong. It presents the scenario where

Each interview consists of Sleeping Beauty guessing whether the coin came up heads or tails. After the experiment, she will be given a dollar if she was correct on Monday.
In this case, she should clearly be indifferent (which you can call “.5 credence” if you’d like, but it seems a bit unnecessary).

But this is not correct. If you work out the result with standard decision theory, you get indifference between guessing Heads or Tails only if Beauty's subjective probability of Heads is 1/3, not 1/2.

You are of course right that anyone can just decide to act, without thinking about probabilities, or decision theory, or moral philosophy, or anything else. But probability and decision theory have proven to be useful in numerous applications, and the Sleeping Beauty problem is about probability, presumably with the goal of clarifying how probability works, so that we can use it in practice with even more confidence. Saying that she could just make a decision without considering probabilities rather misses the point.

Comment by Radford Neal (radford-neal) on Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem. · 2018-07-12T23:18:50.658Z · LW · GW

The standard question is what probability Beauty should assign to Heads after being woken (on Monday or Tuesday), and not being told what day it is, given that she knows all about the experimental setup. Of course if you change the setup so that she's asked a question on Monday that she isn't on Tuesday, then she will know what day it is (by whether the question was asked or not) and the answer changes. That isn't an interesting sense in which the answer 1/2 is correct. Neither is it interesting that 1/2 is the answer to the question of what probability the person flipping the coin should assigns to Heads, nor to the question of what is seven divided by two minus three...

Comment by Radford Neal (radford-neal) on Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem. · 2018-07-12T23:09:35.181Z · LW · GW

We agree about what the right actions are for the various reward structures. We can then try to work backwards from what the right action is to what probability Beauty should assign to the coin landing Heads after being wakened, in order that this probability will lead (by standard decision theory) to her taking the action we've decided is the correct one.

For your second scenario, Beauty really has to commit to what to do before the experiment, which means this scheme of working backwards from correct decision to probability of Heads after wakening doesn't seem to work. Guessing either Heads or Tails is equally good, but only if done consistently. Deciding after each wakening without having thought about it beforehand doesn't work well, since with the two possibilities being equally good, Beauty might choose differently on Monday and Tuesday, with bad results. Now, if the problem is tweaked with slightly different rewards for guessing Heads correctly than Tails correctly, we can avoid the situation of both guesses being equally good. But the coordination problem still seems to confuse the issue of how to work backwards to the appropriate probabilities (for me at least).

I think it ought to be the case that, regardless of the reward structure, if you work backwards from correct action to probabilities, you get that Beauty after wakening should give probability 1/3 to Heads. That seems to be what happens for all the reward structures where Beauty can decide what to do each day without having to know what she might do or have done the other day.

Comment by Radford Neal (radford-neal) on Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem. · 2018-07-12T21:36:43.958Z · LW · GW

Your second scenario introduces a coordination issue, since Beauty gets nothing if she guesses differently on Monday and Tuesday. I'm still thinking about that.

If you eliminate that issue by saying that only Monday guesses count, or that only the last guess counts, you'll find that Beauty has to assign probability 1/3 to Heads in order to do the right thing by using standard decision theory. The details are in my comment on the post at https://www.lesswrong.com/posts/u7kSTyiWFHxDXrmQT/sleeping-beauty-resolved#aG739iiBci9bChh5D

Or you can say that the payoff for guessing Tails correctly is $0.50 while guessing Heads correctly gives $1.00, so the total payoff is the same from always guessing Heads as from always guessing Tails. In that case, you can see that you get indifference to Heads versus Tails when the probability of Heads is 1/3, by computing the expected return for guessing Heads at one particular time as (1/3)*1.00 versus the expected return for guessing Tails at one particular time of (2/3)*0.50. Clearly you don't get indifference if Beauty thinks the probability of Heads is 1/2.
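The indifference calculation can be checked in a few lines (a sketch I've added, using the $1.00/$0.50 payoffs from the comment above):

```python
# Expected per-awakening returns under the asymmetric payoffs:
# $1.00 for a correct guess of Heads, $0.50 for a correct guess of Tails.

def expected_returns(p_heads):
    """Expected payoff of guessing Heads vs. guessing Tails
    at one particular awakening, given P(Heads) = p_heads."""
    return p_heads * 1.00, (1 - p_heads) * 0.50

thirder_heads, thirder_tails = expected_returns(1 / 3)
halfer_heads, halfer_tails = expected_returns(1 / 2)

print(thirder_heads, thirder_tails)  # both about 1/3: indifferent
print(halfer_heads, halfer_tails)    # 0.5 vs 0.25: Heads looks better
```

A Thirder is exactly indifferent between the two guesses; a Halfer has a definite (and, per the argument above, mistaken) preference for Heads.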

Comment by Radford Neal (radford-neal) on Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem. · 2018-07-12T17:30:50.883Z · LW · GW

Probability is meant to be a useful mental construct, that helps in making good decisions. There's a standard framework for doing this. If you apply it, you find that Beauty makes good decisions only if she assigns a probability of 1/3 to Heads when she is woken. There is no sense in which 1/2 is the correct answer, unless you choose to redefine what probabilities mean, along with the method of using them to make decisions, which would be nothing but a pointless semantic distraction.
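One way to see where 1/3 comes from is to simulate the experiment and count awakenings (my own sketch, not part of the original comment): Heads produces one awakening, Tails produces two.

```python
import random

random.seed(0)
heads_wakenings = 0
total_wakenings = 0
for _ in range(100_000):
    if random.random() < 0.5:
        # Heads: Beauty is woken once, on Monday
        heads_wakenings += 1
        total_wakenings += 1
    else:
        # Tails: Beauty is woken on both Monday and Tuesday
        total_wakenings += 2

ratio = heads_wakenings / total_wakenings
print(ratio)  # close to 1/3
```

The fraction of awakenings that follow a Heads flip converges to 1/3, which is the probability the standard framework assigns from Beauty's position on waking.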

Comment by Radford Neal (radford-neal) on The Fermi Paradox: What did Sandberg, Drexler and Ord Really Dissolve? · 2018-07-09T01:12:54.673Z · LW · GW

I agree that the word "Paradox" was some sort of hype. But I don't think anyone believed it. Nobody plugged in their best guesses for all the factors of the Drake equation, got the result that there should be millions of advanced alien races in the galaxy, of which we see no sign, and then said "Oh my God! Science, math, and even logic itself are broken!" No. They instead all started putting forward their theories as to which factor was actually much smaller than their initial best guess.

Comment by Radford Neal (radford-neal) on The Beauty and the Prince · 2018-06-28T13:13:52.981Z · LW · GW

I've noticed something that may explain some of the confusion. You say above:

...halvers don't believe that you can answer: "If Sleeping Beauty is awake, what are the chance that the coin came up heads?" without de-indexicalising the situation first.

But in the Sleeping Beauty problem as usually specified, the question is what probability Beauty should assign to Heads, not what some external observer should think she should be doing. Beauty is in no doubt about who she is (eg, she's the person who just stubbed her toe on this bedpost here) even though she doesn't know what day of the week it is.

Comment by Radford Neal (radford-neal) on The Beauty and the Prince · 2018-06-27T22:34:41.354Z · LW · GW

You've swapped Monday and Tuesday compared to the usual description of the problem, but other than that, your description is what I am working with. You just have a mistaken intuition regarding how the probabilities relate to decisions - it's slightly non-obvious (but maybe not obvious that it's non-obvious). Note that this is all using completely standard probability and decision theory - I'm not doing anything strange here.

In this situation, as explained in detail in my reply above, Beauty gets the right answer regarding how to bet only if she gives probability 1/3 to Heads whenever she is woken, in which case she is indifferent to guessing Heads versus Tails (as she should be - as you say, it's just a coin flip), whereas if she gives probability 1/2 to Heads, she will have a definite preference for guessing Heads. If we give guessing Heads a small penalty (say on Monday only, to resolve how this works if her guesses differ on the two days), in order to tip the scales away from indifference, the Thirder Beauty correctly guesses Tails, which does indeed maximize her expected reward, whereas the Halfer Beauty does the wrong thing by still guessing Heads.

Comment by Radford Neal (radford-neal) on The Beauty and the Prince · 2018-06-27T18:30:30.177Z · LW · GW

No, you get the wrong answer in your second scenario (with -1, 0, or +1 payoff) if you assign a probability of 1/2 to Heads, and you get the right answer if you assign a probability of 1/3.

In this scenario, guessing right is always better than guessing wrong. Being right rather than wrong either (A) gives a payoff of +1 rather than -1, if you guess only once, or (B) gives a payoff of +1 rather than 0, if you guess correctly another day, or (C) gives a payoff of 0 rather than -1, if you guess incorrectly another day. Since the change in payoff for (B) and (C) are the same, one can summarize this by saying that the advantage of guessing right is +2 if you guess only once (ie, the coin landed Heads), and +1 if you guess twice (ie, the coin landed Tails).

A Halfer will compute the difference in payoff from guessing Heads rather than Tails as (1/2)*(+2) + (1/2)*(-1) = 1/2, and so they will guess Heads (both days, presumably, if the coin lands Tails). A Thirder will compute the difference in payoff from guessing Heads rather than Tails as (1/3)*(+2) + (2/3)*(-1) = 0, so they will be indifferent between guessing Heads or Tails. If we change the problem slightly so that there is a small cost (say 1/100) to guessing Heads (regardless of whether this guess is right or wrong), then a Halfer will still prefer Heads, but a Thirder will now definitely prefer Tails.

What will actually happen without the small penalty is that both the Halfer and the Thirder will get an average payoff of zero, which is what the Thirder expects, but not what the Halfer expects. If we include the 1/100 penalty for guessing Heads, the Halfer has an expected payoff of -1/100, while the Thirder still has an expected payoff of zero, so the Thirder does better.
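The claim that both strategies actually average zero payoff can be checked by simulation (a sketch I've added, assuming the payoff rules described above: one guess at +1/-1 under Heads, and under Tails a combined payoff of +1, 0, or -1 for two, one, or zero correct guesses; with a fixed strategy, both Tails-day guesses agree):

```python
import random

random.seed(1)

def average_payoff(strategy, trials=100_000):
    """Average payoff of always guessing `strategy` ('H' or 'T').
    Heads: one guess, +1 if right, -1 if wrong.
    Tails: two identical guesses, so +1 if right, -1 if wrong."""
    total = 0
    for _ in range(trials):
        coin = 'H' if random.random() < 0.5 else 'T'
        total += 1 if strategy == coin else -1
    return total / trials

avg_h = average_payoff('H')
avg_t = average_payoff('T')
print(avg_h, avg_t)  # both near 0
```

Both strategies come out near zero, which is what the Thirder expects; the Halfer's calculation of a +1/2 advantage for guessing Heads does not show up in the actual payoffs.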

---

If today you choose chocolate over vanilla ice cream, and yesterday you did the same, and you're pretty sure that you will always choose chocolate over vanilla, is your decision today really a decision not for one ice cream cone but for thousands of cones? Not by any normal idea of what it means to "decide".

Comment by Radford Neal (radford-neal) on The Beauty and the Prince · 2018-06-27T03:29:39.867Z · LW · GW

What do you mean by "scoring it twice"? You seem to have some sort of betting/payoff scheme in mind, but you haven't said what it is. I suspect that as soon as you specify some scheme, it will be clear that assigning probability 1/3 to Heads gives the right decision when you apply standard decision theory, and that you don't get the right decision if you assign probability 1/2 to Heads and use standard decision theory.

And remember, Beauty is a normal human being. When a human being makes a decision, they are just making one decision. They are not simultaneously making that decision for all situations that they will ever find themselves in where the rational decision to make happens to be the same (even if the rationale for making that decision is also the same). That is not the way standard decision theory works. It is not the way normal human thought works.

Comment by Radford Neal (radford-neal) on The Beauty and the Prince · 2018-06-27T02:12:07.074Z · LW · GW

What makes you think that you are "Chris Leong"?

Anyway, to the extent that this approach works, it works just as well for Beauty. Beauty has unique experiences all the time. You (or more importantly, Beauty herself) can identify Beauty-at-any-moment by what her recent thoughts and experiences have been, which are of course different on Monday and Tuesday (if she is awake then). There is no difficulty in applying standard probability and decision theory.

At least there's no problem if you are solving the usual Sleeping Beauty problem. I suspect that you are simply refusing to solve this problem, and instead are insisting on solving only a different problem. You're not saying exactly what that problem is, but it seems to involve something like Beauty having exactly the same experiences on Monday as on Tuesday, which is of course impossible for any real human.

Comment by Radford Neal (radford-neal) on The Beauty and the Prince · 2018-06-27T01:03:47.594Z · LW · GW

So in every situation in which someone asks you the same question twice, standard probability and decision theory doesn't apply? Seems rather sweeping to me. Or it's only a problem if you don't happen to remember that they asked that question before? Still seems like it would rule out numerous real-life situations where in fact nobody thinks there is any problem whatever in using standard probability and decision theory.

There is one standard form of probability theory and one standard form of decision theory. If you need a "trivial modification" of your decision theory to justify assigning a probability of 1/2 rather than 1/3 to some event, then you are not using standard probability and decision theory. I need only a "trivial modification" of the standard mapping from colour names to wavelengths to justify saying the sky is green.

Comment by Radford Neal (radford-neal) on The Beauty and the Prince · 2018-06-27T00:28:50.834Z · LW · GW

Right. I see no need to extend standard probability, because the mildly fantastic aspect of Sleeping Beauty does not take it outside the realm of standard probability theory and its applications.

Note that all actual applications of probability and decision theory involve "indexicals", since whenever I make a decision (often based on probabilities) I am concerned with the effect this decision will have on me, or on things I value. Note all the uses of "I" and "me". They occur in every application of probability and decision theory that I actually care about. If the occurrence of such indexicals was generally problematic, probability theory would be of no use to me (or anyone).

Comment by Radford Neal (radford-neal) on The Beauty and the Prince · 2018-06-27T00:22:29.560Z · LW · GW

But nothing in the specification of the Sleeping Beauty problem justifies treating it that way. Beauty is an ordinary human being who happens to have forgotten some things. If Beauty makes two betting decisions at different times, they are separate decisions, which are not necessarily the same - though it's likely they will be the same if Beauty has no rational basis for making different decisions at those two times. There is a standard way of using probabilities to make decisions, which produces the decisions that everyone seems to agree are correct only if Beauty assigns probability 1/3 to the coin landing Heads.

You could say that you're not going to use standard decision theory, and therefore are going to assign a different probability to Heads, but that's just playing with words - like saying the sky is green, not blue, because you personally have a different scheme of colour names from everyone else.

Comment by Radford Neal (radford-neal) on The Beauty and the Prince · 2018-06-26T23:12:44.439Z · LW · GW

But if we're talking about an ordinary Sleeping Beauty problem, there are no repeats - no multiple instances of Beauty with exactly the same memories. Whatever betting scheme may have been defined, when Beauty decides what to bet, her decision is made at a single moment in time, and applies only for that time, affecting her payoff according to whatever the rules of the betting scheme may be. She is allowed to make a different decision at a different time (though of course she may in fact make the same decision), and again that will affect her payoff (or not) according to the rules of the scheme. There is no scope for any unusual relationship between probability and betting odds.

Comment by Radford Neal (radford-neal) on The Beauty and the Prince · 2018-06-26T22:49:24.688Z · LW · GW

My impression is that they have in mind something more than just looking at betting odds (for some unmentioned bet) rather than probabilities. Probability as normally conceived provides a basis for betting choices, using ordinary decision theory, with any sort of betting setup. In Sleeping Beauty with real people, there are no issues that could possibly make ordinary decision theory inapplicable. But it seems the OP has something strange in mind...

Comment by Radford Neal (radford-neal) on The Beauty and the Prince · 2018-06-26T15:09:22.617Z · LW · GW

Your argument at that link is interesting, but I can see why Halfers would just say it's a different problem.

For Beauty and the Prince, I start with a version where Beauty and the Prince can talk to each other, which obviously isn't the same as the usual Sleeping Beauty problem. Supposing that it's agreed that they should both assess the probability of Heads as 1/3 in this version, we then go on to a version where Beauty can see the Prince, but not talk with him. But she knows perfectly well what he would say anyway, so does that matter? And if we then put a curtain between Beauty and the Prince, so she can't see him, though she knows he is there, does that change anything? If we move the Prince to a different room a thousand miles away, would that change things? Finally, does getting rid of the Prince altogether matter?

If none of these steps from Beauty and the Prince to the usual Sleeping Beauty matters, then the answers should be the same. So Halfers would have to claim that one or more steps does matter (or that the answer is 1/2 for the full Beauty and the Prince problem, but I see that as less likely). Perhaps they will claim one of these steps matters, but I see problems with this. For instance, if a Halfer thinks getting rid of the Prince altogether is different from him being in a room a thousand miles away, it seems that they would be committed to the 1/2 answer being sensitive to all sorts of details of the world that one would normally consider irrelevant (and which are assumed irrelevant in the usual problem statement).

Comment by Radford Neal (radford-neal) on The Beauty and the Prince · 2018-06-26T14:38:43.654Z · LW · GW

I don't understand your argument. What does it mean for a situation to "count"?

I'm Beauty. I'm a real person. I've woken up. I can see the Prince sitting over there, though if you like, you can suppose that we can't talk. The Prince is also a real person. I'm interested in the probability that the result of a flip of an actual, real coin is Heads.

How does whether something "counts" or not have anything to do with this question?

Comment by Radford Neal (radford-neal) on Sleeping Beauty Not Resolved · 2018-06-21T13:47:08.720Z · LW · GW

a) You know, it has not actually been demonstrated that human consciousness can be mimicked by a Turing-equivalent computer. In any case, the only role of mentioning this in your argument seems to be to push your thinking away from Beauty as a human towards a more abstract notion of what the problem is, in which you can more easily engage in reasoning that would be obviously fallacious if your thoughts were anchored in reality.

b) Halfer reasoning is invalid, so it's difficult to say how this invalid reasoning would be applied in the context of this decision problem. But if one takes the view that probabilities do not depend on what decision problem they will be used for, it isn't possible for possibilities 5) and 6) to have probability 1/4 while possibilities 3) and 4) have probability zero. One can imagine, for example, that Beauty is told about the balls from the beginning, but is told about the reward for guessing correctly, and how the balls play a role in determining that reward, only later. Should she change her probabilities for the six possibilities simply because she has been told about this reward scheme? I suspect your answer will be yes, but that is simply absurd. It is totally contrary to normal reasoning, and if applied to practical problems would be disastrous. Remember! Beauty is human, not a computer program.

c) You are still refusing to approach the Sailor's Child problem as one about real people, despite the fact that the problem has been deliberately designed so that it has no fantastic aspects and could indeed be about real people, as I have emphasized again and again. Suppose the child is considering searching for their possible sibling, but wants to know the probability that the sibling exists before deciding to spend lots of money on this search. The child consults you regarding what the probability of their having a sibling is. Do you really start by asking, "what process did your mother use in deciding what name to give you"? The question is obviously of no relevance whatsoever. It is also obvious that any philosophical debates about indexicals in probability statements are irrelevant - one way or another, people solve probability problems every day without being hamstrung by this issue. There is a real person standing in front of you asking "what is the probability that I have a sibling". The answer to this question is 2/3. There is no doubt about this answer. It is correct. Really. That is the answer.

Comment by Radford Neal (radford-neal) on Sleeping Beauty Not Resolved · 2018-06-21T03:23:54.633Z · LW · GW

a) But Beauty is actually a human being. If your argument depends on replacing Beauty by a computer program, then it does not apply to the usual Sleeping Beauty problem. Why are you so reluctant to actually address the usual, only-mildly-fantastic Sleeping Beauty problem?

In any case, why is it relevant that she has the same expected payoff for both interviews (which will indeed likely be the case, since she is likely to make the same decision)? Lots of people make various decisions at various times that happen to have the same expected payoff. That doesn't magically make these several decisions be actually one decision.

b) If I understand your setup, if the coin lands Heads, Beauty gets one dollar if she correctly guesses on Monday, which is the only day she is woken. If the coin lands Tails, a ball is drawn from a bag with equal numbers of balls labeled "M" and "T", and she gets a dollar if she makes a correct guess on the day corresponding to the ball drawn, with her guess the other day being ignored. For simplicity, suppose that the ball is drawn (and then ignored) even if the coin lands Heads. There are then six possible situations of coin/ball/day when Beauty is considering her decision:

1) H M Monday

2) H T Monday

3) T M Monday

4) T T Monday

5) T M Tuesday

6) T T Tuesday

If Beauty is a Thirder, she considers all of these to be equally likely (probability 1/6 for each). In situations 4 and 5, her action has no effect, so we can ignore these in deciding on the best action. In situations 1 and 2, guessing Heads results in a dollar reward. In situations 3 and 6, guessing Tails results in a dollar reward. So she is indifferent to guessing Heads or Tails.
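The six-situation bookkeeping can be written out explicitly (a sketch I've added of the enumeration above, with a flag for whether today's guess counts toward the reward):

```python
from fractions import Fraction

# Each entry: (coin, ball, day, guess_counts_today)
situations = [
    ('H', 'M', 'Mon', True),   # 1) Heads: the Monday guess counts
    ('H', 'T', 'Mon', True),   # 2) ball is irrelevant when Heads
    ('T', 'M', 'Mon', True),   # 3) ball M and it's Monday: counts
    ('T', 'T', 'Mon', False),  # 4) ball T but it's Monday: ignored
    ('T', 'M', 'Tue', False),  # 5) ball M but it's Tuesday: ignored
    ('T', 'T', 'Tue', True),   # 6) ball T and it's Tuesday: counts
]

p = Fraction(1, 6)  # Thirder probability for each situation

def expected_reward(guess):
    """Expected dollar reward of guessing `guess` at this awakening."""
    return sum(p for coin, ball, day, counts in situations
               if counts and coin == guess)

print(expected_reward('H'), expected_reward('T'))  # 1/3 and 1/3
```

Guessing Heads pays off in situations 1 and 2, guessing Tails in situations 3 and 6; with equal probabilities of 1/6 each, the expected rewards are both 1/3, so the Thirder Beauty is indifferent.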

c) Really, can you actually suppose that in the Sailor's Child problem, which is explicitly designed to be a problem that could actually occur in real life, the child has not been given a name? And if so, do you also think that if the child gets cancer, as in the previous discussion, they should refuse chemotherapy on the grounds that since their mother did not give them a name, they are unable to escape the inapplicability of probability theory to statements with indexicals? I'm starting to find it hard to believe that you are actually trying to understand this problem.

Comment by Radford Neal (radford-neal) on Sleeping Beauty Not Resolved · 2018-06-21T02:07:04.493Z · LW · GW

Let's suppose you can buy a coupon that pays $1 if the coin comes up heads and $0 otherwise. Generally, the fair price is p, where p is the probability of heads. However, suppose there's a bug in the system and you will get charged twice if the coin comes up tails.

In the scenarios that Ata describes, Beauty simply gets paid for guessing correctly. She does not have to pay anything. More generally, in the usual Sleeping Beauty problem where the only fantastic feature is memory erasure, every decision that Beauty takes is a decision for one point in time only. If she is woken twice, she makes two separate decisions. She likely makes the same decision each time, since she has no rational basis for deciding differently, but they are nevertheless separate decisions, each of which may or may not result in a payoff. There is no need to combine these decisions into one decision, in which Beauty is deciding for more than one point in time. That just introduces totally unnecessary confusion, as well as being contrary to what actually happens.

The easiest way to do this in the standard sleeping beauty is to randomly choose only one interview to "count" in the case where there are multiple interviews.

That's close to Ata's second scenario, in which Ata incorrectly concludes that Beauty should assign probability 1/2 to Heads. It is of course a different decision problem than when payoffs are added for the two days, but the correct result is again obtained when Beauty considers the probability of Heads to be 1/3, if she applies decision theory correctly. The choice of decision problem has no effect on how the probabilities should be calculated.

If you are the patient, we can remove the indexical by asking about whether Radford Neal will survive.

Yes indeed. And this can also be done for the Sailor's Child problem, giving the result that the probability of Heads (no sibling) is 1/3.

Comment by Radford Neal (radford-neal) on Sleeping Beauty Not Resolved · 2018-06-21T00:46:34.025Z · LW · GW

But the fact that she is interviewed twice is of no relevance to her calculations regarding what to guess in one of the interviews, since her payoffs from guessing in the two interviews are simply added together. The decision problems for the two interviews can be solved separately; there is no interaction. One should not rescale anything according to the number of repetitions.

When she is making a decision, it is either Monday or Tuesday, even though she doesn't know which, and even though she doesn't know whether she will be interviewed once or twice. There is nothing subtle going on here. It is no different from anybody else making a decision when they don't know what day of the week it is, and when they aren't sure whether they will face another similar decision sometime in the future, and when they may have forgotten whether or not they made a similar decision sometime in the past. Not knowing the day of the week, not remembering exactly what you did in the past, and not knowing what you will do in the future are totally normal human experiences, which are handled perfectly well by standard reasoning processes.

The probabilities I assign to various possibilities on one day when added to the probabilities I assign to various possibilities on another day certainly do not have to add up to one. Indeed, they have to add up to two.

The event of Beauty being woken on Monday after the coin lands Tails and the event of Beauty being woken on Tuesday after the coin lands Tails can certainly both occur. It's totally typical in probability problems that more than one event occurs. This is of course handled with no problem in the formalism of probability.

If the occurrence of an indexical in a probability problem makes standard probability theory inapplicable, then it is inapplicable to virtually all real problems. Consider a doctor advising a cancer patient. The doctor tells the patient that if they receive no treatment, their probability of survival is 10%, but if they undergo chemotherapy, their probability of survival is 90%, although there will be some moderately unpleasant side effects. The patient reasons as follows: Those may be valid probability statements from the doctor's point of view, but from MY point of view they are invalid, since I'm interested in the probability that I will survive, and that's a statement with an indexical, for which probability theory is inapplicable. So I might as well decline the chemotherapy and avoid its unpleasant side effects.

In the Sailor's Child problem, there is no doubt that if the child consulted a probabilist regarding the chances that they have a sibling, the probabilist would advise them that the probability of them having a sibling is 2/3. Are you saying that they should ignore this advice, since once they interpret it as being about THEM it is a statement with an indexical?

Comment by Radford Neal (radford-neal) on Sleeping Beauty Not Resolved · 2018-06-20T21:37:40.593Z · LW · GW

Ah! I see I misread what you wrote. As you point out, it is implausible in real life that the set of possible experiences on Monday is exactly the same as the set of possible experiences on Tuesday, or at least it's implausible that the probability distributions over possible experiences on Monday and on Tuesday are exactly the same. I think it would be fine to assume for a thought experiment that they are the same, however. The reason it would be fine is that you could also not assume they are the same, but just that they are very similar, which is indeed plausible, and the result would be that at most Beauty will obtain some small amount of information about whether it is Monday or Tuesday from what her experiences are, which will change her probability of the coin having landed Heads by only a small amount. Similarly, we don't have to assume PERFECT memory erasure. And we don't have to assume (as we usually do) that Beauty has exactly ZERO probability of dying after Monday and before she might have been woken on Tuesday. Etc, etc.

Comment by Radford Neal (radford-neal) on Sleeping Beauty Not Resolved · 2018-06-20T16:33:48.383Z · LW · GW

Each time Beauty guesses, it is either Monday or Tuesday. She doesn't know for sure which, but that's only because she has decided to follow the rules of the game. If instead she takes out her ax, smashes a hole in the wall of her room, goes outside, and asks a passerby what day of the week it is, she will find out whether it is Monday or Tuesday. Ordinary views about the reality of the physical world say she should regard it as being either Monday or Tuesday regardless of whether she actually knows which it is. For each decision, there are no "repeats". She either wins a dollar as a result of that decision or she does not.

This should all be entirely obvious. That it is not obvious to you indicates that you are insisting on addressing only some fantastic version of the problem, in which it can somehow be both Monday and Tuesday at the same time, or something like that. Why are you so reluctant to figure out the answer to the usual, only mildly-fantastic version? Don't you think that might be of some interest?

Similarly, you seem to be inexplicably reluctant to admit that the answer for the Sailor's Child problem is 1/3. Really, unless I've just made some silly mistake in calculation (which I highly doubt), the answer is 1/3. Your views regarding indexicals are not relevant. The Sailor's Child problem is of the same sort as are solved every day in numerous practical applications of probability. The standard tools apply. They give the answer 1/3.

Comment by Radford Neal (radford-neal) on Sleeping Beauty Not Resolved · 2018-06-20T15:45:53.806Z · LW · GW

Well, I guess I'll have to wait for the details, but off-hand it doesn't seem that this will work. If action A is "have another child", and the issue is that you don't want to do that if the child is going to die if the Earth is destroyed soon in a cataclysm, then the action A is one that can be taken by a wide variety of organisms past and present going back hundreds of millions of years. But many of these you would probably not regard as having an appropriate level of sentience, and some of them that you might regard as sentient seem so different from humans that including them in the reference class seems bizarre. Any sort of line drawn will necessarily be vague, leading to vagueness in probabilities, perhaps by factors of ten or more.

FNC = Full Non-indexical Conditioning, the method I advocate in my paper.

Comment by Radford Neal (radford-neal) on Sleeping Beauty Not Resolved · 2018-06-20T14:29:41.759Z · LW · GW

Well, I think the whole "reference class" thing is a mistake. By using FNC, one can see that all non-fantastical problems of everyday life that might appear to involve selection effects for which a "reference class" is needed can in fact be solved correctly using standard probability theory, if one doesn't ignore any evidence. So it's only the fantastical problems where they might appear useful. But given the fatal flaw that the exact reference class matters, but there is no basis for choosing a particular reference class, the whole concept is of no use for fantastical problems either.

Comment by Radford Neal (radford-neal) on Sleeping Beauty Not Resolved · 2018-06-20T14:19:37.787Z · LW · GW

You write: My position is very similar to Ata's. I don't believe that the term "probability" is completely unambiguous once we start including weird scenarios that fall outside the scope which standard probability was intended to address.

The usual Sleeping Beauty problem is only mildly fantastic, and does NOT fall outside the scope which standard probability theory addresses. Ata argues that probabilities are not meaningful if they cannot be used for a decision problem, which I agree with. But Ata's argument that Sleeping Beauty is a situation where this is an issue seems to be based on a simple mistake.

Ata gives two decision scenarios, the first being

Each interview consists of Sleeping Beauty guessing whether the coin came up heads or tails, and being given a dollar if she was correct. After the experiment, she will keep all of her aggregate winnings.

In this situation, Beauty makes the correct decision if she gives probability 1/3 to Heads, and hence guesses Tails. (By making the payoff different for correct guesses of Heads vs. Tails, it would be possible to set up scenarios that make clear exactly what probability of Heads Beauty should be using.)

The second scenario is

Each interview consists of Sleeping Beauty guessing whether the coin came up heads or tails. After the experiment, she will be given a dollar if she was correct on Monday.

In this scenario, it makes no difference what her guess is, which Ata says corresponds to a probability of 1/2 for Heads. But this is simply a mistake. To be indifferent regarding what to guess in this scenario, Beauty needs to assign a probability of 1/3 to Heads. She will then see three equally likely possibilities:

Heads and it's Monday

Tails and it's Monday

Tails and it's Tuesday

The differences in payoff for guessing Heads vs. Tails for these possibilities are +1, -1, and 0. Taking the expectation with respect to the equal probabilities for each gives 0, so Beauty is indifferent. In contrast, if Beauty assigns probability 1/2 to Heads, and hence probability 1/4 to each of the other possibilities, the expected difference in payoff is +1/4, so she will prefer to guess Heads. By using different payoffs for correct guesses of Heads vs. Tails, it is easy to construct an "only Monday counts" scenario in which Beauty makes a sub-optimal decision if she assigns any probability other than 1/3 to Heads. See also my comment on an essentially similar variation at https://www.lesswrong.com/posts/u7kSTyiWFHxDXrmQT/sleeping-beauty-resolved
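The two expectations can be checked directly (a sketch I've added of the arithmetic above):

```python
from fractions import Fraction

# Difference in payoff for guessing Heads rather than Tails in the
# three possibilities: Heads/Monday, Tails/Monday, Tails/Tuesday.
diffs = [1, -1, 0]

thirder = [Fraction(1, 3)] * 3                              # 1/3 each
halfer = [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]   # 1/2, 1/4, 1/4

def expected_diff(probs):
    """Expected advantage of guessing Heads over Tails."""
    return sum(pr * d for pr, d in zip(probs, diffs))

print(expected_diff(thirder), expected_diff(halfer))  # 0 and 1/4
```

With probability 1/3 for Heads the expected difference is exactly zero, so Beauty is indifferent; with probability 1/2 it is +1/4, so she would (wrongly) prefer to guess Heads.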

I think that 1/3 probability for Heads in fact leads to the correct decision with any betting scheme in the usual Sleeping Beauty problem. There is no difficulty in applying standard probability and decision theory. 1/3 is simply the correct answer. Other answers are the result of mistakes in reasoning. Perhaps something more strange happens in more fantastic versions of Sleeping Beauty, but when the only fantastic aspect is memory erasure, the answer is quite definitely 1/3.

That the answer is 1/3 is even more clear for the Sailor's Child problem. But you say: I don't agree that the probability for Sailor's child is necessarily 1/3. It depends on whether you think this effect should be handled in the probability or the decision theory. I would like to emphasize again that the Sailor's Child problem is completely non-fantastical. It could really happen. It involves NOTHING that should cause any problems in reasoning out the answer using standard methods. If standard probability and decision theory can't be unambiguously applied to this problem, then they are flawed tools, and the many practical applications of probability and decision theory in fields ranging from statistics to error-correcting codes would be suspect. Saying that the answer to the Sailor's Child problem depends on some sort of subjective choice of whether "this effect should be handled in the probability or the decision theory" is not a reasonable position to take.

Comment by Radford Neal (radford-neal) on Sleeping Beauty Not Resolved · 2018-06-20T12:27:19.873Z · LW · GW

Just to clarify... ageing by one day may well be one reason Beauty's experiences are different on Tuesday than on Monday, but we assume that other variation swamps ageing effects, so that Beauty will not be able to tell that it is Tuesday on this basis.

Comment by Radford Neal (radford-neal) on Sleeping Beauty Not Resolved · 2018-06-20T12:24:02.609Z · LW · GW

No, I very definitely do NOT assume that Beauty's experiences are identical on Monday and Tuesday. I think one should solve the Sleeping Beauty problem with the ONLY fantastical aspect being the memory erasure. In every other respect, Beauty is a normal human being. If you then want to make various fantastic assumptions, go ahead, but thinking about those fantastic versions of the problem without having settled what the answer is in the usual version is unwise.

Comment by Radford Neal (radford-neal) on Sleeping Beauty Not Resolved · 2018-06-20T02:13:10.736Z · LW · GW

I'm still baffled. Why aren't we just talking about what probabilities Beauty assigns to various possibilities, at various times? Beauty has nothing much else to do, she can afford to think about what the probabilities should be every time, not just when she observes 1, 1, 1, or a coin comes up Heads, or whatever. I suspect that you think her "guessing" (why that word, rather than "assigning a probability"?) only some of the time somehow matters, but I don't see how...

I'd rather that Beauty not be a computer program. As my original comment discusses, that is not the usual Sleeping Beauty problem. If your answer depends on Beauty being a program, not a person, then it is not an answer to the usual problem.

Comment by Radford Neal (radford-neal) on Sleeping Beauty Not Resolved · 2018-06-20T01:44:44.491Z · LW · GW

If the memory wipe is the only fantastic aspect of this situation, then when the child you see on waking says they were born on Tuesday (and I assume you know that both children will always say what day they were born on after you wake up), you should consider the probability that the other was also born on Tuesday to be 1/7. The existence of another wakening, which will of course be different in many respects from this one (e.g., the location of dust specks on the mirror in the room), is irrelevant, since you can't remember it (or it hasn't occurred yet).

I've no idea what you mean by "guessing only when you met a boy born on Tuesday". Guessing what? Or do you mean you are precommitted to not thinking about what the probability of both being born on the same day is if the boy doesn't say Tuesday? (Could you even do that? I assume you've heard the joke about the mother who promises a child a cookie if he doesn't think of elephants in the next ten minutes...) I think you may be following some strange version of probability or decision theory that I've never heard of....

Comment by Radford Neal (radford-neal) on Sleeping Beauty Not Resolved · 2018-06-20T01:06:45.390Z · LW · GW

As I said, I'm not sure what point you're trying to make, but if updating from 1/7 to 1/13 on any of the statements "at least one was born on Tuesday", "at least one was born on Wednesday", etc. is part of the point, then I don't see any model of what you are told for which that is the case.

Comment by Radford Neal (radford-neal) on Sleeping Beauty Not Resolved · 2018-06-20T00:03:24.006Z · LW · GW

You write: A man has two sons. What is the chance that both of them are born on the same day if at least one of them is born on a Tuesday?

Most people expect the answer to be 1/7, but the usual answer is that 13/49 possibilities have at least one born on a Tuesday and 1/49 has both born on Tuesday, so the chance is 1/13. Notice that if we had been told, for example, that one of them was born on a Wednesday we would have updated to 1/13 as well. So our odds can always update in the same way on a random piece of information if the possibilities referred to aren't exclusive as Ksvanhorn claims.

I don't know what the purpose of your bringing this up is, but your calculation is in any case incorrect. It is necessary to model the process that leads to our being told "at least one was born on Tuesday", or "at least one was born on Wednesday", etc. The simplest model would be that someone will definitely tell us one of these seven statements, choosing between valid statements with equal probabilities if more than one such statement is true. With this model, the probability of them being born on the same day is 1/7, regardless of what statement you are told. There are 13 possibilities with non-zero probabilities after hearing such a statement, but the possibility in which they are born on the same day has twice the probability of the others, since the others might have resulted in a different statement.

You'll get an answer of 1/13, if you assume a model in which someone precommits to telling you whether the statement "at least one was born on Tuesday" is true or false, before they find out the answer, and they later say it is true.
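The contrast between the two models can be verified by simulation (a sketch of the two announcement processes described above; days are numbered 0-6, with Tuesday arbitrarily set to 2):

```python
# Two models of how we come to hear "at least one was born on Tuesday".
import random

random.seed(1)
TUE = 2
trials = 200_000

# Model 1: an announcer picks, uniformly at random among true statements of
# the form "at least one was born on day d", and happens to name Tuesday.
same1 = told_tue1 = 0
# Model 2: we precommit to asking "was at least one born on Tuesday?"
# and the answer turns out to be yes.
same2 = yes2 = 0

for _ in range(trials):
    a, b = random.randrange(7), random.randrange(7)
    # Model 1: choosing uniformly from [a, b] picks uniformly among the
    # valid statements (if a == b there is only one valid statement).
    if random.choice([a, b]) == TUE:
        told_tue1 += 1
        same1 += (a == b)
    # Model 2: simply condition on the event "at least one born on Tuesday".
    if a == TUE or b == TUE:
        yes2 += 1
        same2 += (a == b)

print(same1 / told_tue1)  # ≈ 1/7  ≈ 0.143
print(same2 / yes2)       # ≈ 1/13 ≈ 0.077
```

Under the announcer model the probability of both being born on the same day stays at 1/7 no matter which day is named, while the precommitted-question model gives 1/13, just as described above.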

Comment by Radford Neal (radford-neal) on Sleeping Beauty Not Resolved · 2018-06-19T23:52:56.111Z · LW · GW

You write: However, let's suppose you picked two sequences 000 and 001 and pre-committed to bet if you saw either of those sequences. Then the odds of betting if tails occurs and the observations are independent would become: 1/4+1/4-1/16 = 7/16. This would lead the probability ratio to become 4/7. Now, the other two probabilities (always different, always the same) remain the same, but the point is that the probability of heads depends on the number of sequences you pre-commit to guess. If you pre-committed to guess for any sequences, then the probability becomes 1/2.

This makes no sense to me. What do you mean by "the odds of betting"? Betting on what? And why are we trying to assign probabilities to Beauty making bets? As a rational agent, she usually makes the correct bets, rather than randomly choosing a bet. And whatever Beauty is betting on, what is the setup regarding what happens if she makes different betting decisions on Monday and Tuesday?

Comment by Radford Neal (radford-neal) on Sleeping Beauty Not Resolved · 2018-06-19T23:45:45.151Z · LW · GW

I'll split my comments on this into multiple replies, since they address somewhat unconnected issues.

Here, some meta-comments. First, it is crucial to be clear on what Sleeping Beauty problem is being solved. What I take to be the USUAL problem is one that is only mildly fantastic - supposing that there is a perfect (or at least very good) memory erasing drug, that ensures that if Beauty is woken on Tuesday, she will not have memories of a Monday wakening that would allow her to deduce that it must be Tuesday. That is the ONLY fantastic aspect in this version of the problem. Beauty is otherwise a normal human, who has normal human experiences whenever she is awake, which include a huge number of bits of both sensory information and internal perceptions of her state of mind, which are overwhelmingly unlikely to be the same for a Tuesday awakening as for a Monday awakening. Furthermore, although Beauty is assumed to be a normally rational and intelligent person, there is no guarantee that she will make the correct decision in all circumstances, and in particular there is at least some small probability that she will decide differently on Monday and Tuesday, despite there being no rational grounds for making different decisions. When she makes a decision, she is NOT deciding for both days, only one.

Some people may be interested in a more fantastic version of the problem, in which Beauty's experiences are somehow guaranteed to be identical on Monday and Tuesday, or in which she only gets two bits of sensory input on either day. But those who are interested in these versions should FIRST be interested in what the answer is to the usual version. If the answer is different for the more fantastic versions, that is surely of central importance when trying to learn anything from such fantastic thought experiments. And for the usual, only-mildly-fantastic version we should be able to reach consensus on the answer (and the answer's justification), since it is only slightly divorced from the non-fantastic probability problems that people really do solve in everyday life (correctly, if they are competent).

As you know, I think the answer to the usual Sleeping Beauty problem is that the probability of Heads is 1/3. Answers to various decision problems then follow from this (I don't see anything unusual about how decision theory relates to probability here, though you do have to be careful not to make mistakes). There are three ways I might be wrong about this. One is that the probability is not 1/3. Another is that the probability is 1/3, but my argument for this answer is not valid. The third is that the probability is 1/3, but normal decision theory based on this probability is invalid (contrary to my belief).

In my paper (see http://www.cs.utoronto.ca/~radford/anth.abstract.html for the partially revised version), I offer multiple arguments for 1/3, and for the decisions that would normally follow from that, some of which do not directly relate to my FNC justification for 1/3. So separating "1/3" from "the FNC argument for 1/3" seems important.

One auxiliary argument that is related to FNC is my Sailor's Child problem. This is a completely non-fantastical analogue of Sleeping Beauty. I think it is clear that the answer for it is 1/3. Do you agree that the answer to the Sailor's Child problem is 1/3? Do you agree that it is a valid analogue of the usual Sleeping Beauty problem? If the answer to both of these is "yes", then you should agree that the answer to the usual Sleeping Beauty problem is 1/3, without any equivocation about it all depending on how we choose to extend probability theory, or whatever.

Finally, please note that in your post you attribute various beliefs and arguments to me that I do not myself always recognize as mine.

Comment by Radford Neal (radford-neal) on Anthropics made easy? · 2018-06-15T23:51:25.860Z · LW · GW

I agree that the possibility of serious but less than catastrophic effects renders the issue here moot for many problems (which I think includes nuclear war). I tried to make the interstellar planet example one where the issue is real - the number of such planets seems to me to be unrelated to how many asteroids are in the solar system, and might collide with less-catastrophic effects (or at least we could suppose so), whereas even a glancing collision with a planet-sized object would wipe out humanity. However, I may have failed with the mutated insect example, since one can easily imagine less catastrophic mutations.

I'm unclear on what your position is regarding such catastrophes, though. Something that quickly kills me seems like the most plausible situation where an argument regarding selection effects might be valid. But you seem to have in mind things that kill me more slowly as well, taking long enough for me to have lots of thoughts after realizing that I'm doomed. And you also seem to have in mind things that would have wiped out humanity before I was born, which seems like a different sort of thing altogether to me.

Comment by Radford Neal (radford-neal) on Anthropics made easy? · 2018-06-15T13:15:01.361Z · LW · GW

A problem with this line of reasoning is that it would apply to many other matters too. It's thought that various planet-sized objects are wandering in interstellar space, but I think no one has a clear idea how many there are. One of them could zip into the solar system, hit earth, and destroy all life on earth. Do you think that the fact that this hasn't happened for a few billion years is NO EVIDENCE AT ALL that the probability of it happening in any given year is low? The same question could be asked about many other possible catastrophic events, for some of which there might be some action we could take to mitigate the problem (for instance, a mutation making some species of insect become highly aggressive, and highly prolific, killing off all mammals, for which stockpiling DDT might be prudent). Do you think we should devote large amounts of resources to preventing such eventualities, even though ordinary reasoning would seem to indicate that they are very unlikely?