Probability is a model, frequency is an observation: Why both halfers and thirders are correct in the Sleeping Beauty problem.

post by Shmi (shminux) · 2018-07-12T06:52:19.440Z · LW · GW · 34 comments


This post was inspired by yet another post talking about the Sleeping Beauty problem, Repeated (and improved) Sleeping Beauty problem [? · GW]. It is also related to Probability is in the Mind [LW · GW].

(Updated: As pointed out in the comments, If a tree falls on Sleeping Beauty... [LW · GW] is an old post recasting this as a decision problem, where the optimal action depends on the question asked: "just ask for decisions and leave probabilities out of it to whatever extent possible".)

It is very common in physics and other sciences that different observers disagree about the value of a quantity they both measure. For example, to a person in a moving car, the car is stationary relative to them (v = 0), yet to a person outside, the car is moving (v ≠ 0). Only a select few measurable quantities are truly observer-independent. The speed of light is one of the better-known examples; the electron charge is another. Some examples of non-invariant quantities in physics are quite surprising. Does a uniformly accelerating electric charge radiate? Do black holes evaporate? The answer depends on the observer! Probability is one of those non-invariant quantities.

In fact, the situation is worse than that. Probability is not directly measurable.

Probability is not a feature of the world. It is an observer-dependent model of the world. It predicts the observed frequency of some event, which is also observer-dependent.

When you say that a coin is unbiased, you model what will happen when the coin is thrown multiple times and predict that, given a large number of throws, the ratio of heads to tails approaches 1. This might or might not be a sufficiently accurate model of the world you find yourself in. Like any model, it is only an approximation, and it can miss something essential about your future experiences. For example, someone might try to intentionally distract you every time the coin lands heads, and sometimes they will succeed, so your personal counts of heads and tails will be a bit off, and the ratio of heads to tails will be statistically significantly below 1 even after many, many throws. Someone who had not been distracted would record a different ratio. So you would conclude that the coin is biased. Who is right, you or them?

If you remember that frequency is not an invariant quantity, that it depends on the observer, then the obvious answer "they are right, I was maliciously distracted and my counts are off" is not a useful one. If you don't know that you have been distracted, your model of the coin as biased toward tails is actually a better model of the world you find yourself in.
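Here is a minimal simulation sketch of the distracted observer; the 20% chance of being successfully distracted on a heads throw is an arbitrary assumption for illustration:

```python
import random

random.seed(0)
throws = 100_000
p_distracted = 0.2   # assumed chance that a heads throw goes unrecorded

heads_recorded = tails_recorded = 0
for _ in range(throws):
    heads = random.random() < 0.5             # a fair coin
    if heads and random.random() < p_distracted:
        continue                              # the distracted observer misses this throw
    if heads:
        heads_recorded += 1
    else:
        tails_recorded += 1

# The distracted observer's heads:tails ratio settles near 0.8, not 1.
print(heads_recorded / tails_recorded)
```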

The Sleeping Beauty problem is of that kind: because she is woken up twice per throw that comes up tails, but only once per throw that comes up heads, she will see twice as many tails as heads on average. So for her this is equivalent to the coin being biased, with a heads:tails ratio of 1:2 (the Thirder position). If she is told the details of the experiment, she can then conclude that for the person throwing the coin it is expected to come up heads 50% of the time (the Halfer position), because it is a fair coin. But for Sleeping Beauty herself this fair coin is observed to come up heads half as often as tails; that's just how this specific fair coin behaves in her universe.
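A minimal simulation sketch of this frequency claim, assuming the standard protocol (one awakening on heads, two on tails):

```python
import random

random.seed(1)
throws = 100_000
heads_awakenings = tails_awakenings = 0

for _ in range(throws):
    if random.random() < 0.5:     # heads: Beauty is woken once
        heads_awakenings += 1
    else:                         # tails: Beauty is woken twice
        tails_awakenings += 2

# The experimenter's view: heads on about half of the throws.
print(heads_awakenings / throws)                                  # ~0.5
# Beauty's view: heads on about a third of her awakenings.
print(heads_awakenings / (heads_awakenings + tails_awakenings))   # ~0.33
```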

This is because probability is not an invariant, objective statement about the world; it is an observer-dependent model of the world, predicting what a given observer is likely to experience. The question "but what is the actual probability of the coin landing heads?" is no more meaningful than asking "but what is the actual speed of the car?" — it all depends on what kind of observations one makes.

34 comments

Comments sorted by top scores.

comment by Charlie Steiner · 2018-07-13T01:29:59.170Z · LW(p) · GW(p)

(Edit: Apologies for how harsh this sounds. No meanness was meant, though it is basically all criticism, which is hard to do collegially on the internet.)

I disagree with this entire genre of posts. I think Probability is Subjectively Objective [LW · GW] (when we aren't worrying about logical uncertainty). Probabilities are objective in the sense that they express how what you know can be used to predict what you don't know - a logic problem that you can compute the answer to in your brain, but not the sort of thing you can change the answer to by wishing.

There's an analogy with the idea of the selfish gene, where you realize that genes don't become widespread by only caring about the reproduction of the organism they're in at the time; a gene just gets evaluated on whether it increases the number of copies of itself by any means. Probabilities should be scored from the perspective of a state of information - a particular state of information implies some probability, and it's the probability that best reflects the state of the unknown that is compatible with that state of information.

So since I start with thoughts like these, I feel like you rather miss the point. Yes, probability depends on who calculates it (or rather, on what they know), but we're clearly talking about Sleeping Beauty. Yes, probability is not directly measurable, but neither is the square of the number of apples on a table - it's a function of sense data and knowledge. Even framing this as if we were worrying about the bias of the coin feels like you're not quite exploiting the knowledge that sometimes the "bias" or "fairness" is in the state of knowledge of the calculator.

comment by Chris_Leong · 2018-07-12T07:46:25.571Z · LW(p) · GW(p)

I think it's worth linking to If a tree falls on Sleeping Beauty [LW · GW], as it argues along much the same lines.

Replies from: shminux
comment by Shmi (shminux) · 2018-07-12T15:09:38.272Z · LW(p) · GW(p)

Yes, thank you for the reminder, been years since I read this post.

comment by Peter Gerdes (peter-gerdes) · 2018-07-15T03:12:24.123Z · LW(p) · GW(p)

While I agree with your conclusion, in some sense you are using the wrong notion of probability. The people who feel there is a right answer to the Sleeping Beauty case aren't talking about the kind of formally defined count over situations in some formal model. If that's the only notion of probability, then you can't even talk about the probabilities of different physical theories being true.

The people who think there is a Sleeping Beauty paradox believe there is something like the rational credence one should have in a proposition given your evidence. If you believe this, then you have a question to answer: what kind of credence should Sleeping Beauty have in the coin landing heads, given that she has the evidence of remembering being enrolled in the experiment and waking up this morning?

In my analysis of the issue I ultimately come to essentially the same conclusion that you do (it's an ill-posed problem) but an important feature of this account is that it requires **denying** that there is a well-defined notion that we refer to when we talk about rational credence in a belief.

This is a conclusion that I feel many rationalists will have a hard time swallowing. Not the abstract view that we should shut up about probability and just look at decisions. Rather, the conclusion that we can't insist that the person who believes (despite strong evidence to the contrary) that it's super likely that god exists is being somehow irrational, because there isn't even necessarily a common notion of what kind of possible outcomes count for making decisions. E.g., if they only value being correct in worlds where there is a deity, they get an equally valid notion of rational credence which makes their belief perfectly rational.

comment by Gordon Seidoh Worley (gworley) · 2018-07-12T17:57:54.587Z · LW(p) · GW(p)

This is a great reminder and one that people continue to get tripped up on. We are probably looking at a case where people mistake what they know (the ontological) for what is (the ontic). Or, put another way, they mistake ontology for metaphysics.

Replies from: avturchin, shminux
comment by avturchin · 2018-07-13T09:46:28.763Z · LW(p) · GW(p)

One important problem with probabilities appears when they are applied to one-time future events, which thus have no frequency. The most important examples are the appearance of AI and global catastrophe. As such events have no frequency, their probability must mean something else.

Replies from: shminux, SaidAchmiz
comment by Shmi (shminux) · 2018-07-14T06:34:25.430Z · LW(p) · GW(p)

The frequencies do not necessarily have to be actual; they can be calculated over simulated worlds as well. As long as the observer is well defined.

comment by Said Achmiz (SaidAchmiz) · 2018-07-13T17:40:38.402Z · LW(p) · GW(p)

What reasons do we have to believe that probabilities of one-time events mean anything?

Replies from: avturchin, jimrandomh
comment by avturchin · 2018-07-13T21:59:12.992Z · LW(p) · GW(p)

We need something like probability to make informed decisions about one-time events, like the probability of FAI vs. UFAI.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-07-13T23:04:52.655Z · LW(p) · GW(p)

That… doesn’t actually answer the question—especially since it begs the question!

comment by jimrandomh · 2018-07-14T06:39:56.665Z · LW(p) · GW(p)

This objection applies equally to all models, regardless of whether they involve probabilities or not; a model may fail to accurately represent the thing that it's trying to represent. But this doesn't make it meaningless.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-07-14T15:46:42.307Z · LW(p) · GW(p)

Yes, my question is: in the case of “probabilities of one-time events”, what is the thing that the model is trying to represent?

Replies from: jimrandomh
comment by jimrandomh · 2018-07-15T16:53:47.215Z · LW(p) · GW(p)
what is the thing that the model is trying to represent?

There are a bunch of things "probability" could be representing in a model for a one-shot event, which (with the exception of a few notable corner cases) all imply each other and fit into mathematical models the same way. We might mean:

* The frequency that will be observed if this one-shot situation is transformed into an iterated one

* The odds that will give this the best expected value if we're prompted to bet under a proper scoring rule

* The number that will best express our uncertainty to someone calibrated on iterated events

* The fraction of iterated events which would combine to produce that particular event

...or a number of other things. All of these have one thing in common: they share a mathematical structure that satisfies the axioms of Cox's theorem, which means that we can calculate and combine one-shot probabilities, including one-shot probabilities with different interpretations, without needing to be precise about what we mean philosophically. It's only in corner cases, like Sleeping Beauty, where the isomorphism between definitions breaks down and we have to stop using the word and be more precise.
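As a concrete illustration of the proper-scoring-rule bullet, here is a minimal sketch using the Brier score (my choice of rule, not specified above): reporting the true frequency minimizes the expected penalty.

```python
# Expected penalty under the Brier score for reporting probability q
# when the event occurs with true frequency p:
#   E[(q - outcome)^2] = p * (q - 1)**2 + (1 - p) * q**2,
# which is minimized exactly at q = p.
def expected_brier(q, p):
    return p * (q - 1) ** 2 + (1 - p) * q ** 2

true_p = 0.3
for report in (0.1, 0.3, 0.5, 0.9):
    print(report, round(expected_brier(report, true_p), 3))
# The smallest expected penalty is at report = 0.3, the true frequency.
```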

Replies from: Benito, SaidAchmiz
comment by Ben Pace (Benito) · 2018-07-15T20:59:22.014Z · LW(p) · GW(p)

Thanks, operationalising this in the four different ways and explaining that they have the same mathematical structure helped me understand what's going on with probability a great deal better than I did before.

comment by Said Achmiz (SaidAchmiz) · 2018-07-15T20:03:25.481Z · LW(p) · GW(p)

The odds that will give this the best expected value if we’re prompted to bet under a proper scoring rule

Isn’t this one incoherent unless we already have a notion of probability?

The number that will best express our uncertainty to someone calibrated on iterated events

This seems weirdly indirect. What are the criteria for “best” here?

The fraction of iterated events which would combine to produce that particular event

I’m not sure what you mean. Could you elaborate?

The frequency that will be observed if this one-shot situation is transformed into an iterated one

Is there some principled method for determining a unique (or unique up to some sort of isomorphism) way of transforming a one-shot event into an iterated one?

comment by Shmi (shminux) · 2018-07-14T06:26:52.794Z · LW(p) · GW(p)

Also known as the Mind Projection Fallacy.

comment by Radford Neal (radford-neal) · 2018-07-12T17:30:50.883Z · LW(p) · GW(p)

Probability is meant to be a useful mental construct that helps in making good decisions. There's a standard framework for doing this. If you apply it, you find that Beauty makes good decisions only if she assigns a probability of 1/3 to Heads when she is woken. There is no sense in which 1/2 is the correct answer, unless you choose to redefine what probabilities mean, along with the method of using them to make decisions, which would be nothing but a pointless semantic distraction.

Replies from: shminux, SaidAchmiz, FeepingCreature, Chris_Leong
comment by Shmi (shminux) · 2018-07-12T21:58:46.062Z · LW(p) · GW(p)
There is no sense in which 1/2 is the correct answer

Sure, there is! If the question to the SB is "what heads:tails ratio does the person flipping the coin see?" then the answer is 1:1, not 1:2, and she can provide the correct answer. Or if SB is only asked the original question on Mondays. Each question corresponds to a different observer, and so may result in a different answer.

Replies from: radford-neal
comment by Radford Neal (radford-neal) · 2018-07-12T23:18:50.658Z · LW(p) · GW(p)

The standard question is what probability Beauty should assign to Heads after being woken (on Monday or Tuesday), and not being told what day it is, given that she knows all about the experimental setup. Of course, if you change the setup so that she's asked a question on Monday that she isn't on Tuesday, then she will know what day it is (by whether the question was asked or not) and the answer changes. That isn't an interesting sense in which the answer 1/2 is correct. Neither is it interesting that 1/2 is the answer to the question of what probability the person flipping the coin should assign to Heads, nor to the question of what is seven divided by two minus three...

comment by Said Achmiz (SaidAchmiz) · 2018-07-12T18:41:20.635Z · LW(p) · GW(p)

Doesn’t this depend entirely on what decisions are good?

Like, let’s say that you decide to incentivize Beauty to guess correctly. The way you’re going to do this is as follows: each time Beauty is woken, you ask her how the coin landed. If she guessed right, you give her a prize immediately (the same prize each time; let’s say, $1).

Now let’s leave probabilities out of it and consider only possible scenarios:

Beauty guesses heads, always.

  1. Coin landed heads (and it’s a Monday); Beauty wins $1.
  2. Coin landed tails (and it’s a Monday); Beauty wins $0.
  3. Coin landed tails (and it’s a Tuesday); Beauty wins $0.

Total winnings across all scenarios: $1. Average winnings per experiment iteration, if repeated: $0.50.

Beauty guesses tails, always.

  1. Coin landed heads (and it’s a Monday); Beauty wins $0.
  2. Coin landed tails (and it’s a Monday); Beauty wins $1.
  3. Coin landed tails (and it’s a Tuesday); Beauty wins $1.

Total winnings across all scenarios: $2. Average winnings per experiment iteration, if repeated: $1.

Aha! Beauty wins twice as much money by guessing tails than heads. Clearly, the thirder position is correct.
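A quick simulation sketch of this reward structure (hypothetical code, just reproducing the arithmetic above) gives the same numbers:

```python
import random

random.seed(2)
trials = 100_000

def average_winnings(guess):
    total = 0.0
    for _ in range(trials):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2              # woken once for heads, twice for tails
        correct = (guess == 'heads') == heads
        total += awakenings if correct else 0.0     # $1 per correct answer per awakening
    return total / trials

print(average_winnings('heads'))   # ~0.50 dollars per iteration
print(average_winnings('tails'))   # ~1.00 dollars per iteration
```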

But wait! What if we change the incentive structure? Now, instead of rewarding Beauty on each day, we instead reward her at the end of the experiment, if and only if she guessed right each time she was woken. Let’s consider the scenarios:

Beauty guesses heads, always.

  1. Coin landed heads (and it’s a Monday); Beauty guesses heads (1/1 correct answers), and wins $1 at the end.
  2. Coin landed tails (and it’s a Monday); Beauty guesses heads (0/2 correct answers), and so will now win $0 regardless of what she guesses on Tuesday.
  3. Coin landed tails (and it’s a Tuesday); Beauty guesses heads (but this is now irrelevant).

Total winnings across all scenarios: $1. Average winnings per experiment iteration, if repeated: $0.50.

Beauty guesses tails, always.

  1. Coin landed heads (and it’s a Monday); Beauty guesses tails (0/1 correct answers), and wins nothing.
  2. Coin landed tails (and it’s a Monday); Beauty guesses tails (1/2 correct answers).
  3. Coin landed tails (and it’s a Tuesday); Beauty guesses tails (2/2 correct answers), and wins $1.

Total winnings across all scenarios: $1. Average winnings per experiment iteration, if repeated: $0.50.

Now Beauty wins the same amount of money by guessing heads as by guessing tails. Clearly, the halfer position is correct…?
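The same kind of sketch for the all-or-nothing reward, assuming Beauty answers consistently at every awakening (she cannot tell her awakenings apart):

```python
import random

random.seed(3)
trials = 100_000

def average_winnings(guess):
    total = 0.0
    for _ in range(trials):
        heads = random.random() < 0.5
        # Beauty gives the same fixed answer every time she is woken, so she is
        # paid $1 at the end iff that answer matches the coin.
        total += 1.0 if (guess == 'heads') == heads else 0.0
    return total / trials

print(average_winnings('heads'))   # ~0.50 dollars per iteration
print(average_winnings('tails'))   # ~0.50 dollars per iteration
```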

Replies from: SaidAchmiz, radford-neal
comment by Said Achmiz (SaidAchmiz) · 2018-07-12T18:52:23.259Z · LW(p) · GW(p)

To put it another way: the reason that Beauty should guess tails in my first scenario above, is that we’re rewarding her twice as much for correctly guessing tails than for correctly guessing heads! It’s just exactly the same thing as if I offered you a reward for guessing the outcome of a simple coin flip, and paid you $2 if the coin landed tails (and you guessed right), but only $1 if it landed heads (and you guessed right). Of course you should guess tails!

comment by Radford Neal (radford-neal) · 2018-07-12T21:36:43.958Z · LW(p) · GW(p)

Your second scenario introduces a coordination issue, since Beauty gets nothing if she guesses differently on Monday and Tuesday. I'm still thinking about that.

If you eliminate that issue by saying that only Monday guesses count, or that only the last guess counts, you'll find that Beauty has to assign probability 1/3 to Heads in order to do the right thing by using standard decision theory. The details are in my comment on the post at https://www.lesswrong.com/posts/u7kSTyiWFHxDXrmQT/sleeping-beauty-resolved#aG739iiBci9bChh5D

Or you can say that the payoff for guessing Tails correctly is $0.50 while guessing Heads correctly gives $1.00, so the total payoff is the same from always guessing Heads as from always guessing Tails. In that case, you can see that you get indifference to Heads versus Tails when the probability of Heads is 1/3, by computing the expected return for guessing Heads at one particular time as (1/3) × $1.00, versus the expected return for guessing Tails at one particular time of (2/3) × $0.50. Clearly you don't get indifference if Beauty thinks the probability of Heads is 1/2.
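A minimal sketch of that arithmetic (a hypothetical helper function, not part of any standard library):

```python
# Per-awakening expected return from each guess, under a given credence in Heads
# (payoffs: $1.00 for a correct Heads guess, $0.50 for a correct Tails guess).
def expected_returns(p_heads):
    return {
        'guess heads': p_heads * 1.00,
        'guess tails': (1 - p_heads) * 0.50,
    }

print(expected_returns(1/3))   # thirder: 0.33 vs 0.33 -- indifferent
print(expected_returns(1/2))   # halfer: 0.50 vs 0.25 -- not indifferent
```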

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-07-12T22:22:07.171Z · LW(p) · GW(p)

Your second scenario introduces a coordination issue, since Beauty gets nothing if she guesses differently on Monday and Tuesday.

I guess I’m not sure why this is an “issue”, or what exactly it means to say that it’s a “coordination issue”. The point I am making is that we can say: “if the researchers offer reward structure X, then the answer that makes sense is Y”. For the first reward structure I described, either answer is equally profitable for Beauty. For the second reward structure I described, “tails” is the more profitable answer.

Saying “let’s instead make the reward structure different in this-and-such way” misses the point. For any given reward structure, there is some way for Beauty to answer that maximizes her reward. For a different reward structure, the best answer might be something else. That’s all.

(All of this is described, and even better than I’m doing, in this old post [LW · GW] that is linked from the OP.)

If you eliminate that issue by saying that only Monday guesses count, or that only the last guess counts, you'll find that Beauty has to assign probability 1/3 to Heads in order to do the right thing by using standard decision theory.

Right, this is an example of what I'm saying: change the reward structure, and the profit-maximizing answer may well change. None of this seems to motivate the idea that there's some answer which is simply "correct", in a way that's divorced from some goal you're trying to accomplish, some reason why you care about the answer, etc. (Decision theory without any values/goals/rewards is nonsense, after all!)

Edit: Corrected quoting failure

Replies from: SaidAchmiz, radford-neal
comment by Said Achmiz (SaidAchmiz) · 2018-07-12T22:59:44.145Z · LW(p) · GW(p)

Here’s another reframing of my point—borrowing from one of my favorite essays in the Sequences, “Newcomb’s Problem and Regret of Rationality”, where Eliezer says:

“But,” says the causal decision theorist, “to take only one box, you must somehow believe that your choice can affect whether box B is empty or full—and that’s unreasonable! Omega has already left! It’s physically impossible!”

Unreasonable? I am a rationalist: what do I care about being unreasonable? I don’t have to conform to a particular ritual of cognition. I don’t have to take only box B because I believe my choice affects the box, even though Omega has already left. I can just… take only box B.

Similarly, Sleeping Beauty doesn’t have to answer “tails” because she believes that the probability of tails is 1/3. She can just… answer “tails”. Beauty is not obligated to believe anything whatsoever.

And to quote the post linked in the parent comment:

But in the original problem, when she is asked “What is your credence now for the proposition that our coin landed heads?”, a much better answer than “.5” is “Why do you want to know?”. If she knows how she’s being graded, then there’s an easy correct answer, which isn’t always .5; if not, she will have to do her best to guess what type of answer the experimenters are looking for; and if she’s not being graded at all, then she can say whatever the hell she wants (acceptable answers would include “0.0001,” “3⁄2,” and “purple”).

Replies from: radford-neal
comment by Radford Neal (radford-neal) · 2018-07-12T23:34:36.111Z · LW(p) · GW(p)

The linked post by ata is simply wrong. It presents the scenario where

Each interview consists of Sleeping Beauty guessing whether the coin came up heads or tails. After the experiment, she will be given a dollar if she was correct on Monday.
In this case, she should clearly be indifferent (which you can call “.5 credence” if you’d like, but it seems a bit unnecessary).

But this is not correct. If you work out the result with standard decision theory, you get indifference between guessing Heads or Tails only if Beauty's subjective probability of Heads is 1/3, not 1/2.

You are of course right that anyone can just decide to act, without thinking about probabilities, or decision theory, or moral philosophy, or anything else. But probability and decision theory have proven to be useful in numerous applications, and the Sleeping Beauty problem is about probability, presumably with the goal of clarifying how probability works, so that we can use it in practice with even more confidence. Saying that she could just make a decision without considering probabilities rather misses the point.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-07-12T23:44:23.974Z · LW(p) · GW(p)

If you work out the result with standard decision theory, you get indifference between guessing Heads or Tails only if Beauty’s subjective probability of Heads is 1⁄3, not 1⁄2.

I don’t know about “standard decision theory”, but it seems to me that—in the described case (only Monday’s answer matters)—guessing Heads yields an average of 50¢ at the end, and guessing Tails also yields an average of 50¢ at the end. I don’t see that Beauty has to assign any credences or subjective probabilities to anything in order to deduce this.

As the linked post says, you can call this “0.5 credence”. But, if you don’t want to, you can also not call it “0.5 credence”. You don’t have to call it anything. You can just be indifferent.

You are of course right that anyone can just decide to act, without thinking about probabilities, or decision theory, or moral philosophy, or anything else.

My point is that “just deciding to act” in this case actually gets us the result that we want. Saying that probability and decision theory are “useful” is beside the point, since we already have the answer we actually care about, which is: “what do I [Sleeping Beauty] say to the experimenters in order to maximize my profit?”

Replies from: radford-neal
comment by Radford Neal (radford-neal) · 2018-07-13T00:23:41.230Z · LW(p) · GW(p)

But the thing is, you can't call it "0.5 credence" and have your credence be anything like a normal probability. The Halfer will assign probability 1/2 to Heads and Monday, 1/4 to Tails and Monday, and 1/4 to Tails and Tuesday. Since only the guess on Monday is relevant to the payoff, we can ignore the Tuesday possibility (in which the action taken has no effect on the payoff), and see that a Halfer would have a 2:1 preference for Heads. In contrast, a Thirder would give 1/3 probability to Heads and Monday, 1/3 to Tails and Monday, and 1/3 to Tails and Tuesday. Ignoring Tuesday, they're indifferent between guessing Heads and Tails.

With a slight tweak to payoffs so that Tails are slightly more rewarding, the Halfer will make a definitely wrong decision, while the Thirder will make the right decision.
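A minimal sketch of that comparison, assuming the hypothetical tweak that a correct Tails guess pays $1.02 instead of $1.00:

```python
# Per-awakening expected payoffs when only the Monday guess counts and a correct
# Tails guess pays slightly more ($1.02 vs $1.00).
def expected_payoffs(p_heads_and_monday, p_tails_and_monday):
    return {
        'guess heads': p_heads_and_monday * 1.00,
        'guess tails': p_tails_and_monday * 1.02,
    }

print(expected_payoffs(1/2, 1/4))   # Halfer: 0.50 vs ~0.26 -> prefers Heads (the worse bet)
print(expected_payoffs(1/3, 1/3))   # Thirder: ~0.33 vs ~0.34 -> prefers Tails (the better bet)
```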

comment by Radford Neal (radford-neal) · 2018-07-12T23:09:35.181Z · LW(p) · GW(p)

We agree about what the right actions are for the various reward structures. We can then try to work backwards from what the right action is to what probability Beauty should assign to the coin landing Heads after being wakened, in order that this probability will lead (by standard decision theory) to her taking the action we've decided is the correct one.

For your second scenario, Beauty really has to commit to what to do before the experiment, which means this scheme of working backwards from correct decision to probability of Heads after wakening doesn't seem to work. Guessing either Heads or Tails is equally good, but only if done consistently. Deciding after each wakening without having thought about it beforehand doesn't work well, since with the two possibilities being equally good, Beauty might choose differently on Monday and Tuesday, with bad results. Now, if the problem is tweaked with slightly different rewards for guessing Heads correctly than Tails correctly, we can avoid the situation of both guesses being equally good. But the coordination problem still seems to confuse the issue of how to work backwards to the appropriate probabilities (for me at least).

I think it ought to be the case that, regardless of the reward structure, if you work backwards from correct action to probabilities, you get that Beauty after wakening should give probability 1/3 to Heads. That seems to be what happens for all the reward structures where Beauty can decide what to do each day without having to know what she might do or have done the other day.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-07-12T23:19:43.077Z · LW(p) · GW(p)

We can then try to work backwards from what the right action is to what probability Beauty should assign to the coin landing Heads after being wakened, in order that this probability will lead (by standard decision theory) to her taking the action we’ve decided is the correct one.

Is there some reason why you’re committed to standard (by which you presumably mean, causal—or what?) decision theory, when approaching this question? After all:

For your second scenario, Beauty really has to commit to what to do before the experiment

As I understand it, UDT (or some similar decision theory) is the now-standard solution for such dilemmas.

I think it ought to be the case that, regardless of the reward structure, if you work backwards from correct action to probabilities, you get that Beauty after wakening should give probability 1⁄3 to Heads.

Why, though? More importantly, why does it matter?

It seems to me that all that Beauty needs to know, given that the scenario (one or two awakenings) is chosen by the flip of a fair coin, is that fair coins land heads half the time and tails the other half. I really don’t see any reason why we should insist on there being some single, “objectively correct”, subjective probability assignment over the outcomes, that has to hold true for all formulations of this thought experiment, and/or all other Sleeping-Beauty-esque scenarios, etc.

In other words:

We agree about what the right actions are for the various reward structures.

I am struggling to see why there should be anything more to the matter than this. We all agree what the right actions are and we are all equally quite capable of determining what those right actions are. It seems to me that we’re done.

Replies from: radford-neal
comment by Radford Neal (radford-neal) · 2018-07-12T23:43:46.436Z · LW(p) · GW(p)

A big reason why probability (and belief in general) is useful is that it separates our observations of the world from our decisions. Rather than somehow relating every observation to every decision we might sometime need to make, we instead relate observations to our beliefs, and then use our beliefs when deciding on actions. That's the cognitive architecture that evolution has selected for (excepting some more ancient reflexes), and it seems like a good one.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-07-12T23:50:46.504Z · LW(p) · GW(p)

I don’t really disagree, per se, with this general point, but it seems strange to insist on rejecting an answer we already have, and already know is right, in the service of this broad point. If you want to undertake the project of generalizing and formalizing the cognitive algorithms that led us to the right answer, fine and well, but in no event should that get in the way of clarity w.r.t. the original question.

Again: we know the correct answer (i.e. the correct action for Beauty to take); and we know it differs depending on what reward structure is on offer. The question of whether there is, in some sense, a “right answer” even if there are no rewards at all, seems to me to be even potentially useful or interesting only in the case that said “right answer” does in fact generate all the practical correct answers that we already have. (And then we can ask whether it’s an improvement on whatever algorithm we had used to generate said right answers, etc.)

Replies from: radford-neal
comment by Radford Neal (radford-neal) · 2018-07-13T00:16:22.229Z · LW(p) · GW(p)

Well of course. If we know the right action from other reasoning, then the correct probabilities better lead us to the same action. That was my point about working backwards from actions to see what the correct probabilities are. One of the nice features about probabilities in "normal" situations is that the probabilities do not depend on the reward structure. Instead we have a decision theory that takes the reward structure and probabilities as input and produces actions. It would be nice if the same nice property held in SB-type problems, and so far it seems to me that it does.

I don't think there has ever been much dispute about the right actions for Beauty to take in the SB problem (i.e., everyone agrees about the right bets for Beauty to make, for whatever payoff structure is defined). So if just getting the right answer for the actions was the goal, SB would never have been considered of much interest.

comment by FeepingCreature · 2018-07-12T18:18:16.003Z · LW(p) · GW(p)

Say that the second time you wake her on Monday, you just outright ignore everything she says. Suddenly, because you changed your behavior, her objectively correct belief is 0.5 / 0.5?

The real problem is that the question is undefined - Sleeping Beauty has no goal function. If she had a goal function, she could just choose the probability assignment that maximized the payoff under it. All the handwaving at "standard frameworks" is just another way to say "assignment that maximizes payoff under a broad spread of goals".

Alternate scenario: all three wakings of Sleeping Beauty are actually copies. Monday 2 is deleted after a few minutes. What's the "true" probability then?

comment by Chris_Leong · 2018-07-13T00:08:03.149Z · LW(p) · GW(p)

Well, it comes down to what we mean by probability. Standard probability isn't designed for real agents, but imagines an objective floating observer that can't be killed or put to sleep or duplicated, etc. One approach is to modify probability so that twice as many agents observing a phenomenon doubles the probability. Another is to insist on analysing the situation as an objective floating observer who has the same knowledge as the agent being discussed (except possibly for any indexical knowledge). Again, if this doesn't make sense, I'll cover it in more detail in my upcoming post on Sleeping Beauty, but I'm waiting for a week when someone hasn't already posted on this topic.