Beauty and the Bets

post by Ape in the coat · 2024-03-27T06:17:27.516Z · LW · GW · 23 comments

Contents

  Introduction
  Different Probabilities for Different Betting Schemes?
    Thirder Per Awakening Betting
    Thirder Per Experiment Betting
  Do Halfers Need to Bet on Thirders Odds?
    Halfer Per Awakening Betting
    Halfer Per Experiment Betting
  Betting Odds Are a Poor Proxy For Probabilities
  Utility Instability under Thirdism
  Thirdism Ignores New Evidence
    Technicolor Sleeping Beauty
    Rare Event Sleeping Beauty
  Conclusion

This is the ninth post in my series on Anthropics. The previous one is The Solution to Sleeping Beauty [LW · GW].

Introduction

There are some quite pervasive misconceptions about betting with regard to the Sleeping Beauty problem.

One is that you need to switch between halfer and thirder stances based on the betting scheme proposed. As if learning about a betting scheme is supposed to affect your credence in an event.

Another is that halfers should bet at thirders odds and, therefore, thirdism is vindicated on the grounds of betting. What do halfers even mean by probability of Heads being 1/2 if they bet as if it's 1/3?

In this post we are going to correct them. We will see how to arrive at the correct betting odds from both the thirdist and halfist positions, and why they are the same. We will also explore the core problems with betting arguments as a way to answer probability theory problems and, taking those into account, construct several examples showing the superiority of the correct halfer position in Sleeping Beauty.

Different Probabilities for Different Betting Schemes?

The first misconception has even found its way to the Less Wrong wiki:

If Beauty's bets about the coin get paid out once per experiment, she will do best by acting as if the probability is one half. If the bets get paid out once per awakening, acting as if the probability is one third has the best expected value.

It originates from the fact that there are two different scoring rules [LW · GW]: counting per experiment and per awakening. If we aggregate using the per experiment rule, we get P(Heads) = 1/2 - the probability that the coin is Heads in a random experiment. If we aggregate using the per awakening rule, we get P(Heads) = 1/3 - the probability that the coin is Heads in a random awakening. The grain of truth is that you can indeed use this as a quick heuristic for the correct betting odds.

However, as I've shown in the previous post, only the former probability is mathematically sound for the Sleeping Beauty problem, because awakenings do not happen at random. So it would've been very strange if we really needed to switch to a wrong model to get the correct answer under some betting schemes. Beyond serving as a quick and lossy heuristic, it would be a very bad sign if we were unable to get the optimal betting odds from the correct model.

It would mean that there is something wrong with it - that we didn't really answer the question fully and are now just rationalizing, like all the previous philosophers who endorsed a solution contradicting probability theory and then came up with some clever reasoning for why it's fine.

And of course, we do not actually need to do that. As a matter of fact, even thirders - people who are mistaken about the answer to Sleeping Beauty - can totally deal with both per experiment and per awakening bets.

Let $U(X)$ be the utility gained due to the realization of event X. Then we can calculate the expected utility of a bet on X as:

$E(X) = P(X)U(X) - \sum_i P(\bar{X}_i)U(\bar{X}_i)$, where $\bar{X}_i$ - the mutually exclusive events alternative to X, with $P(X) + \sum_i P(\bar{X}_i) = 1$
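
As a minimal illustration (my own sketch, not from the original post; the event names are placeholders), this calculation can be written as a small Python helper:

```python
def expected_utility(bet_on, p, u):
    """E(X) = P(X)U(X) minus the sum of P(Y)U(Y) over the alternative events Y."""
    return sum(p[e] * (u[e] if e == bet_on else -u[e]) for e in p)

# A bet on Heads in a single fair coin toss at 1:1 odds is utility neutral:
print(expected_utility("Heads", {"Heads": 0.5, "Tails": 0.5},
                       {"Heads": 1.0, "Tails": 1.0}))  # 0.0
```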

Thirder Per Awakening Betting

Let's start with the natural-to-them per awakening betting scheme:

On every awakening the Beauty can bet on the result of the coin toss. What betting odds should she accept?

In this betting scheme both Tails awakenings are equally rewarded, so $U(Tails\&Monday) = U(Tails\&Tuesday) = U(Tails)$.

According to thirder models:

$P(Heads) = P(Tails\&Monday) = P(Tails\&Tuesday) = \frac{1}{3}$, therefore:

$E(Heads) = \frac{1}{3}U(Heads) - \frac{1}{3}U(Tails\&Monday) - \frac{1}{3}U(Tails\&Tuesday) = \frac{1}{3}U(Heads) - \frac{2}{3}U(Tails)$

Solving $E(Heads) \geq 0$ for $U(Heads)$ we get:

$U(Heads) \geq 2U(Tails)$

Which means that the utility gained from the realization of Heads should be at least twice as big as the utility from the realization of Tails, so that betting on Heads isn't net negative.

And thus betting odds should be 1:2.
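
Plugging the thirder numbers into the expected_utility helper sketched above (again, an illustrative check, not part of the original post):

```python
# Thirder per awakening bet on Heads at 1:2 odds: U(Heads) = 2, each Tails awakening = 1.
p = {"Heads": 1/3, "Tails&Monday": 1/3, "Tails&Tuesday": 1/3}
u = {"Heads": 2.0, "Tails&Monday": 1.0, "Tails&Tuesday": 1.0}
print(expected_utility("Heads", p, u))  # ~0.0: the bet is exactly neutral
```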

Thirder Per Experiment Betting

Now, let's look into per experiment betting.

The Beauty can bet on the result of the coin toss while she is awakened, but only once per experiment. What betting odds should she accept?

From the position of thirders, this situation is a bit trickier. Here either $U(Tails\&Monday)$ or $U(Tails\&Tuesday)$ is zero, as betting on one of the Tails awakenings doesn't count. Their sum, however, is constant:

$U(Tails\&Monday) + U(Tails\&Tuesday) = U(Tails)$, taking it into account:

$E(Heads) = \frac{1}{3}U(Heads) - \frac{1}{3}\bigl(U(Tails\&Monday) + U(Tails\&Tuesday)\bigr) = \frac{1}{3}U(Heads) - \frac{1}{3}U(Tails)$

Solving $E(Heads) \geq 0$ for $U(Heads)$ we get:

$U(Heads) \geq U(Tails)$

Which means 1:1 betting odds.

Do Halfers Need to Bet on Thirders Odds?

The result from the previous section isn't exactly a secret. It even led to the misconception that halfers have to bet at thirders' odds and that betting arguments therefore validate thirdism.

Now, it has to be said that correctly reasoning halfers indeed have to bet at the same odds as thirders: 1:1 for per experiment betting and 1:2 for per awakening betting. But this is in no way a validation of thirdism; halfers have as much claim to these odds as thirders. It's only an unfortunate accident that they happened to be initially called "thirders odds".

Historically, the model most commonly associated with answering that P(Heads) = 1/2 is Lewis's. When people were comparing it with thirder models, they named the odds that the former produces "halfer odds" and the odds that the latter produces "thirder odds" - which was quite understandable at the time.

Now we know that Lewis's model is a wrong representation of halfism in Sleeping Beauty, and it indeed fails to produce correct betting odds, for reasons explored in previous posts. The correct halfer model, naturally, doesn't have such problems. But the naming had already stuck, confusing a lot of people along the way.

Halfer Per Awakening Betting

Let's see for ourselves which odds the correct model recommends, starting with the per awakening betting scheme.

On every awakening the Beauty can bet on the result of the coin toss. What betting odds should she accept?

$Tails$, $Tails\&Monday$ and $Tails\&Tuesday$ are all different names for the same outcome, as we remember, so:

$P(Tails\&Monday) = P(Tails\&Tuesday) = P(Tails) = \frac{1}{2}$

On the other hand, both Tails&Monday and Tails&Tuesday awakenings are rewarded when the coin is Tails, so the Tails outcome is rewarded twice: $U(Tails) = 2u$, where $u$ is the reward for a correct bet on a single awakening.

Solving $E(Heads) = \frac{1}{2}U(Heads) - \frac{1}{2}U(Tails) \geq 0$ for $U(Heads)$:

$U(Heads) \geq 2u$

Just as previously, we got 1:2 betting odds.

This situation is essentially making a bet on the outcome of a coin toss, where the same bet has to be repeated if the coin comes up Tails. Betting at 1:2 odds doesn't say anything about the unfairness of the coin or about having some new knowledge of its state. Instead, it's fully explained by the unfairness of the betting scheme, which rewards Tails outcomes more.

Halfer Per Experiment Betting

Now, let's check the per experiment betting scheme.

The Beauty can bet on the result of the coin toss while she is awakened, but only once per experiment. What betting odds should she accept?

This time the Tails outcome isn't rewarded twice, so everything is trivial:

$E(Heads) = \frac{1}{2}U(Heads) - \frac{1}{2}U(Tails)$

So if $U(Heads) = U(Tails)$:

$E(Heads) = 0$

And we have 1:1 betting odds. Easy as that.
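
Both betting schemes can be sanity-checked by directly simulating the experiment. Here is a minimal Monte Carlo sketch (my own, not from the post); Beauty always bets one unit on Heads, and heads_payout is what she wins when the coin lands Heads:

```python
import random

def avg_profit(n, per_awakening, heads_payout):
    """Average profit per experiment when Beauty always bets 1 unit on Heads."""
    total = 0.0
    for _ in range(n):
        if random.random() < 0.5:                   # Heads: a single awakening
            total += heads_payout
        else:                                       # Tails: two awakenings
            total -= 2.0 if per_awakening else 1.0  # each counted bet loses 1
    return total / n

print(avg_profit(100_000, per_awakening=True,  heads_payout=2.0))  # ~0: fair at 1:2
print(avg_profit(100_000, per_awakening=False, heads_payout=1.0))  # ~0: fair at 1:1
```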

Betting Odds Are a Poor Proxy For Probabilities

Why do models claiming that probabilities are different produce the same betting odds? That doesn't usually happen, does it?

Because betting odds depend on both the probabilities and the utilities of events. Usually we are dealing with situations where utilities are fixed, so probabilities are the only variable; therefore, when two models disagree about probabilities, they disagree about betting as well.

But in the Sleeping Beauty problem, the crux of the disagreement is how to correctly factorize the product $P(X)U(X)$. What happens when the Beauty has extra awakenings and extra bets? One approach is to modify the utility part. The other - to modify the probabilities.

I've already explained why the first one is correct: probabilities follow specific rules, according to which they are lawfully modified so that they keep preserving the truth. But for the sake of betting, it doesn't appear to matter.

Betting odds do not have to follow Kolmogorov's third axiom: 10:20 odds are as well defined as 1:2. Odds are just a ratio - you can always renormalize them, which you can't do with probabilities [LW · GW]. You can define a betting scheme that ignores the condition of mutual exclusivity of outcomes, which is impossible when you define a sample space [LW · GW]. Betting odds are an imperfect approximation of probability, one that cares only about the frequencies of events and not about their other statistical properties [LW · GW].

This is why incorrect thirder models manage to produce correct betting odds. All the reasons why these models are wrong no longer matter when only betting is concerned. And this is why betting is a poor proxy for probabilities: it ignores or obfuscates a lot of information.

For quite some time I've been arguing [LW(p) · GW(p)] that we can't reduce probability theory to decision theory. That while decision making and betting are an obvious application of probability, they are not its justification. That all such attempts are backwards, confused thinking.

The Sleeping Beauty problem is a great example of how thinking simply in terms of betting can lead people astray. People found models that produce correct betting odds and got stuck with them, not thinking further, believing that all the mathematical work was done and they now just needed to come up with some philosophical principle justifying the models.

And so the "Shut Up and Calculate" crowd happened to silently compute nonsense.

If a probabilistic model produces incorrect betting odds, it's clearly wrong. But if it produces correct odds, that still doesn't mean it's the right one! Producing correct betting odds is a necessary but not a sufficient condition. You also need to account for the theoretical properties of probabilities, which are not captured by betting.

If I hadn't resolved the problem in the previous post, we would still be in a conundrum, thinking that both models are valid. It's good that now we know better. And yet, there is an interesting question: can we still, somehow, despite all the aforementioned problems, come up with a decision theoretic argument distinguishing between thirdism and the correct version of halfism?

As a matter of fact, I can present you with two of them.

Utility Instability under Thirdism

The reason why in most cases disagreement about probabilities implies disagreement about bets is that we assume that while probabilities change based on available evidence, the utilities of events are constant and defined by the betting scheme. However, this is not the case with thirdism in Sleeping Beauty, which implies not only constant shifts in utilities throughout the experiment but also that these shifts can go backwards in time.

Let's investigate what probabilities are assigned to the coin being Heads on Sunday, before the experiment has started; on awakening during the experiment; and on Wednesday, when the experiment has ended. The correct model is very straightforward in this regard:

$P(Heads|Sunday) = P(Heads|Awake) = P(Heads|Wednesday) = \frac{1}{2}$

Updateless [LW · GW] and Updating [LW · GW] Thirder models do not agree on which is the correct probability for P(Heads|Sunday), but let's use common sense and accept that it's 1/2, as it should be for a fair coin toss. Therefore, for thirders:

$P(Heads|Sunday) = \frac{1}{2}, \quad P(Heads|Awake) = \frac{1}{3}, \quad P(Heads|Wednesday) = \frac{1}{2}$

Suppose that the Beauty made a bet on Sunday at 1:1 odds, that the coin will come Heads. The bet is to be resolved on Wednesday when the outcome of the coin toss is publicly announced. What does she think about this bet when she awakes during the experiment? If she follows the correct halfer model - everything is fine. She keeps thinking that the bet is neutral in utility.

But a thirder Beauty suddenly finds herself in a situation where she is more confident that the coin came Tails than she used to be. How is she supposed to think about this? Should she regret the bet and wish she had never made it?

This is the usual behavior in such circumstances. Consider the Observer Sleeping Beauty Problem [LW · GW]. There:

$P(Heads) = \frac{1}{2}$ and $P(Heads|Awake) = \frac{1}{3}$

The observer is neutral about a bet on Heads at 1:1 odds on Sunday, but if they then find that the Beauty is awakened on their work day, they would regret the bet. If they were offered to pay a minor fee to consider the bet null and void, they would be better off doing so.
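
Spelled out (assuming, as in the Observer problem, that the observer's visit day is independent of the coin, so they find the Beauty awake with probability 1/2 under Heads and 1 under Tails):

$P(Heads|Awake) = \frac{P(Awake|Heads)P(Heads)}{P(Awake)} = \frac{\frac{1}{2}\cdot\frac{1}{2}}{\frac{1}{2}\cdot\frac{1}{2} + 1\cdot\frac{1}{2}} = \frac{1}{3}$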

Would Sleeping Beauty also be better off abolishing the bet for a minor fee? No, of course not. That would lead to always paying the fee, thus predictably losing money in every experiment. But how is a thirder Beauty supposed to persuade herself not to agree?

Mathematically, abolishing such a bet is isomorphic to making an opposite bet at the same odds. And as we already established, making one per experiment bet at 1:1 odds is utility neutral, so a minor fee will be a deal breaker. The thirder's justification for this is that the utility of such a bet is halved on Tails, because only one of the Tails outcomes is rewarded.
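
To see the compensation explicitly: on awakening the thirder assigns probability 1/3 to each of Heads, Tails&Monday and Tails&Tuesday, so for the Sunday 1:1 bet on Heads with stake $u$ to remain neutral, the loss on each Tails awakening has to be treated as $u/2$:

$E = \frac{1}{3}(+u) + \frac{1}{3}\left(-\frac{u}{2}\right) + \frac{1}{3}\left(-\frac{u}{2}\right) = 0$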

But it means that a thirder Beauty should think as if the fact of her awakening in the experiment retroactively changes the utility of a bet that she has already made! Instead of changing neither probabilities nor utilities, thirdism modifies both in a compensatory way. 

A similar situation happens when the Beauty makes a bet during the experiment and then reflects on it on Wednesday. Halfer Beauty doesn't change her mind in any way, while thirder Beauty has to retroactively modify utilities of the previous bets to compensate for the back and forth changes of her probability estimates.

Which is just an unnecessarily complicated and roundabout way to arrive at the same conclusion as the correct halfer model. It doesn't bring any advantages; it just makes thinking about the problem more confusing.

Thirdism Ignores New Evidence

We already know that thirdism updates its probability estimate despite receiving no new evidence. But there is an opposite issue with it as well: it refuses to acknowledge actually relevant evidence, which may lead to confusion and suboptimal bets.

To see this let's investigate two modified settings, where the Beauty actually receives some kind of evidence on awakening.

Technicolor Sleeping Beauty

Technicolor Sleeping Beauty is a version of the original problem that I've encountered in Rachael Briggs's "Putting a Value on Beauty", where the idea is credited to Titelbaum.

The modified setting can be described as this:

Sleeping Beauty experiment, but every day the room that the Beauty is in changes its color from Red to Blue or vice versa. The initial color of the room is determined randomly, with equal probability for Red and Blue.

Ironically enough, Briggs argues that Technicolor Sleeping Beauty presents an argument in favor of thirdism, because halfer Sleeping Beauty apparently changes her estimate of P(Heads), despite the fact that the color of the room "tells Beauty nothing about the outcome of the coin toss". But this is because she is begging the question, assuming that thirders' approach is correct to begin with.

Let's start from how thirders perceive the Technicolor problem. Just as Briggs claims, from their perspective it seems completely isomorphic to the original Sleeping Beauty. They believe that the color of the room is irrelevant to the outcome of the coin toss.

And so thirder Beauty has the same probability estimate for Technicolor Sleeping Beauty as for the regular one.

Which means the same betting odds: 1:2 for per awakening betting and 1:1 for per experiment betting. Right?

And so, suppose that the Beauty, while going through the Technicolor variant, is offered one per experiment bet on Heads or Tails at odds in between 1:2 and 1:1 - for example, 2:3. Should she always refuse the bet?

Take some time to think about this.

.

.

.

.

.

No, really, it's a trick question. Think about it for at least a couple of minutes before answering.

.

.

.

.

.

Okay, if despite the name and introduction of this section and two explicit warnings, you still answered "Yes, the Beauty should always refuse to bet at these odds", then congratulations!

You were totally misled by thirdism!

The correct answer is that there is a better strategy than always refusing the bet. Namely: choose either Red or Blue beforehand and bet on Tails only when you see that the room is in this color. This way the Beauty bets 50% of the time when the coin is Heads and every time when it's Tails, which allows her to systematically win money at 2:3 odds.

This strategy is obscured from thirders but is obvious to a Beauty who follows the correct halfer model. She is fully aware that the Tails&Monday awakening is always followed by the Tails&Tuesday awakening, and so she is completely certain to observe both colors when the coin is Tails:

$P(Red|Tails) = P(Blue|Tails) = 1$

So now she can lawfully construct the Frequency Argument [LW · GW] and update. For example, if the Beauty selected Red and sees it:

$P(Heads|Red) = \frac{P(Red|Heads)P(Heads)}{P(Red|Heads)P(Heads) + P(Red|Tails)P(Tails)} = \frac{\frac{1}{2}\cdot\frac{1}{2}}{\frac{1}{2}\cdot\frac{1}{2} + 1\cdot\frac{1}{2}} = \frac{1}{3}$

Therefore, the Beauty is supposed to accept 1:2 odds for per experiment betting.

Or, alternatively, she can pick Blue and bet every time the room is Blue. The nature of the probability update is the same. The important part is that she has to precommit to a strategy where she bets on one color and doesn't bet on the other.
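
A quick Monte Carlo check of this update (my own sketch, not part of the original argument; the color schedule is simulated exactly as the problem describes):

```python
import random

def technicolor_trial():
    """One run: returns the coin result and the set of colors Beauty observes."""
    coin = random.choice(["Heads", "Tails"])
    first_day_color = random.choice(["Red", "Blue"])
    if coin == "Heads":
        return coin, {first_day_color}   # one awakening, one color
    return coin, {"Red", "Blue"}         # two awakenings, both colors

def p_heads_given_chosen_color(n=100_000, chosen="Red"):
    heads_and_seen = seen = 0
    for _ in range(n):
        coin, colors = technicolor_trial()
        if chosen in colors:             # Beauty sees her precommitted color
            seen += 1
            heads_and_seen += (coin == "Heads")
    return heads_and_seen / seen

print(p_heads_given_chosen_color())      # ~0.333, matching the update above
```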

 

Rare Event Sleeping Beauty

There is another modification of Sleeping Beauty with a similar effect.

Sleeping Beauty experiment, but the Beauty has access to a fair coin - not necessarily the one that determined her awakening routine - or any other way to generate random events.

It may seem that whether the Beauty has her own coin is completely irrelevant to the probability that a different coin - the one tossed to determine her awakening routine - came up Heads. Once again, this is how thirders usually think about such a problem. And once again, this is incorrect.

Suppose the Beauty tosses a coin several times on every awakening, and suppose she observes a particular combination of Heads and Tails - call it $s$. Observing $s$ is more likely when the initial coin came Tails and the Beauty had two awakenings and, therefore, two attempts to observe this combination.

Let $p$ be the probability to observe the combination $s$ on one try, and $q$ - the probability to observe $s$ in at least one of two independent tries:

$q = 1 - (1-p)^2 = 2p - p^2$

We can notice that $q \approx 2p$ as $p \to 0$.

Therefore, if the Beauty can potentially observe a rare event at every awakening - for instance, a specific combination $s$ with small $p$ - then, when she observes it, she can construct the Approximate Frequency Argument and update in favor of Tails:

$P(Heads|s) = \frac{\frac{1}{2}p}{\frac{1}{2}p + \frac{1}{2}q} = \frac{p}{p+q} \approx \frac{1}{3}$
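
A quick numeric check (my own sketch; the 20-toss combination length is an arbitrary illustrative choice):

```python
p = 0.5 ** 20                # probability of one specific 20-toss sequence per try
q = 1 - (1 - p) ** 2         # probability of observing it in one of two tries
posterior = (0.5 * p) / (0.5 * p + 0.5 * q)
print(posterior)             # ~0.3333: the Approximate Frequency Argument update
```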

Just like in Technicolor Sleeping Beauty, this presents a strategy allowing the Beauty to win on net while betting per experiment at odds between 1:2 and 1:1 - a strategy that eludes thirders, who apparently have already "updated on awakening", thus missing the situation where they actually were supposed to update.

Now there is a potential confusion here. Doesn't Beauty always observe some rare event [LW(p) · GW(p)]? Shouldn't she, therefore, always update in favor of Tails [LW · GW]? Try to resolve it yourself. You have all the required pieces of the puzzle.

.

.

.

.

.

.

The answer is that no, of course she should not [LW · GW]. The confusion is in not understanding the difference between the probability of observing a specific low-probability event and the probability of observing any low-probability event [LW · GW]. If the Beauty always observes an event, its probability is by definition 1 and, therefore, she can't construct the Approximate Frequency Argument. We can clearly see this in the formula above: as $p \to 1$, $q \to 1$ as well, and $P(Heads|s) = \frac{p}{p+q} \to \frac{1}{2}$.

And this is additionally supported by the betting argument in Rare Event versions of Sleeping Beauty. When the Beauty actually observes a rare event, she can systematically win money in per experiment bets at 2:3 odds, and when she does not observe a rare event, she can't.

Conclusion

So now we can clearly see that thirdism in Sleeping Beauty does not have any advantages with regard to betting. On the contrary, its constant shifts of utilities and probabilities only obfuscate the situations where the Beauty actually receives new evidence and, therefore, has to change her betting strategy.

The correct model, however, successfully deals with every betting scheme and with derivative problems such as Technicolor and Rare Event Sleeping Beauty.

We can also add a final nail to the coffin of thirdism's theoretical justifications. As we can clearly see, when the Beauty actually receives some evidence allowing her to make a Frequency Argument, it leads to changes in her optimal per experiment betting strategy - contrary to what the Updating model claims.

I think we are fully justified in discarding thirdism altogether and simply moving on, as we have resolved all the actual disagreements. And yet we will linger for a little while. Because even though thirdism is definitely not talking about the probabilities and credences that a rational agent is supposed to have, it is still talking about something, and it's a curious question what exactly it has been talking about all this time that people misinterpreted as probabilities.

In the next post we will find the answer to this question and, therefore, dissolve the last, fully semantic disagreement between halfism and thirdism.

23 comments

Comments sorted by top scores.

comment by Yair Halberstadt (yair-halberstadt) · 2024-04-04T13:38:43.026Z · LW(p) · GW(p)

The correct answer is that there is a better strategy than always refusing the bet. Namely: choose either Red or Blue beforehand and bet on Tails only when you see that the room is in this color. This way the Beauty bets 50% of the time when the coin is Heads and every time when it's Tails, which allows her to systematically win money at 2:3 odds.

 

You place $200 down, and receive $300 if the coin was indeed tails.

If the coin toss ends up heads, you have a 50% chance of losing $200 - expected utility is $-100.

If the coin toss is tails, you have a 100% chance of gaining $100 - expected utility is $100.

So you end up with expected 0 utility.

The point stands, but the odds have to be better than 2:3.

comment by simon · 2024-03-27T17:45:40.506Z · LW(p) · GW(p)

The central point of the first half or so of this post - that for E(X) = P(X)U(X) you could choose different P and U for the same E, so bets can be decoupled from probabilities - is a good one.

I would put it this way: choices and consequences are in the territory*; probabilities and utilities are in the map.

Now, it could be that some probability/utility breakdowns are more sensible than others based on practical or aesthetic criteria, and in the next part of this post ("Utility Instability under Thirdism") you make an argument against thirderism based on one such criterion.

However, your claim that Thirder Sleeping Beauty would bet differently before and after the coin toss is not correct. If Sleeping Beauty is asked before the coin toss to bet based on the same reward structure as after the toss she will bet the same way in each case - i.e. Thirder Sleeping Beauty will bet Thirder odds even before the experiment starts, if the coin toss being bet on is particularly the one in this experiment and the reward structure is such that she will be rewarded equally (as assessed by her utility function) for correctness in each awakening.

Now, maybe you find this dependence on what the coin will be used for counterintuitive, but that depends on your own particular taste.

Then, the "technicolor sleeping beauty" part seems to make assumptions where the reward structure is such that it only matters whether you bet or not in a particular universe and not how many times you bet. This is a very "Halfer" assumption on reward structure, even though you are accepting Thirder odds in this case! Also, Thirders can adapt to such a reward structure as well, and follow the same strategy.  

Finally, on Rare Event Sleeping beauty, it seems to me that you are biting the bullet here to some extent to argue that this is not a reason to favour thirderism.

I think we are fully justified in discarding thirdism altogether and simply moving on, as we have resolved all the actual disagreements.

uh....no. But I do look forward to your next post anyway.

*edit: to be more correct, they're less far up the map stack than probability and utilities. Making this clarification just in case someone might think from that statement that I believe in free will (I don't).

Replies from: Ape in the coat
comment by Ape in the coat · 2024-03-28T06:56:51.593Z · LW(p) · GW(p)

Throughout your comment you've been using the phrase "thirders odds", apparently meaning odds 1:2, not specifying whether per awakening or per experiment. This is an underspecified and confusing category which we should taboo.

As I show in the first part of the post, thirder odds are the exact same thing as halfer odds: 1:2 per awakening and 1:1 per experiment.

However, your claim that Thirder Sleeping Beauty would bet differently before and after the coin toss is not correct.

I do not claim that. I say that in order to justify not betting differently, thirders have to retroactively change the utility of a bet already made:

Mathematically, abolishing such a bet is isomorphic to making an opposite bet at the same odds. And as we already established, making one per experiment bet at 1:1 odds is utility neutral, so a minor fee will be a deal breaker. The thirder's justification for this is that the utility of such a bet is halved on Tails, because only one of the Tails outcomes is rewarded.

But it means that a thirder Beauty should think as if the fact of her awakening in the experiment retroactively changes the utility of a bet that she has already made! Instead of changing neither probabilities nor utilities, thirdism modifies both in a compensatory way. 

I critique thirdism not for making different bets - as the first part of the post explains, the bets are the same - but for their utilities not actually behaving like utilities: constantly shifting back and forth during the experiment, including shifts backwards in time, in order to compensate for the fact that their probabilities are not behaving as probabilities - because they are not sound probabilities, as explained in the previous post.

Thirder Sleeping Beauty will bet Thirder odds even before the experiment starts, if the coin toss being bet on is particularly the one in this experiment and the reward structure is such that she will be rewarded equally (as assessed by her utility function) for correctness in each awakening.

Now, maybe you find this dependence on what the coin will be used for counterintuitive, but that depends on your own particular taste.

Wait, are you claiming that thirder Sleeping Beauty is supposed to always decline the initial per experiment bet at 1:1 odds, before the coin is tossed? This is wrong - both halfers and thirders are neutral towards such bets, though they appeal to different reasoning for why.

Then, the "technicolor sleeping beauty" part seems to make assumptions where the reward structure is such that it only matters whether you bet or not in a particular universe and not how many times you bet. This is a very "Halfer" assumption on reward structure, even though you are accepting Thirder odds in this case! Also, Thirders can adapt to such a reward structure as well, and follow the same strategy.  

Some reward structures feel more natural for halfers and some for thirders - this is true. But a good model for a problem is supposed to deal with any possible betting scheme without significant difficulties. Thirders probably can arrive at the correct answer post hoc, if explicitly primed by a question: "what odds are you supposed to bet at if you bet only when the room is red?". But what I'm pointing at is that thirdism naturally fails to develop an optimal strategy for the per experiment bet in the Technicolor problem, falsely assuming that it's isomorphic to regular Sleeping Beauty. Nothing about their probabilistic model hints to them that betting only when the room is Red is the correct move. Their probability estimate is the same despite new evidence about the state of the coin toss, and so they are oblivious to the fact that there is a better strategy than always refusing the bet.

Technicolor and Rare Event problems highlight the issue that I explain in Utility Instability under Thirdism: in order to make optimal bets, thirders need to constantly keep track of not only probability changes but also utility changes, because their model keeps shifting both of them back and forth, and this can be very confusing. Halfers, on the other hand, just need to keep track of probability changes, because their utilities are stable. Basically, thirdism is strictly more complicated without any benefits, and we can discard it on the grounds of Occam's razor, if we haven't already discarded it because of its theoretical unsoundness, explained in the previous post.

Finally, on Rare Event Sleeping beauty, it seems to me that you are biting the bullet here to some extent to argue that this is not a reason to favour thirderism.

I'm confused. What bullet am I biting? How can the fact that the thirder probabilistic model misses the situation when the per experiment betting odds are actually 1:2 be an argument in favor of thirdism?

The Rare Event problem is such that the answer is about 1/3 only in some small number of cases. The halfer model correctly highlights the rule for determining which cases these are and how to develop the correct betting strategy. The thirder model just keeps answering 1/3 like a broken clock.

uh....no.

What do you still feel that is unresolved?

Replies from: simon
comment by simon · 2024-03-28T18:50:02.529Z · LW(p) · GW(p)

Throughout your comment you've been using the phrase "thirders odds", apparently meaning odds 1:2, not specifying whether per awakening or per experiment. This is an underspecified and confusing category which we should taboo.

Yeah, that was sloppy language, though I do like to think more in terms of bets than you do. One of my ways of thinking about these sorts of issues is in terms of "fair bets" - each person thinks a bet with payoffs that align with their assumptions about utility is "fair", and a bet with payoffs that align with different assumptions about utility is "unfair".  Edit: to be clear, a "fair" bet for a person is one where the payoffs are such that the betting odds where they break even matches the probabilities that that person would assign.

I do not claim that. I say that in order to justify not betting differently, thirders have to retroactively change the utility of a bet already made:

I critique thirdism not for making different bets - as the first part of the post explains, the bets are the same - but for their utilities not actually behaving like utilities: constantly shifting back and forth during the experiment, including shifts backwards in time, in order to compensate for the fact that their probabilities are not behaving as probabilities - because they are not sound probabilities, as explained in the previous post.

Wait, are you claiming that thirder Sleeping Beauty is supposed to always decline the initial per experiment bet at 1:1 odds, before the coin is tossed? This is wrong - both halfers and thirders are neutral towards such bets, though they appeal to different reasoning for why.

OK, I was also being sloppy in the parts you are responding to.

Scenario 1: bet about a coin toss, nothing depending on the outcome (so payoff equal per coin toss outcome)

  • 1:1

Scenario 2: bet about a Sleeping Beauty coin toss, payoff equal per awakening

  • 2:1 

Scenario 3: bet about a Sleeping Beauty coin toss, payoff equal per coin toss outcome 

  • 1:1

It doesn't matter if it's agreed to before or after the experiment, as long as the payoffs work out that way. Betting within the experiment is one way for the payoffs to more naturally line up on a per-awakening basis, but it's only relevant (to bet choices) to the extent that it affects the payoffs.

Now, the conventional Thirder position (as I understand it) consistently applies equal utilities per awakening when considered from a position within the experiment.

I don't actually know what the Thirder position is supposed to be from a standpoint from before the experiment, but I see no contradiction in assigning equal utilities per awakening from the before-experiment perspective as well. 

As I see it, Thirders will only regret a bet (in the sense of considering it a bad choice to enter into ex ante given their current utilities) if you do some kind of bait and switch where you don't make it clear what the payoffs were going to be up front.

But what I'm pointing at is that thirdism naturally fails to develop an optimal strategy for the per experiment bet in the Technicolor problem, falsely assuming that it's isomorphic to regular Sleeping Beauty.

Speculation; have you actually asked Thirders and Halfers to solve the problem? (while making clear the reward structure? - note that if you don't make clear what the reward structure is, Thirders are more likely to misunderstand the question asked if, as in this case, the reward structure is "fair" from the Halfer perspective and "unfair" from the Thirder perspective).

Technicolor and Rare Event problems highlight the issue that I explain in Utility Instability under Thirdism: in order to make optimal bets, thirders need to constantly keep track of not only probability changes but also utility changes, because their model keeps shifting both of them back and forth, and this can be very confusing. Halfers, on the other hand, just need to keep track of probability changes, because their utilities are stable. Basically, thirdism is strictly more complicated without any benefits, and we can discard it on the grounds of Occam's razor, if we haven't already discarded it because of its theoretical unsoundness, explained in the previous post.

A Halfer has to discount their utility based on how many of them there are, a Thirder doesn't. It seems to me, on the contrary to your perspective, that Thirder utility is more stable.

The halfer model correctly highlights the rule for determining which cases these are and how to develop the correct betting strategy. The thirder model just keeps answering 1/3 like a broken clock.

... and in my hasty reading and response I misread the conditions of the experiment (it's a "Halfer" reward structure again). (As I've mentioned before in a comment on another of your posts, I think Sleeping Beauty is unusually ambiguous, so both Halfer and Thirder perspectives are viable. But I lean toward the general perspectives of Thirders on other problems (e.g. SIA seems much more sensible (edit: in most situations) to me than SSA), so Thirderism seems more intuitive to me.)

Thirders can adapt to different reward structures but need to actually notice what the reward structure is! 

What do you still feel that is unresolved?

the things mentioned in this comment chain. Which actually doesn't feel like all that much, it feels like there's maybe one or two differences in philosophical assumptions that are creating this disagreement (though maybe we aren't getting at the key assumptions).

Edited to add: The criterion I mainly use to evaluate probability/utility splits is typical reward structure - you should assign probabilities/utilities such that a typical reward structure seems "fair", so you don't wind up having to adjust for different utilities when the rewards have the typical structure (you do have to adjust if the reward structure is atypical, and thus seems "unfair"). 

This results in me agreeing with SIA in a lot of cases. An example of an exception is Boltzmann brains. A typical reward structure would give no reward for correctly believing that you are a Boltzmann brain. So you should always bet in realistic bets as if you aren't a Boltzmann brain, and for this to be "fair", I set P=0 instead of SIA's U=0.  I find people believing silly things about Boltzmann brains like taking it to be evidence against a theory if that theory proposes that there exists a lot of Boltzmann brains. I think more acceptance of the setting of P=0 instead of U=0 here would cut that nonsense off. To be clear, normal SIA does handle this case fine (that a theory predicting Boltzmann brains is not evidence against it), but setting P=0 would make it more obvious to people's intuitions.

In the case of Sleeping Beauty, this is a highly artificial situation that has been pared down of context to the point that it's ambiguous what would be a typical reward structure, which is why I consider it ambiguous.

Replies from: Ape in the coat
comment by Ape in the coat · 2024-03-31T08:24:25.295Z · LW(p) · GW(p)

One of my ways of thinking about these sorts of issues is in terms of "fair bets"

Well, as you may see, it's also not helpful. Halfers and thirders disagree on which bets they consider "fair" but still agree on which bets to make, whether they call them fair or not. The extra category of a "fair bet" just adds another semantic disagreement between halfers and thirders. Once we specify whether we are talking about a per experiment or per awakening bet and at which odds, both theories are supposed to agree.

I don't actually know what the Thirder position is supposed to be from a standpoint from before the experiment, but I see no contradiction in assigning equal utilities per awakening from the before-experiment perspective as well.

Thirders tend to agree with halfers that P(Heads|Sunday) = P(Heads|Wednesday) = 1/2. Likewise, because they make the same bets as the halfers, they have to agree on utilities. So it means that thirders' utilities go back and forth, which is weird and confusing behavior.

A Halfer has to discount their utility based on how many of them there are, a Thirder doesn't. It seems to me, on the contrary to your perspective, that Thirder utility is more stable

You mean how many awakenings? That if there were not two awakenings on Tails but, for instance, ten, halfers would have to think that U(Heads) has to be ten times as much as U(Tails) for a utility neutral per awakening bet?

Sure, but it's completely normal behavior. It's fine to have different utility estimates for different problems and different payout schemes - such things always happen. Sleeping Beauty with ten awakenings on Tails is a different problem than Sleeping Beauty with only two, so there is no reason to expect that the utilities of the events have to be the same. The point is that as long as we have specified the experiment and a betting scheme, the utilities have to be stable.

And thirder utilities are modified during the experiment. They are not just specified by a betting scheme, they go back and forth based on the knowledge state of the participant - behave the way probabilities are supposed to behave. And that's because they are partially probabilities - a result of incorrect factorization of E(X).

Speculation; have you actually asked Thirders and Halfers to solve the problem? (while making clear the reward structure?

I'm asking it right in the post, explicitly stating that the bet is per experiment and recommending to think about the question more. What did you yourself answer?

My initial claim that the thirder model confuses people about this per experiment bet is based on the fact that a pro-thirder paper which introduced the Technicolor Sleeping Beauty problem totally fails to understand why the halfer scoring rule updates in it. I may be putting too much weight on the views of Rachael Briggs in particular, but the paper apparently was peer reviewed and so on, so it seems to be decent evidence.

... and in my hasty reading and response I misread the conditions of the experiment

Well, I guess that answers my question.

Thirders can adapt to different reward structures but need to actually notice what the reward structure is!

Probably, but I've yet to see one actually derive the correct answer on their own, not post hoc after it was already spoiled or after consulting the correct model. I suppose I should have asked the question beforehand, and then publish the answer, oh well. Maybe I can still do it and ask nicely not to look.

The criterion I mainly use to evaluate probability/utility splits is typical reward structure

Well, if every other thirder reasons like this, that would indeed explain the issue.

You can't base the definition of probability on your intuitions about fairness. Or, rather, you can, but then you risk contradicting the math. Probability is a mathematical concept with very specific properties. In my previous post I talk about it specifically and show that thirder probabilities for Sleeping Beauty are ill-defined.

Replies from: simon
comment by simon · 2024-03-31T20:19:34.239Z · LW(p) · GW(p)

Well, as you may see, it's also not helpful

My reasoning explicitly puts instrumental rationality ahead of epistemic. I hold this view precisely to the degree which I do in fact think it is helpful.

The extra category of a "fair bet" just adds another semantic disagreement between halfers and thirders. 

It's just a criterion by which to assess disagreements, not adding something more complicated to a model.

Regarding your remarks on these particular experiments:

If someone thinks the typical reward structure is some reward structure, then they'll by default guess that a proposed experiment has that reward structure.

This reasonably can be expected to apply to halfers or thirders. 

If you convince me that halfer reward structure is typical, I go halfer. (As previously stated since I favour the typical reward structure). To the extent that it's not what I would guess by default, that's precisely because I don't intuitively feel that it's typical and feel more that you are presenting a weird, atypical reward structure!

And thirder utilities are modified during the experiment. They are not just specified by a betting scheme, they go back and forth based on the knowledge state of the participant - behave the way probabilities are supposed to behave. And that's because they are partially probabilities - a result of incorrect factorization of E(X).

Probability is a mathematical concept with very specific properties. In my previous post I talk about it specifically and show that thirder probabilities for Sleeping Beauty are ill-defined.

I've previously shown that some of your previous posts incorrectly model the Thirder perspective, but I haven't carefully reviewed and critiqued all of your posts. Can you specify exactly what model of the Thirder viewpoint you are referencing here? (This will not only help me critique it but also help me determine what exactly you mean by the utilities changing in the first place - i.e., do you count Thirders evaluating the total utility of a possibility branch more highly when there are more of them as a "modification"? I would not consider this a "modification".)

comment by Duschkopf · 2024-04-16T19:26:14.558Z · LW(p) · GW(p)

Rules for per experiment betting seem to be imprecise. What exactly does it mean that Beauty can bet only once per experiment? Does it mean that she is offered the bet only once in case of Tails? If so, is she offered the bet on Monday or Tuesday or is the day randomly selected? Or does it mean that she is offered the bet on both Monday and Tuesday and only one bet counts if she accepts both? If so, which one? Monday bet, Tuesday bet, or is it randomly selected?

Depending on the answer, a Thirder could base his decision on:

P(H/Today is Monday)=1/2, P(H/Today is my last awakening)=1/2, or P(H/Today is the randomly selected day my bet counts/is offered to me)=1/2,

and therefore escape utility instability?

Replies from: Ape in the coat
comment by Ape in the coat · 2024-04-17T08:05:55.867Z · LW(p) · GW(p)

There are indeed ways to obfuscate the utility instability under thirdism with different betting schemes where it's less obvious, as the probability relevant to betting isn't P(Heads|Awake) = 1/3 but one of those you mention, which equal 1/2.

The way to define the scheme specifically for P(Heads|Awake) is this: you get asked to bet on every awakening. One agreement is sufficient, and only one agreement counts. No random selecting takes place.

This way the Beauty doesn't get any extra evidence when she is asked to bet, therefore she can't update her credence for the coin being Heads based on the sole fact of being asked to bet, the way you propose.

Replies from: Duschkopf
comment by Duschkopf · 2024-04-17T17:24:51.298Z · LW(p) · GW(p)

Sure, if the bet is offered only once per experiment, Beauty receives new evidence (from a thirder's perspective) and she could update.

In case the bet is offered on every awakening: do you mean if she gives conflicting answers on Monday and Tuesday that the bet nevertheless is regarded as accepted?

My initial idea was that if, for example, only her Monday answer counts and Beauty knows that, she could reason that when her answer counts it is Monday, arriving at the conclusion that it is reasonable to act as if it was Monday on every awakening, thus grounding her answer on P(H/Monday)=1/2. The same logic holds for the rules „last awakening counts" and „random awakening counts".

Replies from: Ape in the coat
comment by Ape in the coat · 2024-04-18T06:07:03.301Z · LW(p) · GW(p)

In case the bet is offered on every awakening: do you mean if she gives conflicting answers on Monday and Tuesday that the bet nevertheless is regarded as accepted?

Yes I do. 

Of course, if the experiment is run as stated she wouldn't be able to give conflicting answers, so the point is moot. But having a strict algorithm for resolving such theoretical cases is a good thing anyway.

My initial idea was that if, for example, only her Monday answer counts and Beauty knows that, she could reason that when her answer counts it is Monday, arriving at the conclusion that it is reasonable to act as if it was Monday on every awakening, thus grounding her answer on P(H/Monday)=1/2. The same logic holds for the rules „last awakening counts" and „random awakening counts".

Yes, I got it. As a matter of fact, this is unlawful. A probability estimate is about the evidence you receive, not about what "counts" for a betting scheme. If the Beauty receives the same evidence when her awakening counts and when it doesn't count, she can't update her probability estimate. If, in order to arrive at the correct answer, she needs to behave as if every day is Monday, it means that there is something wrong with her model.

Thankfully for thirdism, she does not have to do it. She can just assign zero utility to Tuesday awakening and get the correct betting odds.

Anyway, all this is quite tangential to the question of utility instability, which is about the Beauty making a bet on Sunday and then reflecting on it during the experiment, even if no bets are proposed. According to thirdism, the probability of the coin being Heads changes on awakening, so, in order for the Beauty not to regret making an optimal bet on Sunday, her utility has to change as well. Therefore, utility instability.

Replies from: Duschkopf
comment by Duschkopf · 2024-04-20T18:26:10.457Z · LW(p) · GW(p)

Honestly, I do not see any unlawful reasoning going on here. First of all, it's certainly important to distinguish between a probability model and a strategy. The job of a probability model is simply to suggest the probability of certain events and to describe how probabilities are affected by the realization of other events. A strategy, on the other hand, is to guide decision making to arrive at certain predefined goals.

My point is that the probabilities a model suggests you to have based on the currently available evidence do NOT necessarily have to match the probabilities that are relevant to your strategy and decisions. If Beauty is awake and doesn't know if it is the day her bet counts, it is in fact a rational strategy to behave and decide as if her bet counts today. If she knows that her bet only counts on Monday and her probability model suggests that „Today is Monday" is relevant for H, then ideal rationality requires her to base her decision on P(H/Monday), because she knows that Monday is realized when her decision counts. This guarantees that on her Monday awakening, when her decision counts, she is calculating the probability for heads based on all relevant evidence that is realized on that day.

It is true that the thirder model does not suggest such a strategy, but suggesting strategies and therefore suggesting which probabilities are relevant for decisions is not the job of a probability model anyway. Similar is the case of the Technicolor Beauty: The strategy „only updating if Red“ is neither suggested nor hinted by your model. All your model suggests are probabilities conditional on the realization of certain events. It can’t tell you to treat the observation „Red room“ as a realization of the event „There is an awakening in a red room“ while treating the observation „Blue room“ merely as a realization of the event „There is an awakening in a red or a blue room“ instead of „There is an awakening in a blue room“. The observation of a blue room is always a realization of both of these events, and it is your strategy „tracking red“ and not your probability model that suggests to prefer one over the other as the relevant evidence to calculate your probabilities. I had been thinking over this for a while after I recently discovered this „Updating only if Red“-strategy for myself and how this strategy could be directly derived from the halfer model. But I honestly see no better justification to apply it than the plain fact that it proves to be more successful in the long run.

comment by Signer · 2024-03-28T07:34:35.144Z · LW(p) · GW(p)

mathematically sound

*ethically

Utility Instability under Thirdism

Works against Thirdism in the Fissure experiment too.

Technicolor Sleeping Beauty

I mean, if you are going to precommit to the right strategy anyway, why do you even need probability theory? The whole question is how do you decide to ignore that P(Head|Blue) = 1/3, when you chose Red and see Blue. And how is it not "a probabilistic model produces incorrect betting odds", when you need to precommit to ignore it?

Replies from: Ape in the coat
comment by Ape in the coat · 2024-03-28T08:28:50.946Z · LW(p) · GW(p)

*ethically

No, I'm not making any claims about ethics here, just math.

Works against Thirdism in the Fissure experiment too.

Yep, because it's wrong in Fissure as well. But I'll be talking about it later.

I mean, if you are going to precommit to the right strategy anyway, why do you even need probability theory? 

To understand whether you should precommit to any strategy and, if you should, then which one. The fact that

P(Heads|Blue) = P(Heads|Red) = 1/3

but

P(Heads|Blue or Red) = 1/2

means, that you may precommit to either Blue or Red and it doesn't matter which, but if you don't precommit, you won't be able to guess Tails better than chance per experiment.

The whole question is how do you decide to ignore that P(Head|Blue) = 1/3, when you chose Red and see Blue. And how is it not "a probabilistic model produces incorrect betting odds", when you need to precommit to ignore it?

You do not ignore it. When you choose Red and see that the walls are blue, you do not observe the event "Blue". You observe the outcome "Blue", which corresponds to the event "Blue or Red", because the sigma-algebra of your probability space is affected by your precommitment [LW · GW].

Replies from: Signer
comment by Signer · 2024-03-28T08:49:32.287Z · LW(p) · GW(p)

You observe the outcome “Blue”, which corresponds to the event “Blue or Red”.

So you bet 1:1 on Red after observing this “Blue or Red”?

Replies from: Ape in the coat
comment by Ape in the coat · 2024-03-28T09:48:18.727Z · LW(p) · GW(p)

Yes! There is 50% chance that the coin is Tails and so the room is to be Red in this experiment.

Replies from: Signer
comment by Signer · 2024-03-28T10:29:07.755Z · LW(p) · GW(p)

No, I mean the Beauty awakes, sees Blue, gets a proposal to bet on Red with 1:1 odds, and you recommend accepting this bet?

Replies from: Ape in the coat
comment by Ape in the coat · 2024-03-28T11:18:29.388Z · LW(p) · GW(p)

Yes, if the bet is about whether the room takes the color Red in this experiment. Which is what event "Red" means in Technicolor Sleeping Beauty according to the correct model. The fact that you do not observe event Red in this awakening doesn't mean that you don't observe it in the experiment as a whole.

The situation somewhat resembles learning that today is Monday and still being ready to bet at 1:1 that a Tuesday awakening will happen in this experiment. Though with colors there is actually an update, from 3/4 to 1/2.

What you probably tried to ask is whether you should agree to bet at 1:1 odds that the room is Red in this particular awakening, after you wake up and see that the room is Blue. And the answer is no, you shouldn't. But the probability space for Technicolor Sleeping Beauty is not talking about probabilities of events happening in this awakening, because most of them are ill-defined for reasons explained in the previous post.

Replies from: Signer
comment by Signer · 2024-03-28T11:58:50.625Z · LW(p) · GW(p)

And the answer is no, you shouldn’t. But the probability space for Technicolor Sleeping Beauty is not talking about probabilities of events happening in this awakening, because most of them are ill-defined for reasons explained in the previous post.

So probability theory can't possibly answer whether I should take free money, got it.

And even if "Blue" is "Blue happens during experiment", you wouldn't accept worse odds than 1:1 for Blue, even when you see Blue?

Replies from: Ape in the coat
comment by Ape in the coat · 2024-03-30T04:49:51.680Z · LW(p) · GW(p)

So probability theory can't possibly answer whether I should take free money, got it.

No, that's not what I said. You just need to use a different probability space with a different event - "observing Red on any particular day of the experiment".

You can do this because on every day the probability of observing each color is the same. Unlike, say, Tails in the initial coin toss, whose probability is 1/2 on Monday and 1 on Tuesday.

It's indeed a curious thing which I wasn't thinking about, because you can arrive at the correct betting odds on the color of the room for any day using the correct model for Technicolor Sleeping Beauty. As P(Red) = P(Blue) and the rewards are mutually exclusive, U(Red) = U(Blue) and therefore 1:1 odds. But this was sloppy of me, because to formally update when you observe the outcome you still need an appropriate separate probability space, even if the update is trivial.

So thank you for bringing this to my attention; I'm going to talk more about it in a future post.

comment by Mikhail Samin (mikhail-samin) · 2024-03-28T21:53:02.129Z · LW(p) · GW(p)

Sleeping Beauty is an edge case where different reward structures are intuitively possible, and so people imagine different game payout structures behind the definition of "probability". Once the payout structure is fixed, the confusion is gone. With a fixed payout structure & preference framework rewarding the number you output as "probability", people don't have a disagreement about what is the best number to output. Sleeping beauty is about definitions.

And still, I see posts arguing that if a tree falls on a deaf Sleeping Beauty, in a forest with no one to hear it, it surely doesn’t produce a sound, because here’s how humans perceive sounds, which is the definition of a sound, and there are demonstrably no humans around the tree. (Or maybe that it surely produces the sound because here’s the physics of the sound waves, and the tree surely abides by the laws of physics, and there are demonstrably sound waves.)

This is arguing about definitions. You feel strongly that “probability” is that thing that triggers the “probability” concept neuron in your brain. If people have a different concept triggering “this is probability”, you feel like they must be wrong, because they’re pointing at something they say is a sound and you say isn’t.

Probability is something defined in math by necessity. There’s only one way to do it to not get exploited in natural betting schemes/reward structures that everyone accepts when there are no anthropics involved. But if there are multiple copies of the agent, there’s no longer a single possible betting scheme defining a single possible “probability”, and people draw the boundary/generalise differently in this situation.

You all should just call these two probabilities two different words instead of arguing which one is the correct definition for "probability".

Replies from: Ape in the coat
comment by Ape in the coat · 2024-03-29T05:26:45.875Z · LW(p) · GW(p)

To be frank, it feels as if you didn't read any of my posts on Sleeping Beauty before writing this comment. That you are simply annoyed when people argue about substanceless semantics - and, believe me, I sympathise enormously! - that you assume I'm doing the same, based on shallow pattern matching ("talks about Sleeping Beauty -> semantic disagreement"), and spill your annoyance at me without validating whether your assumption is actually correct.

Which is a shame, because I've designed this whole series of posts with people like you in mind - someone who starts from the assumption that there are two valid answers, because that is the assumption I myself used to be quite sympathetic to, until I actually went and checked.

If that's indeed the case, please start here [LW · GW], and then I'd appreciate it if you actually engaged with the points I made, because that post addresses the kind of criticism you are making here.

If you have actually read all my Sleeping Beauty posts, seen me highlight the very specific mathematical disagreements between halfers and thirders, and seen how utterly ungrounded the idea of using probability theory with "centred possible worlds" is, then I don't really understand how this kind of appeal to "both sides still have a point" can be a valid response.

Anyway, I'm going to address your comment step by step.

Sleeping Beauty is an edge case where different reward structures are intuitively possible

Different reward structures are possible in any probability theory problem. "Make a bet on a coin toss, but if the outcome is Tails the bet is repeated three times, and if it's Heads you get punched in the face" is a completely possible reward structure for a simple coin toss problem. Is it unintuitive? Granted. But that is beside the point: mathematical rules are supposed to always work, even in non-intuitive cases.
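To make that concrete with the expected utility formula from the beginning of this post - suppose you bet on Tails; the stake s, the net payout odds o, and the disutility p of the punch are illustrative parameters of this sketch:

$$E[U] = P(Tails) \cdot U(Tails) + P(Heads) \cdot U(Heads) = \frac{1}{2} \cdot 3so - \frac{1}{2}(s + p)$$

P(Heads) and P(Tails) stay exactly 1/2; the exotic reward structure lives entirely inside the utilities.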

Once the payout structure is fixed, the confusion is gone.

People should agree on which bets to make - this is true and this is exactly what I show in the first part of this post. But the mathematical concept of "probability" is not just about bets - which I talk about in the middle part of this post. A huge part of the confusion is still very much present. Or so it was, until I actually resolved it in the previous post.

Sleeping beauty is about definitions.

There definitely is a semantic component to the disagreement between halfers and thirders. But it's the least interesting one, and that's why I'm postponing that discussion until the next post.

The thing you seem to be missing is that there is also a real, objective disagreement, which is obfuscated by the semantic one. People noticed that halfers and thirders use different definitions, concluded that semantics is all there is, and decided not to look further. But they totally should have.

My last two posts talk about these objective disagreements. Is there an update on awakening or is there not? There is disagreement about it even between thirders, who apparently agree on the definition of "probability". Are the ways halfers and thirders define probability formally correct? It's a strictly defined mathematical concept, mind you, not some similarity-cluster category border like "sound". Are Tails&Monday and Tails&Tuesday mutually exclusive events? You can't just define mutual exclusivity however you like.
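That last question can even be checked empirically. Here is a minimal sketch of mine (the function name and Monte Carlo setup are illustrative, in the spirit of the statistical analysis post):

```python
import random

def co_occurrence_rate(n_runs=100_000):
    """Fraction of runs in which Tails&Monday and Tails&Tuesday both happen."""
    both = 0
    for _ in range(n_runs):
        tails = random.random() < 0.5
        tails_monday = tails     # on Tails, Beauty is awakened on Monday...
        tails_tuesday = tails    # ...and on Tuesday of the very same run
        both += tails_monday and tails_tuesday
    return both / n_runs

print(co_occurrence_rate())  # ~0.5; mutually exclusive events would give 0
```

The two events happen together in about half of all runs, while mutually exclusive events can never co-occur.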

Probability is something defined in math by necessity.

Probability is a measure function over an event space. And if, for some mathematical reason, you can't construct an event space, your "probability" is ill-defined.
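For reference, the standard construction being invoked here: a probability space is a triple

$$(\Omega, \mathcal{F}, P), \qquad P: \mathcal{F} \to [0,1], \qquad P(\Omega) = 1, \qquad P\Big(\bigcup_i A_i\Big) = \sum_i P(A_i) \text{ for pairwise disjoint } A_i$$

where $\Omega$ is the sample space of mutually exclusive outcomes and $\mathcal{F}$ is a $\sigma$-algebra of events over it. No valid event space - no measure, and therefore no probability.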

You all should just call these two probabilities by two different names instead of arguing which one is the correct definition of "probability".

I'm doing both. I've shown that only one of these things is formally a probability, and in the next post I'm going to define the other thing and explore its properties.

Replies from: mikhail-samin
comment by Mikhail Samin (mikhail-samin) · 2024-03-29T07:00:20.993Z · LW(p) · GW(p)

I read the beginning and skimmed through the rest of the linked post. It is what I expected it to be.

We are talking about "probability" - a mathematical concept with a quite precise definition. How come we still have ambiguity about it?

Reading E.T. Jaynes might help.

Probability is what you get as a result of some natural desiderata related to payoff structures. When anthropics are involved, there are multiple ways to extend the desiderata, which produce different numbers that you should say depending on what you get paid for/what you care about, and accordingly different math. When there’s only a single copy of you, there’s only one kind of function, and everyone agrees on a function and then strictly defines it. When there are multiple copies of you, there are multiple possible ways you can be paid for having a number that represents something about the reality, and different generalisations of probability are possible.

Replies from: Ape in the coat
comment by Ape in the coat · 2024-03-30T06:22:56.810Z · LW(p) · GW(p)

This is surprising to me. Are you up for a more detailed discussion? What do you think about the statistical analysis [LW · GW] and the debunking of centred possible worlds [LW · GW]? I haven't seen these points raised or addressed before, and they are definitely not about semantics. The fact that sequential events are not mutually exclusive [LW · GW] can be formally proven [LW(p) · GW(p)]. It's not a matter of perspective at all! We could use the dialogues feature, if you'd like.

Probability is what you get as a result of some natural desiderata related to payoff structures. 

This is a vague gesture towards a similarity cluster, not an actual definition. Remove the fancy words and you end up with "Probability has something to do with betting". Yes, it does. In this post I even specify exactly what it does. You don't need to read E.T. Jaynes to discover this revelation. The definition of expected utility is much more helpful.

When anthropics are involved, there are multiple ways to extend the desiderata, which produce different numbers that you should say depending on what you get paid for/what you care about, and accordingly different math.

There are always multiple ways to "extend the desiderata". But more importantly, you don't have to give different probability estimates depending on what you get paid for or what you care about. This is the exact kind of nonsense that I'm calling out in this post. Probabilities are about what evidence you have. Utilities are about what you care about. You don't need to use thirder probabilities for per-awakening betting [LW · GW]. Do you disagree with me here?
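To spell out the linked point in one line - with an illustrative per-awakening stake of my choosing: win x when the coin is Heads, lose y when it is Tails, the bet resolving once under Heads and twice under Tails:

$$E[U] = P(Heads) \cdot x - P(Tails) \cdot 2y = \frac{1}{2}x - \frac{1}{2} \cdot 2y$$

This breaks even exactly at x = 2y - the familiar 1:2 odds on Heads, derived while P(Heads) remains 1/2.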

When there’s only a single copy of you, there’s only one kind of function, and everyone agrees on a function and then strictly defines it. When there are multiple copies of you, there are multiple possible ways you can be paid for having a number that represents something about the reality, and different generalisations of probability are possible.

How is this different from talking about the probability that a specific person observes an event versus the probability that any person from a group observes it? The fact that the people in the group are exact copies doesn't suddenly make anthropics a separate magisterium.
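An illustrative non-anthropic version of this distinction (the two-person setup is mine): a coin is tossed over a group of two people; on Tails both of them are shown a signal, on Heads only one of them, chosen at random. Then

$$P(\text{a specific person sees it}) = \frac{1}{2} \cdot 1 + \frac{1}{2} \cdot \frac{1}{2} = \frac{3}{4}, \qquad P(\text{someone in the group sees it}) = 1$$

Two different, perfectly standard questions, answered by ordinary probability theory - no exact copies required.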

Moreover, there are no independent copies in Sleeping Beauty. On Tails, there are two sequential time states. The fact that people are trying to make a sample space out of them directly contradicts the definition of a sample space.

When we are talking just about betting, one can always come up with one's own functions - one's own way to separate the expected utility of an event into "utility" and "probability". But then these "utilities" will be constantly shifting upon receiving new evidence, while the "probabilities" will occasionally ignore new evidence and shift for other reasons. Pointing at this kind of weird behavior is a completely reasonable reaction. Can a person still use such definitions consistently? Sure. But it is not a way to carve reality at its joints. And I'm not just talking about betting: I specifically wrote a whole post about the fundamental mathematical reasons before starting to talk about it.
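One way to make the shifting concrete (my numbers, in the spirit of the utility-instability argument earlier in this post): consider a thirder facing a 1:1 per-experiment bet on Tails with stake s. Since the bet is fair, she should be indifferent, which forces her to discount the win to s/2:

$$\frac{2}{3} \cdot \frac{s}{2} - \frac{1}{3} \cdot s = 0$$

Yet upon learning that it is Monday - which brings her credence in Tails back to 1/2 - the very same dollar amount has to be valued at face value again to keep the bet fair. The "utility" shifted purely because new evidence arrived.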