Posts

Should you refuse this bet in Technicolor Sleeping Beauty? 2024-04-04T08:55:09.206Z
Beauty and the Bets 2024-03-27T06:17:27.516Z
The Solution to Sleeping Beauty 2024-03-04T06:46:35.337Z
Lessons from Failed Attempts to Model Sleeping Beauty Problem 2024-02-20T06:43:04.531Z
Why Two Valid Answers Approach is not Enough for Sleeping Beauty 2024-02-06T14:21:58.912Z
Has anyone actually changed their mind regarding Sleeping Beauty problem? 2024-01-30T08:34:43.904Z
Another Non-Anthropic Paradox: The Unsurprising Rareness of Rare Events 2024-01-21T15:58:14.236Z
Anthropical Paradoxes are Paradoxes of Probability Theory 2023-12-06T08:16:26.846Z
Antropical Probabilities Are Fully Explained by Difference in Possible Outcomes 2023-11-09T15:34:03.406Z
Conservation of Expected Evidence and Random Sampling in Anthropics 2023-09-03T06:55:10.003Z
Anthropical Motte and Bailey in two versions of Sleeping Beauty 2023-08-02T07:08:42.437Z
Is Adam Elga's proof for thirdism in Sleeping Beauty still considered to be sound? 2023-07-16T14:11:23.214Z
The world where LLMs are possible 2023-07-10T08:00:11.556Z
Against sacrificing AI transparency for generality gains 2023-05-07T06:52:33.531Z
The Futility of Status and Signalling 2022-11-13T17:14:57.480Z
Book Review: Free Will 2021-10-11T18:41:35.549Z

Comments

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-04-23T07:42:36.054Z · LW · GW

I knew that not any string of English words gets a probability, but I was naïve enough to think that all statements that are either true or false get one.

Well, I think this one is actually correct. But, as I said in the previous comment, the statement "Today is Monday" doesn't actually have a coherent truth value throughout the probability experiment. It's not either True or False. It's either True or True and False at the same time!

I was hoping that this sequence of posts which kept saying “don’t worry about anthropics, just be careful with the basics and you’ll get the right answer” would show how to answer all possible variations of these “sleep study” questions… instead it turns out that it answers half the questions (the half that ask about the coin) while the other half is shown to be hopeless… and the reason why it’s hopeless really does seem to have an anthropics flavor to it.

We can answer every coherently formulated question. Everything that is formally defined has an answer. Being careful with the basics allows us to understand which questions are coherent and which are not. This is the same principle as with every probability theory problem.

Consider the Sleeping Beauty experiment without memory loss. There, the event Monday xor Tuesday also can't be said to always happen. And likewise "Today is Monday" also doesn't have a stable truth value throughout the whole experiment.

Once again, we can't express the Beauty's uncertainty between the two days using probability theory. We are just not paying attention to it, because by the conditions of the experiment the Beauty is never in such a state of uncertainty. If she remembers a previous awakening then it's Tuesday; if she doesn't - then it's Monday.

All the pieces of the issue are already present. The addition of memory loss just makes it obvious that there is a problem with our intuition.

Comment by Ape in the coat on When is a mind me? · 2024-04-18T08:15:04.330Z · LW · GW

"You should anticipate having both experiences" sounds sort of paradoxical or magical, but I think this stems from a verbal confusion.

You can easily clear this confusion if you rephrase it as "You should anticipate having any of these experiences". Then it's immediately clear that we are talking about two separate screens. And it's also clear that our curiosity isn't actually satisfied. The question "which one of these two will actually be the case" is still very much on the table.

Rob-y feels exactly as though he was just Rob-x, and Rob-z also feels exactly as though he was just Rob-x

Yes, this is obvious. Still, as soon as we get Rob-y and Rob-z, they are not "metaphysically the same person". When Rob-y says "I" he is referring to Rob-y, not Rob-z, and vice versa. More specifically, Rob-y is referring to one causal curve through time and Rob-z is referring to another causal curve through time. These two curves are the same up to some point, but then they are not.

Comment by Ape in the coat on Beauty and the Bets · 2024-04-18T06:07:03.301Z · LW · GW

In case the bet is offered on every awakening: do you mean if she gives conflicting answers on Monday and Tuesday that the bet nevertheless is regarded as accepted?

Yes, I do.

Of course, if the experiment is run as stated she wouldn't be able to give conflicting answers, so the point is moot. But having a strict algorithm for resolving such theoretical cases is a good thing anyway.

My initial idea was, that if for example only her Monday answer counts and Beauty knows that, she could reason that when her answer counts it is Monday, arriving at the conclusion that it is reasonable to act as if it was Monday on every awakening, thus grounding her answer on P(H/Monday)=1/2. Same logic holds for rule „last awakening counts“ and „random awakening counts“.

Yes, I got it. As a matter of fact, this is unlawful. A probability estimate is about the evidence you receive, not about what "counts" for a betting scheme. If the Beauty receives the same evidence when her awakening counts and when it doesn't, she can't update her probability estimate. If in order to arrive at the correct answer she needs to behave as if every day is Monday, it means that there is something wrong with her model.

Thankfully for thirdism, she does not have to do it. She can just assign zero utility to the Tuesday awakening and get the correct betting odds.

Anyway, all this is quite tangential to the question of utility instability, which is about the Beauty making a bet on Sunday and then reflecting on it during the experiment, even if no bets are proposed. According to thirdism, the probability of the coin being Heads changes on awakening, so, in order for the Beauty not to regret making an optimal bet on Sunday, her utility has to change as well. Therefore, utility instability.
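
Here is a minimal sketch of that arithmetic, with hypothetical unit stakes (the 1:1 per-experiment bet on Heads settled on Wednesday is my own illustrative setup, not anything from the post):

```python
# A sketch of utility instability under thirdism (stakes are hypothetical).
from fractions import Fraction

p_heads_sunday = Fraction(1, 2)  # both halfers and thirders agree on Sunday
p_heads_awake = Fraction(1, 3)   # thirder credence upon awakening
stake = 1                        # win 1 if Heads, lose 1 if Tails

# On Sunday the 1:1 per-experiment bet on Heads is exactly neutral:
assert p_heads_sunday * stake - (1 - p_heads_sunday) * stake == 0

# On awakening, with unchanged utilities, the same bet looks bad:
ev_awake = p_heads_awake * stake - (1 - p_heads_awake) * stake
print(ev_awake)  # -1/3: the thirder now regrets the Sunday bet...

# ...unless she rescales utilities, e.g. by weighting the Tails loss
# at one half because it "spans two awakenings":
assert p_heads_awake * stake - (1 - p_heads_awake) * stake * Fraction(1, 2) == 0
```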

Comment by Ape in the coat on Beauty and the Bets · 2024-04-17T08:05:55.867Z · LW · GW

There are indeed ways to obfuscate the utility instability under thirdism by different betting schemes where it's less obvious, as the probability relevant to betting isn't P(Heads|Awake) = 1/3 but one of those you mention which equal 1/2.

The way to define the scheme specifically for P(Heads|Awake) is this: you get asked to bet on every awakening. One agreement is sufficient, and only one agreement counts. No random selection takes place.

This way the Beauty doesn't get any extra evidence when she is asked to bet, therefore she can't update her credence for the coin being Heads based on the sole fact of being asked to bet, the way you propose.

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-04-17T06:37:29.267Z · LW · GW

This makes me uncomfortable. From the perspective of sleeping beauty, who just woke up, the statement “today is Monday” is either true or false (she just doesn’t know which one). Yet you claim she can’t meaningfully assign it a probability. This feels wrong, and yet, if I try to claim that the probability is, say, 2/3, then you will ask me “in what sample space?” and I don’t know the answer.

Where does the feeling of wrongness come from? Were you under the impression that probability theory promised us to always assign some measure to any statement in natural language? It just so happens that most of the time we can construct an appropriate probability space. But the actual rule is about whether or not we can construct a probability space, not about whether or not something is a statement in natural language.

Is it really so surprising that a person who is experiencing amnesia and the repetition of the same experience, while being fully aware of the procedure, can't meaningfully assign credence to "this is the first time I have this experience"? Don't you think there has to be some kind of problem with the Beauty's knowledge state? The situation where, due to memory erasure, the Beauty loses the ability to coherently reason about some statements makes much more sense than the alternative proposed by thirdism - according to which the Beauty becomes more confident in the state of the coin than she would've been if she didn't have her memory erased.

Another intuition pump is that “today is Monday” is not actually True xor False from the perspective of the Beauty. From her perspective it's True xor (True and False). This is because on Tails, the Beauty isn't reasoning just for some one awakening - she is reasoning for both of them at the same time. When she awakens the first time the statement "today is Monday" is True, and when she awakens the second time the same statement is False. So the statement "today is Monday" doesn't have a stable truth value throughout the whole iteration of the probability experiment. Suppose that the Beauty really does not want to make false statements. Then deciding to say out loud "Today is Monday" leads to making a false statement in 100% of the iterations of the experiment where the coin is Tails.

P(today is Monday | heads) = 100% is fine. (Or is that tails? I keep forgetting.) P(today is Monday | tails) = 50% is fine too. (Or maybe it’s not? Maybe this is where I’m going wrong? Needs a bit of work but I suspect I could formalize that one if I had to.) But if those are both fine, we should be able to combine them, like so: heads and tails are mutually exclusive and one of them must happen, so: P(today is Monday) = P(heads) • P(today is Monday | heads) + P(tails) • P(today is Monday | tails) = 0.5 + 0.25 = 0.75. Okay, I was expecting to get 2/3 here. Odd. More to the point, this felt like cheating and I can’t put my finger on why. Maybe I need to think more later.

Here you are describing Lewis's model, which is appropriate for the Single Awakening Problem. There the Beauty is awakened on Monday if the coin is Heads, and if the coin is Tails, she is awakened either on Monday or on Tuesday (not both). It's easy to see that 75% of awakenings in such an experiment indeed happen on Monday.

It's very good that you noticed this feeling of cheating. This is a very important virtue. This is what helped me construct the correct model and solve the problem in the first place - I couldn't accept any other model, they all were somewhat off.

I think you feel this way because you've started solving the problem from the wrong end, arguing with the math instead of accepting it. You noticed that you can't define "Today is Monday" normally, so you just assumed as an axiom that you can.

But as soon as you assume that "Today is Monday" is a coherent event with a stable truth value throughout the experiment, you inevitably start talking about a different problem, where it's indeed the case. Where there is only one awakening in any iteration of probability experiment and so you can formally construct a sample space where "Today is Monday" is an elementary mutually exclusive outcome. There is no way around it. Either you model the problem as it is, and then "Today is Monday" is not a coherent event, or you assume that it is coherent and then you are modelling some other problem. 

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-04-15T09:30:01.560Z · LW · GW

The second one looks “obvious” from symmetry considerations but actually formalizing seems harder than expected.

Exactly! I'm glad that you actually engaged with the problem. 

The first step is to realize that here "today" can't mean "Monday xor Tuesday", because such an event never happens. On every iteration of the experiment both Monday and Tuesday are realized. So we can't say that the participant knows that they are awakened on Monday xor Tuesday.

Can we say that the participant knows that they are awakened on Monday or Tuesday? Sure. As a matter of fact:

P(Monday or Tuesday) = 1

P(Heads|Monday or Tuesday) = P(Heads) =  1/2

This works: here the probability that the coin is Heads in this iteration of the experiment happens to be the same as what our intuition tells us P(Heads|Today) is supposed to be. However, we still can't define "Today is Monday":

P(Monday|Monday or Tuesday) = P(Monday) = 1

Which doesn't fit our intuition. 

How can this be? How can we have a seemingly well-defined probability for "Today the coin is Heads" but not for "Today is Monday"? Either "Today" is well-defined or it's not, right? Take some time to think about it.

What do we actually mean when we say that on an awakening the participant is supposed to believe that the coin is Heads with 50% probability? Is it really about this day in particular? Or is it about something else?

The answer is: we actually mean that on any day of the experiment, be it Monday or Tuesday, the participant is supposed to believe that the coin is Heads with 50% probability. We cannot formally specify "Today" in this problem, but there is a clever, almost cheating, way to specify "Anyday" without breaking anything.

This is not easy. It requires a way to define P(A|B) when P(B) itself is undefined, which is unconventional. Moreover, it requires symmetry: P(Heads|Monday) has to be equal to P(Heads|Tuesday) - only then do we have a coherent P(Heads|Anyday).
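
A quick frequency sketch of that symmetry requirement (my own illustration, assuming the Two Awakenings Either Way setup where both awakenings happen regardless of the coin):

```python
# Frequencies of Heads among Monday and Tuesday awakenings.
import random

n = 100_000
# Two Awakenings Either Way: both days happen in every iteration.
taew_monday, taew_tuesday = [], []
# Sleeping Beauty: the Tuesday awakening happens only on Tails.
sb_monday, sb_tuesday = [], []
for _ in range(n):
    heads = random.random() < 0.5
    taew_monday.append(heads)
    taew_tuesday.append(heads)
    sb_monday.append(heads)
    if not heads:
        sb_tuesday.append(heads)

print(sum(taew_monday) / len(taew_monday))    # ~0.5
print(sum(taew_tuesday) / len(taew_tuesday))  # ~0.5 - symmetric, "Anyday" is coherent
print(sum(sb_monday) / len(sb_monday))        # ~0.5
print(sum(sb_tuesday) / len(sb_tuesday))      # 0.0  - asymmetric, no coherent "Anyday"
```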

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-04-14T18:52:41.348Z · LW · GW

First of all, that can’t possibly be right. 

I understand that it all may be somewhat counterintuitive. I'll try to answer whatever questions you have. If you think you have some way to formally define what "Today" means in Sleeping Beauty - feel free to try. 

Second of all, it goes against everything you’ve been saying for the entire series.

Seems very much in accordance with what I've been saying. 

Throughout the series I keep repeating the point that all we need to solve anthropics is to follow probability theory where it leads and then there will be no paradoxes. This is exactly what I'm doing here. There is no formal way to define "Today is Monday" in Sleeping Beauty and so I simply accept this, as the math tells me to, and then the "paradox" immediately resolves. 

Suppose someone who has never heard of the experiment happens to call sleeping beauty on her cell phone during the experiment and ask her “hey, my watch died and now I don’t know what day it is; could you tell me whether today is Monday or Tuesday?” (This is probably a breach of protocol and they should have confiscated her phone until the end, but let’s ignore that.).

Are you saying that she has no good way to reason mathematically about that question? Suppose they told her “I’ll pay you a hundred bucks if it turns out you’re right, and it costs you nothing to be wrong, please just give me your best guess”. Are you saying there’s no way for her to make a good guess? If you’re not saying that, then since probabilities are more basic than utilities, shouldn’t she also have a credence?

Good question. First of all, as we are talking about betting, I recommend you read the next post, where I explore it in more detail, especially if you are not fluent in expected utility calculations.

Secondly, we can't ignore the breach of the protocol. You see, if anything breaks the symmetry between awakenings, the experiment changes in a substantial manner. See Rare Event Sleeping Beauty, where the probability that the coin is Heads can actually be 1/3.

But we can make a similar situation without breaking the symmetry. Suppose that on every awakening a researcher comes to the room and offers the Beauty a bet on which day it currently is. At which odds should the Beauty take the bet?

This is essentially the same betting scheme as the ice-cream stand, which I deal with at the end of the previous comment.

Comment by Ape in the coat on Ackshually, many worlds is wrong · 2024-04-12T09:08:52.681Z · LW · GW

Sampling is not the way randomness is usually modelled in mathematics, partly because mathematics is deterministic and so you can't model randomness in this way

As a matter of fact, it is modeled this way. To define a probability function you need a sample space, from which exactly one outcome is "sampled" in every iteration of the probability experiment.

But yes, the math is deterministic, so it's not "true randomness" but pseudo-randomness; just like with every software library, it's a hidden-variables model rather than a Truly Stochastic model.
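
A toy demonstration of the point: a software random number generator is a deterministic function of hidden state (the seed).

```python
import random

random.seed(42)
first_run = [random.random() for _ in range(3)]

random.seed(42)  # same hidden state...
second_run = [random.random() for _ in range(3)]

assert first_run == second_run  # ...same "random" output
```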

And this is why I have trouble with the idea of "true randomness" being philosophically coherent. If there is no mathematical way to describe it, in which way can we say that it's coherent?

Like, the point of many-worlds theory in practice isn't to postulate that we should go further away from quantum mechanics by assuming that everything is secretly deterministic.

The point is to describe quantum mechanics as it is. If quantum mechanics is deterministic we want to describe it as deterministic. If quantum mechanics is not deterministic we do not want to describe it as deterministic. The fact that the many-worlds interpretation describes quantum mechanics as deterministic can be considered "going further from quantum mechanics" only if it's, in fact, not deterministic, which is not known to be the case. QM just has a vibe of "randomness" and "indeterminism" around it, due to historical reasons, but whether it is actually deterministic or not is an open question.

Comment by Ape in the coat on The Closed Eyes Argument For Thirding · 2024-04-10T09:29:51.286Z · LW · GW

You are already aware of this, but for the benefit of other readers I'll mention it anyway.

In this post I demonstrate that the narrative of betting arguments validating thirdism is generally wrong, and is just a result of the fact that the first and therefore most popular halfer model is wrong.

Both thirders and halfers, following the correct model, make the same bets in Sleeping Beauty, though for different reasons. The disagreement is about how to factorize the product of the probability and the utility of an event.

And if we investigate a bit deeper, the halfer way to do it makes more sense, because its utilities do not shift back and forth during the same iteration of the experiment.

Comment by Ape in the coat on The Closed Eyes Argument For Thirding · 2024-04-10T08:39:31.335Z · LW · GW

You would violate conservation of expected evidence if 

P(Monday) + P(Tuesday) = 1 

However this is not the case because P(Monday) = 1 and P(Tuesday) = 1/2

Comment by Ape in the coat on The Closed Eyes Argument For Thirding · 2024-04-10T08:38:55.509Z · LW · GW

I'm a bit surprised that you think this way, considering that you've basically solved the problem yourself in this comment.

P(Heads & Monday) = P(Tails & Monday) = 1/2

P(Tails & Monday) = P(Tails&Tuesday) = 1/2

Because Tails&Monday and Tails&Tuesday are the exact same event.

The mistake that everyone seems to be making is thinking that Monday/Tuesday mean "This awakening is happening during Monday/Tuesday". But such events are ill-defined in the Sleeping Beauty setting. On Tails both the Monday and Tuesday awakenings are supposed to happen in the same iteration of the probability experiment, and the Beauty is fully aware of that, so she can't treat them as individual mutually exclusive outcomes.

You can only lawfully talk about "In this iteration of probability experiment Monday/Tuesday awakening happens".

In this post I explain it in more detail.

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-04-09T04:05:15.992Z · LW · GW

Meta: the notion of writing probability 101 wasn't addressed to you specifically. It was a release of my accumulated frustration from not-particularly-productive arguments with several different people, which again and again led to the realization that the crux of disagreement lies in the most basic concepts - and you are only one of those people.

You are confusing to talk to, with your manner of raising seemingly unrelated points and then immediately dropping them. And yet you didn't deserve the full emotional blow that you apparently received, and I'm sorry about it.

Writing a probability 101 seems to me a constructive solution to such situations, anyway. It would provide an opportunity to resolve these kinds of disagreements as soon as they arise, instead of having to backtrack to them from a very specific topic. I may still add it to my todo list.

Ah yes, clearly, the problem is that I don't understand basic probability theory. (I'm a bit sad that this conversation happened to take place with my pseudonymous account.) In my previous comment, I explicitly prepared to preempt your confusion about seeing the English word 'experiment' with my paragraph (the part of it that you, for some reason, did not quote), and specifically linking a wiki which only contains the mathematical part of 'probability', and not philosophical interpretations that are paired with it commonly, but alas, it didn't matter.

I figured that either you don't know what "probability experiment" is or you are being confusing on purpose. I prefer to err in the direction of good faith, so the former was my initial hypothesis.

Now, considering that you admit that you were perfectly aware of what I was talking about, to the point where you specifically tried to cherry-pick around it, the latter became more likely. Please don't do it anymore. Communication is hard as it is. If you know what a well-established thing is, but believe it's wrong - just say so.

Nevertheless, from this exchange, I believe, I now understand that you think that "probability experiment" isn't a mathematical concept, but a philosophical one. I could just accept this for the sake of the argument, and we would be in a situation where we have a philosophical consensus about an issue, to a point where it's a part of standard probability theory course that is taught to students, and you are trying to argue against it, which would put quite some burden of proof on your shoulders.

But, as a matter of fact, I don't see anything preventing us from formally defining "probability experiment". We already have a probability space. Now we just need a variable going from 1 to infinity for the iteration of the probability experiment, and a function which takes the sample space and the value of this variable as input and returns the one outcome that is realized in this particular iteration.
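
A minimal sketch of this formalization (the names are mine; only the idea is from the paragraph above):

```python
# A probability space plus a realization function over iterations.
import random

sample_space = ["Heads", "Tails"]
probability = {"Heads": 0.5, "Tails": 0.5}

def realize(iteration: int) -> str:
    """Return the single outcome realized in the given iteration."""
    rng = random.Random(iteration)  # the iteration index is the only input
    weights = [probability[o] for o in sample_space]
    return rng.choices(sample_space, weights=weights)[0]

# Exactly one outcome per iteration, for iterations 1, 2, 3, ...
outcomes = [realize(i) for i in range(1, 11)]
print(outcomes)
```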

I said that I can translate the math of probability spaces to first order logic, and I explicitly said that our conversation can NOT be translated to first order logic as proof that it is not about math

Sorry, I misunderstood you. 

Also a reminder that you still haven't addressed this:

If a mathematical probabilistic model fits some real world process - then the outcomes it produces have to have the same statistical properties as the outcomes of the real world process.

If we agree on this philosophical statement, then we reduced the disagreement to a mathematical question, which I've already resolved in the post. If you disagree, then bring up some kind of philosophical argument which we will be able to explore.

Anyway, are you claiming that it's impossible to formalize what "today" in "today the coin is Heads" means even in the No-Coin-Toss problem? Why are you so certain that people have to have credence in this statement, then? Would you then be proven wrong if I indeed formally specified what "Today" means?

Because, as I said, it's quite easy. 

Today = Monday xor Tuesday

P(Today) = P(Monday xor Tuesday) = 1

P(Heads|Today) = P(Heads|Monday xor Tuesday) = P(Heads) = 1/3

Likewise we can talk about "Today is Monday":

P(Monday|Today) = P(Monday|Monday xor Tuesday) = P(Monday) = 1/2

Now, do you see, why this method doesn't work for Two Awakenings Either Way and Sleeping Beauty problems?
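The contrast can be seen in a small sketch (the day probabilities here are an assumption; the point is only the structure):

```python
# When exactly one day is realized per iteration, "Today" picks out
# a definite outcome and its frequency defines a probability.
import random

n = 100_000
monday_count = 0
for _ in range(n):
    day = random.choice(["Monday", "Tuesday"])  # exactly one day realized
    if day == "Monday":
        monday_count += 1

print(monday_count / n)  # ~0.5, matching P(Monday|Monday xor Tuesday) above
# In Sleeping Beauty there is no such sampling: on Tails both days occur
# within the same iteration, so "Today" has nothing definite to pick out.
```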

If you are not ready to accept that people have various levels of belief in the statement "Today is Monday" at all times, then I don't think this conversation can go anywhere, to be honest. This is an extremely basic fact about reality.

In reality people may have all kinds of confused beliefs and ill-defined concepts in their heads. But the question of the Sleeping Beauty problem is about what an ideal rational agent is supposed to believe. When I say "Beauty does not have such credence" I mean that an ideal rational agent ought not to - the probability of such an event is ill-defined.

As you may have noticed, I've successfully explained the difference in real-life beliefs about optimal actions in the ice-cream stand scenario without using such ill-defined probabilities.

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-04-08T08:55:15.389Z · LW · GW

This whole conversation isn't about math. It is about philosophy.

The tragedy of the whole situation is that people keep thinking that. 

Everything is "about philosophy" until you find a better way to formalize it. Here we have a better way to formalize the issue, which you keep ignoring. Let me spell it out for you once more:

If a mathematical probabilistic model fits some real world process - then the outcomes it produces have to have the same statistical properties as the outcomes of the real world process.

If we agree on this philosophical statement, then we reduced the disagreement to a mathematical question, which I've already resolved in the post. If you disagree, then bring up some kind of philosophical argument which we will be able to explore.

If you are a layman

I'm not. And frankly, it baffles me that you think that you need to explain that it's possible to talk about math using natural language, to a person who has been doing it for multiple posts in a row.

mathematical objects themselves have no concept of 'experiment' or 'time' or anything like those.


https://en.wikipedia.org/wiki/Experiment_(probability_theory)

The more I post about anthropics, the clearer it becomes that I should've started with posting about probability theory 101. My naive hopes that the average LessWrong reader is well familiar with the basics and just confused about more complicated cases are crushed beyond salvation.

Can a probability space model a person's beliefs at a certain point in time?

This question is vague, in a similar manner to what I've seen in Lewis's paper. Let's specify it so that we both understand what we are talking about.

Did you mean to ask 1 or 2:

  1. Can a probability space model some person's belief in some circumstance at some specific point in time at all?
  2. Can a probability space always model any person's belief in any circumstances at any unspecified point in time?

The way I understand it, we agree on 1 but disagree on 2. There are definitely situations where you can correctly model uncertainty about time via probability theory. As a matter of fact, it's most of the cases. You won't be able to resolve our disagreement by pointing to such situations - we agree on them.

But you seem to have generalized that it means that probability theory always has to be able to do it. And I disagree. A probability space can model only the aspects of reality that can be expressed in terms of it. If you want to express uncertainty between "today is Monday" and "today is Tuesday", you need a probability space in which Monday and Tuesday are mutually exclusive outcomes. And it's possible to design a specific setting - like the one in Sleeping Beauty - where they are not: where on the same trial both Monday and Tuesday are realized and the participant is well aware of it.

In particular, Beauty, when awoken, has a certain credence in the statement "Today is Monday."

No, she does not. And it's easy to see if you actually try to formally specify what is meant here by "today" and what is meant by "today" in regular scenarios. Consider me calling your bluff about being ready to translate to first order logic at any moment.

Let's make it three different situations: 

  1. No-Coin-Toss problem.
  2. Two awakenings with memory loss, regardless of the outcome of the coin.
  3. Regular Sleeping Beauty

Your goal is to formally define "today" using first order logic, so that a person participating in such experiments could coherently talk about the event "today the coin is Heads".

My claim is: it's very easy to do so in 1. It's harder, but still doable, in 2. And it's not possible to do so in 3 without contradicting the math of probability theory.

setting up an icecream stand which is only open on Monday in one direction from the lab, another in the opposite direction which is only open on Tuesday and making this fact known to subjects of an experiment who are then asked to give you icecream and observe where they go

This is not a question simply about probability/credence. It also involves utilities, and it's implicitly assumed that the participant prefers to walk a shorter distance rather than a longer one. Essentially you propose a betting scheme where:

P(Monday)U(Monday) = P(Tuesday)U(Tuesday)

According to my model P(Monday) = 1, P(Tuesday) = 1/2, so:

2U(Monday) = U(Tuesday), therefore the odds are 2:1. As you see, this deals with such situations without any problem.
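
As a sanity check, here is a quick simulation of the two walking strategies, under my reading of the setup (the Monday stand is open on every Monday awakening, the Tuesday stand only on Tails):

```python
import random

n = 100_000
monday_ward = 0   # ice creams from always walking toward the Monday stand
tuesday_ward = 0  # ice creams from always walking toward the Tuesday stand
for _ in range(n):
    tails = random.random() < 0.5
    monday_ward += 1       # the Monday awakening happens every iteration
    if tails:
        tuesday_ward += 1  # the Tuesday awakening happens only on Tails

print(monday_ward / tuesday_ward)  # ~2, reproducing the 2:1 odds
```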

Comment by Ape in the coat on Another Non-Anthropic Paradox: The Unsurprising Rareness of Rare Events · 2024-04-08T05:05:09.340Z · LW · GW

What she is really surprised about however, is not that she has observed an unlikely event ({HHTHTHHT}), but that she has observed an unexpected pattern.

Why do you oppose these two things to each other? Talking about patterns is just another way to describe the same fact.

In this case, the coincidence of the sequence she had in mind and the sequence produced by the coin tosses constitutes a symmetry which our mind readily detects and classifies as such a pattern.

Well, yes. Or you can say that having a specific combination in mind allowed her to observe the event "this specific combination" instead of "any combination". Once again, this is just using different language to talk about the same thing.

One could also say that she has not just observed the event {HHTHTHHT} alone, but also the coincidence which can be regarded as an event, too. Both events, the actual coin toss sequence and the coincidence, are unlikely events and both become extremely unlikely with longer sequences.

Oh! Are you saying that she has observed the intersection of two rare events: "HHTHTHHT was produced by coin tossing" and "HHTHTHHT was the sequence that I came up with in my mind", both of which have probability 1/2^8, so now she is surprised as if she observed an event with probability (1/2^8)^2?

That's not actually the case. If the person came up with some other combination and then it was realized in the coin tosses, the surprise would be the same - there are 2^8 degrees of freedom here, one for every possible combination of Heads and Tails with length 8. So the probability of the observed event is still 1/2^8.
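
This is easy to check by simulation (the particular guessed sequence is arbitrary):

```python
import random

def toss_sequence(k: int) -> str:
    return "".join(random.choice("HT") for _ in range(k))

n = 200_000
guessed = "HHTHTHHT"  # whatever sequence she came up with in her mind
matches = sum(toss_sequence(8) == guessed for _ in range(n))
print(matches / n, 1 / 2**8)  # both ~0.0039, for any choice of `guessed`
```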

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-04-07T08:10:52.699Z · LW · GW

I meant to show you that if you don't start out with "centered worlds don't work", you CAN make it work

The clever way isn't that clever to be honest. It's literally just: don't assume that it does not work and try it.

I didn't start believing that "centred worlds don't work". I suspect you got this impression mostly because you were reading the posts in the wrong order. I started by trying the existing models, noticed that they behave weirdly if we assume that they are describing Sleeping Beauty, and then realized that they are actually talking about different problems - for which their behavior is completely normal.

And then, while trying to understand what was going on, I stumbled upon the notion of centred possible worlds and their complete lack of mathematical justification, and it opened my eyes. And then I was immediately able to construct the correct model, which completely resolves the paradox, adds up to normality and has no issues whatsoever.

But in hindsight, starting from the assumption that centred possible worlds do not work would have been the smart thing to do, and it would have saved me a lot of time.

With my previous comment I meant to show you that if you don't start out with "centered worlds don't work", you CAN make it work (very important: here, I haven't yet said that this is how it works or how it ought to work, merely that it CAN work without some axiom of probability getting hurt).

Well, you didn't. All this time you've just been insisting on a privileged treatment for them: "can work until proven otherwise". Now, that's not how math works. If you come up with some new concept, be so kind as to prove that it is a coherent mathematical entity and establish its properties. I'm more than willing to listen to such attempts. The problem is - there are none. People just seem to think that saying "first person perspective" allows them to build a sample space from non-mutually-exclusive outcomes.

Still, I struggle to see what your objection is apart from your intuition that "NO! It can't work!"

It's like you didn't even read my posts or my comments.

By the definition of a sample space, it can be constructed only from elementary outcomes, which have to be mutually exclusive. Tails&Monday and Tails&Tuesday are not mutually exclusive - they happen to the same person in the same iteration of the probability experiment during the same outcome of the coin toss. The "centredness" framework attempts to treat them as elementary outcomes regardless. Therefore, it contradicts the definition of a sample space.

This is what statistical analysis clearly demonstrates. If a mathematical probabilistic model fits some real world process - then the outcomes it produces have to have the same statistical properties as the outcomes of the real world process. All "centred" models produce outcomes with different properties compared to what actually running the Sleeping Beauty experiment would do. Therefore they do not correctly fit the Sleeping Beauty experiment.
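
A condensed version of that statistical test, as I run it (a sketch, with the three awakening-states written out explicitly):

```python
import random

def actual_run():
    """Awakenings produced by one run of the real experiment."""
    if random.random() < 0.5:
        return ["Heads&Monday"]
    return ["Tails&Monday", "Tails&Tuesday"]  # always in this order

def centred_run():
    """One awakening sampled as if the states were mutually exclusive."""
    return [random.choice(["Heads&Monday", "Tails&Monday", "Tails&Tuesday"])]

actual = [a for _ in range(10_000) for a in actual_run()]
centred = [a for _ in range(10_000) for a in centred_run()]

def tuesday_follows_monday(seq):
    pairs = [seq[i] == "Tails&Monday" and seq[i + 1] == "Tails&Tuesday"
             for i in range(len(seq) - 1)]
    return sum(pairs) / seq.count("Tails&Monday")

print(tuesday_follows_monday(actual))   # 1.0  - the order is always kept
print(tuesday_follows_monday(centred))  # ~0.33 - the order is lost
```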

I want to argue how it CAN work in another way with credences/centeredness/bayesianism.

If you want to understand how centered world/credence/bayesian epistemology works

Don't mix bayesianism and credences with this "centredness" nonsense. Bayesianism is not in trouble - I've been appealing to Bayes' theorem a lot throughout my posts and it's been working just fine. Likewise, credence in an event is simply its probability conditional on all the evidence - I'm exploring all manner of conditional probabilities in my model. Bayesianism and credences are not some "other way". It is the exact same way. It's probability theory. "Centredness" is not.

experiment isn't a good word, because it might lock you into a third-person view

Your statistical analysis of course also assumes the third-person

I don't understand what you mean by "third-person view" here, and I suspect neither do you. 

The statistical test is very much about Beauty's perspective - only awakenings that she experiences are noted down, not all the states of the experiment. Heads&Tuesday isn't added to the list, which would be the case if we were talking about the third-person perspective.

On the other hand, when you were talking about justifying an update on awakening, you treated the situation from the observer perspective - someone who has a non-zero probability for the Heads&Tuesday outcome, could realistically not observe the Beauty being awakened, and therefore updates upon seeing her indeed awaken.

"Centred" models do not try to talk about Beauty's perspective. They are treating different awakened states of the Beauty as if they are different people, existing independently of each other, therefore contradicting the conditions of the setting, according to which all the awakenings are happening to the same person. Unless, of course, there is some justification why treating Beauty's awakened states this way is acceptable. The only thing resembling such justification, that I've encountered, is vaguely pointing towards the amnesia that the Beauty is experiencing, with which I deal in the section Effects of Amnesia. If there is something else - I'm open to consider it, but the initial burden of proof is on the "centredness" enthusiasts.

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-04-06T05:07:09.336Z · LW · GW

I'll start by addressing the actual crux of our disagreement.

You often make this mistake in the text, but here it's too important not to mention that "Awake" does not mean that "Beauty is awakened.", it means that "Beauty is awake" (don't forget that centeredness!) and, of course, Beauty is not awake if it is Tuesday and the coin is heads.

As I've written in this post, you can't just say the magical word "centredness" and think that you've solved the problem. If you want a model that can have an event which changes its truth value with the passage of time during the same iteration of the probability experiment - you need to formally construct such a model, rewriting all of probability theory from scratch, because our current probability theory doesn't allow that.

In probability theory, one outcome of a sample space is realized per iteration of the experiment. And so, for this iteration of the experiment, every event which includes this outcome is considered True. All the "centred" models, therefore, behave as if Sleeping Beauty consists of two outcomes of the probability experiment - as if Monday and Tuesday happen at random, and as if, to determine whether the Beauty has another awakening, the coin is tossed anew. Because of this they contradict the conditions of the experiment, according to which the Tails&Tuesday awakening always happens after Tails&Monday, which is shown in the Statistical Analysis section. It's a model for a random awakening, not for the current awakening, because the current awakening is not random.

So no, I do not make this mistake in the text. This is the correct way to talk about Sleeping Beauty. The event "The Beauty is awakened in this experiment" is properly defined. The event "The Beauty is awake on this particular day" is not, unless you find some new clever way to define it - feel free to try.

Consider the following problem: "Forgetful Brandon"

I must say, this problem is very unhelpful to this discussion. But sure, let's analyze it regardless.

I hope you agree that Brandon not actually doing the Bayesian calculation is irrelevant to the question.

I suppose? Such questions are usually about ideal rational agents, so yes, it shouldn't matter what a specific non-ideal agent does. But then why even add this extra complication to the question if it's irrelevant?

Anytime Brandon updates he predictably updates in the direction of HEADS

Well, that's his problem, honestly. I thought we agreed that what he does is irrelevant to the question.

Also, his behavior here is not as bad as what you want the Beauty to do - at least Brandon doesn't update in favor of Heads on literally every iteration of the experiment.

should we point out a failure of conservation of expected evidence?

I mean, if we want to explain Brandon's failure at rationality - we should. The reason why Brandon's behaviour is not rational is exactly that - he fails at conservation of expected evidence. There are two possible signals that he may receive: "Yay" and "no yay and getting ice cream". These signals are differently correlated with the outcome of the coin toss. If he behaved rationally he would update on both of them in opposite directions, thereby following the conservation of expected evidence.

In principle, it's possible to construct a better example, where Brandon doesn't update not because of his personal flaws in rationality, but due to the specifics of the experiment. For example, if he couldn't be sure when exactly Adam is supposed to shout. Say, Adam intended to shout one minute after he saw the result of the coin toss, but Brandon doesn't know that; according to his information, Adam shouts "Yay" within an interval of three minutes since the coin was tossed. And so after just one minute he is still waiting, not having updated.

But then it won't be irrelevant to the question, as you seem to want it to be for some reason.

I don't see why you object to Sleeping Beauty not doing the calculation in case she is not awakened. (Which is the only objection you wrote under the "Freqency Argument" model)

I do not object to the fact that the Beauty doesn't do the calculation in case she is not awakened - she literally can't do it due to the setting of the experiment.

I object to the Beauty predictably updating in favor of Tails when she awakens in every iteration of the experiment, which is a blatant contradiction of conservation of expected evidence. The Updating model, as a whole, describes the Observer Sleeping Beauty problem, where the observer can legitimately not see that the Beauty is awake, and therefore an update on awakening is lawful.

Which is the only objection you wrote under the "Frequency Argument" model

See also Towards the Correct Model, where I point to the core mathematical flaw of the Frequency Argument - ignoring the fact that it works only when P(Heads|Awake) = 1/2, which is wrong for Sleeping Beauty. And, of course, the Updating model fails the Statistical Analysis like every other "centred" model.

Uninformed Sleeping Beauty

When the Beauty doesn't know the actual setting of the experiment she has a different model, fitting her uninformed state of knowledge. When she is told what is actually going on, she discards it and starts using the correct model from this post.

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-04-04T16:40:18.652Z · LW · GW

Again, that depends.

I think I talk about something like what you point to here:

If I forget what the current day of the week is in my regular life, well, it's only natural to start from a 1/7 prior per day and work from there. I can do it because the causal process that leads to me forgetting such information can be roughly modeled as a low-probability occurrence which can happen to me on any day.

It wouldn't be the case if I was guaranteed to also forget the current day of the week on the next 6 days as well, after I forgot it on the first one. This would be a different causal process with different properties - causation between the instances of forgetting - and it has to be modeled differently. But we do not actually encounter such situations in everyday life, and so our intuition is caught completely flat-footed by them.

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-04-04T16:16:35.143Z · LW · GW

I've started at your latest post and recursively tried to find where you made a mistake

I think you'd benefit more if you read them in the right order, starting from here.

Philosophers answer "Why not?" to the question of centered worlds because nothing breaks and we want to consider the questions of 'when are we now?' and 'where are we now?'.

Sure, we want a lot of things. But apparently we can't always have everything we want. To preserve the truth of statements we need to follow the math wherever it leads and not push it where we would like it to go. And where the math goes - that's what we should want.

Am I understanding you correctly that you reject P(today is Monday) as a valid probability in general (not just in sleeping beauty)?

This post refers to several alternative problems where P(today is Monday) is a coherent probability, such as the Single Awakening and No-Coin-Toss problems, which were introduced in the previous post. And here I explain the core principle: when there is only one day that is observed in one run of the experiment, you can coherently define what "today" means - the day from this iteration of the experiment. A random day. Monday xor Tuesday.

This is how wrong models try to treat Monday and Tuesday in Sleeping Beauty. As if they happen at random. But they do not. There is an order between them, and so they can't be treated this way. Today can't be Monday xor Tuesday, because on Tails both Monday and Tuesday do happen.

As a matter of fact, there is another situation where you can coherently talk about "today", which I initially missed. "Today" can mean "any day". So, for example, in Technicolor Sleeping Beauty from the next post, you can have a coherent expectation to see red with 50% and blue with 50% on the day of your awakening, because for every day it's the same. But you still can't talk about "the probability that the coin is Heads today", because on Monday and Tuesday these probabilities are different.

So in practice, the limitation is only about Sleeping Beauty type problems, where there are multiple awakenings with memory loss in between per iteration of the experiment, and no consistent probabilities for every awakening. But generally, I think it's always helpful to understand what exactly you mean by "today" in any probability theory problem.

axiomatically deciding that 1/3 is the wrong probability for sleeping beauty

I do not decide anything axiomatically. But I notice that the existing axioms of probability theory do not allow a predictable update in favor of Tails in 100% of iterations of the experiment, nor do they allow a fair coin toss to have an unconditional probability of Heads equal to 1/3.

And then I notice that the justification that people came up with for such situations, about a "new type of evidence" that a person receives, is based on nothing but some philosopher wanting it to be this way. He didn't come up with any new math, didn't prove any theorems. He simply didn't immediately notice any contradictions in his reasoning. And when an example was brought up, he simply doubled down. Suffice to say, that's absolutely not how anything is supposed to work.

if everything else seems to work, is it not much simpler to accept that 1/3 is the correct answer and then you don't have to give up considering whether today is Monday?

If everything actually worked then the situation would be quite different. However, my previous post explores how every attempt to model the Sleeping Beauty problem based on the framework of centred possible worlds fails one way or another.

You can also clearly see it in the Statistical Analysis section of this post. I don't see how this argument can be refuted, frankly. If you treat Tails&Monday and Tails&Tuesday as different elementary outcomes, then you can't possibly keep their correct order, and it's in the definition of the experiment that on Tails the Monday awakening is always followed by the Tuesday awakening, and that the Beauty is fully aware of it. Events that happen in sequence can't be mutually exclusive and vice versa. I even formally prove it in the comments here.

And so, we can just accept that Tails&Monday and Tails&Tuesday are the same outcome of the probability space and suddenly everything adds up to normality. No paradox, no issues with statistical analysis, no suboptimal bets, no unjustified updates and no ungrounded philosophical handwaving. Seems like the best deal to me!

Comment by Ape in the coat on Should you refuse this bet in Technicolor Sleeping Beauty? · 2024-04-04T16:07:07.475Z · LW · GW

Well done!

"Halfer" and "thirder" are about the answer to the initial question of the Sleeping Beauty problem: what is the probability that the coin landed Tails, when you awake in the experiment?

Comment by Ape in the coat on Should you refuse this bet in Technicolor Sleeping Beauty? · 2024-04-04T15:15:32.222Z · LW · GW

Yes, you are correct, thanks!

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-04-02T17:16:34.526Z · LW · GW

A probability experiment is a repeatable process

On every iteration we have exactly one outcome from the sample space that is realized. And every event from the event space which includes this outcome is also assumed to be realized. When I say "experiment" I mean a particular iteration of it, yes, because one run of the Sleeping Beauty experiment corresponds to one iteration of the probability experiment. I hope this clears up the possible misunderstanding.

THE OBSERVATION IS NOT AN EVENT

An event is not an outcome; it's a set of one or more outcomes from the sample space, and it has to belong to the event space.

What you mean by "observation" is a bit of a mystery. Try tabooing it - after all, a probability space consists only of a sample space, an event space and a probability function; there is no need to invoke an extra category for no reason.
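
To make the taboo concrete, here is the whole triple written out for a single coin toss (a minimal sketch):

```python
from itertools import chain, combinations

sample_space = {"Heads", "Tails"}

def powerset(s):
    """All subsets of the sample space - the largest possible event space."""
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

event_space = powerset(sample_space)
probability = {e: len(e) / 2 for e in event_space}  # uniform measure

assert probability[frozenset()] == 0                    # P(impossible) = 0
assert probability[frozenset({"Heads", "Tails"})] == 1  # P(whole space) = 1
```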

A common way to avoid rebuttal

It's also a common way to avoid unnecessary tangents. Don't worry, we will be back to it as soon as we deal with the more interesting issue, though I suspect that by then you will be able to resolve your confusion yourself.

No, that's how you try to misinterpret my version to fit your incorrect model. You use the term for Elga's one-coin version as well. Strawman arguments are another avoidance technique.

I don't think that correcting your misunderstanding about my position can be called "strawmanning". If anything, it is unintentional strawmanning from your side, but don't worry, no offence taken.

Yes, the one-coin version has the exact same issue, where the sequential awakenings Tails&Monday and Tails&Tuesday are often treated as disconnected mutually exclusive outcomes.

But anyway, it's kind of pointless to talk about it at this point, when you've already agreed to the fact that the correct sample space for the two-coin version is {HT_HH, TT_TH, TH_TT, HH_HT}. We agree on the model; let's see where it leads.

Huh? What does "connect these pairs" mean to pairs that I already connected?

It means that you've finally done the right thing, of course! You've stopped talking about individual awakenings as if they are themselves mutually exclusive outcomes, and realized that you should be talking about the pairs of sequential awakenings, treating them as a single outcome of the experiment. Well done!

No, I am not.

But apparently you still don't exactly understand the full consequences of it. That's okay - you've already done the most difficult step; I think the rest will be easier.

I am saying that I was able to construct a valid, and useful, sample space

And indeed you did! Once again - good job! But let's take a minute and understand what it means. 

Suppose that in a particular instance of the experiment the outcome TT_TH happened. What does it mean for the Beauty? It means that she is awakened the first time before the second coin was turned, and then awakened the second time after the coin was turned. This outcome encompasses both her awakenings.

Likewise, when the outcome HT_HH happens, the Beauty is awakened before the coin turn and is not awakened after the coin turn. This outcome describes both her awakened state and her sleeping state.

And so on with the other two outcomes. Are we on the same page here?

If there were no amnesia, the Beauty could easily distinguish between the outcomes where she awakes twice and where she awakes only once. But with amnesia she is none the wiser. At the moment of awakening they feel exactly the same to her.

The thing you need to properly acknowledge is that in the probability space you've constructed, P(Heads) doesn't attempt to describe the probability of the first coin being Heads on this awakening. Once again - awakenings are not treated as outcomes themselves anymore. Now it describes the probability that the coin is Heads in this iteration of the experiment as a whole.

I understand that this may be counterintuitive for you if you've grown accustomed to the heresy of centred possible worlds. This is fine - take your time. Play with the model a bit, see what kinds of events you can express with it and how it relates to betting, and make yourself accustomed to it. There is no rush.

I am describing how she knows that she is in either the first observation or the second.

You've described two pairs of mutually exclusive events. 

{HT_HH, TT_TH, TH_TT}; {HH_HT} - Beauty is awakened before the coin turn; Beauty is not awakened before the coin turn 

{HH_HT, TT_TH, TH_TT}; {HT_HH} - Beauty is awakened after the coin turn; Beauty is not awakened after the coin turn.

Feel free to validate that it's indeed what these events are. 

And you correctly notice that 

P(Heads|HT_HH, TT_TH, TH_TT) = 1/3

and

P(Heads|HH_HT, TT_TH, TH_TT) = 1/3

Once again, I completely agree with you! This is a correct result that we can validate through a betting scheme. A Beauty that bets on Tails exclusively when she is awoken before the coin is turned is correct in 66% of iterations of the experiment. A Beauty that bets on Tails exclusively when she is awoken after the coin is turned is also correct in 66% of iterations of the experiment. Once again, you do not have to trust me here; you are free to check this result yourself via a simulation.

And from this you assumed that the Beauty can always reason that the awakening that she is experiencing either happened before the second coin was turned or after the second coin was turned and therefore P(Heads|(HT_HH, TT_TH, TH_TT), (HH_HT, TT_TH, TH_TT)) = 1/3.

But this is clearly wrong, which is very easy to see.

First of all 

P(Heads|(HT_HH, TT_TH, TH_TT), (HH_HT, TT_TH, TH_TT)) = P(Heads|HT_HH, TT_TH, TH_TT, HH_HT)

Which is 1/2, because it is the probability of Heads conditional on the whole sample space, where exactly half of the outcomes are such that the first coin is Heads. But we may also appeal to a betting argument: a Beauty that simply bets on Tails every time is correct only in 50% of experiments. This is a well-known result - per-experiment betting in Sleeping Beauty should be done at 1:1 odds. But you are, nevertheless, also free to validate it yourself if you wish.
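
If you want to run the check yourself, here is my reconstruction of the two-coin setup in code (two coins are tossed; the Beauty is awakened in any state other than HH; between the two possible awakenings the second coin is turned over):

```python
import random

def turned(c):  # turning a coin over reverses its face
    return "T" if c == "H" else "H"

n = 100_000
before, after, either = [], [], []
for _ in range(n):
    c1, c2 = random.choice("HT"), random.choice("HT")
    awake_before = (c1, c2) != ("H", "H")
    awake_after = (c1, turned(c2)) != ("H", "H")
    if awake_before:
        before.append(c1 == "H")
    if awake_after:
        after.append(c1 == "H")
    if awake_before or awake_after:  # true in every single iteration
        either.append(c1 == "H")

print(sum(before) / len(before))  # ~1/3
print(sum(after) / len(after))    # ~1/3
print(sum(either) / len(either))  # ~1/2 - conditioning on the union differs
```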

With me so far? 

Now you have an opportunity to find the mistake in your reasoning yourself. It's an actually interesting result with fascinating consequences, by the way. And I don't think that many people properly understand it, based on the current level of discourse about Sleeping Beauty and anthropics as a whole. So, even though it's going to be a bit embarrassing for you, you will also discover a rare and curious new piece of knowledge as compensation.

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-04-01T13:46:12.342Z · LW · GW

With these definitions, we can see that the SB Problem is one random experiment with a single result.

Yes! I'm so glad you finally got it! And the fact that you simply needed to remind yourself of the foundations of probability theory validates my suspicion that it's indeed the solution for the problem. You may want to reread the post and notice that this is exactly what I've been talking about the whole time.

Now, I ask you to hold in mind the fact that "SB Problem is one random experiment with a single result". We are going to use this realization later.

That result is observed twice (yes, it is; remaining asleep is an observation of a result that we never make use of, so awareness as it occurs is irrelevant

This is false, but not crucial. We can postpone this for later.

What you call "sequential events" are these two separate observations of the same result.

No, what I call sequential events are the pairs HH and HT, TT and TH, corresponding to the exact awakenings, which can't be treated as individual outcomes.

The sample space for the experiment is {HH1_HT2, HT1_HH2, TH1_TT2, TT1_TH2}.

On the other hand, as soon as you connect these pairs and get HH_HT, HT_HH, TT_TH and TH_TT, they totally can form a sample space, which is exactly what I told you in this comment. As soon as you've switched to this sound sample space, we are in agreement.

 Each outcome has probability 1/4. The first observation establishes the condition as {HT1_HH2, TH1_TT2, TT1_TH2} and its complement as {HH1_HT2}. Conditional probability says the probability of {HT1_HH2} is 1/3. The second observation establishes the condition as {HH1_HT2, TH1_TT2, TT1_TH2} and its complement as {HT1_HH2}. Conditional probability says the probability of {HH1_HT2} is 1/3.

You are describing a situation where the Beauty was told whether she is experiencing an awakening before the second coin was turned or not. If the Beauty awakens and learns that it's the awakening before the coin was turned, she indeed can reason that she has observed the event {HT1_HH2, TH1_TT2, TT1_TH2} and that the probability that the first coin is Heads is 1/3. This, mind you, is not the sneaky thirder idea of probability, where P(Heads) can be 1/3 even though the coin is Heads in 1/2 of the experiments. This is the actual probability that the coin is Heads in this experiment. Remember the thing I asked you to hold in mind: our mathematical model doesn't attempt to describe the individual awakening anymore, as you may be used to; it describes the experiment as a whole. Let this thought sink in.

The Beauty who learned that she is awakened before the coin was turned can bet on Tails and win with 66% chance per experiment. So she should accept per-experiment betting odds of up to 1:2 - which isn't usually a good idea in Sleeping Beauty when you do not have any extra information about the state of the coin.

Likewise, if she knows for certain that she is experiencing an awakening after the second coin was turned. The same logic applies. She can lawfully update in favor of Tails and win per experimental bets on Tails with 66% probability.

And the point is that it does not matter which observation corresponds to SB being awake, since the answer is 1/3 regardless.

So one might think. But strangely enough, it doesn't work this way. If the Beauty awakens without learning whether she is experiencing the awakening before the coin turn or after it, she can't just reason that, whatever awakening she is experiencing, the probability is 1/3, and win per-experiment bets with 66% probability. She will be right only in 50% of experiments. As it turns out:

P(Heads|Before Coin Turn) = P(Heads|After Coin Turn) = 1/3

however

P(Heads|Before or After Coin Turn) = 1/2

How can this be the case? 

I could walk you through the solution to this truly fascinating problem, but you've demonstrated much better ability to arrive at the correct(ish) answer when you are doing it on your own than when I give you all the answers - so feel free to engage with this problem on your own. I believe you do have all the pieces of the puzzle now, and the only reason you haven't completed it yet is because you've seen the number "1/3", decided that it validates thirdism and refused to think further. But now you know that it's not the case, so your motivated reasoning is less likely to be stopping you.

As a matter of fact, thirdism is completely oblivious to the fact that there can be situations in Sleeping Beauty where betting per experiment at up to 1:2 odds may be a good idea. So you are discovering some new ground here!
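For anyone who wants to verify these numbers rather than take my word for it, here is a minimal simulation sketch of the two-coin setup as described above (Python; the function and variable names are mine):

```python
import random

def simulate(n=100_000):
    # Count per experiment, not per awakening: among experiments where a
    # given kind of awakening happens, how often is the first coin Heads?
    heads = {"before turn": 0, "after turn": 0, "before or after": 0}
    total = {"before turn": 0, "after turn": 0, "before or after": 0}
    for _ in range(n):
        c1, c2 = random.choice("HT"), random.choice("HT")
        awake_before = (c1, c2) != ("H", "H")    # awake unless coins show HH
        c2_turned = "H" if c2 == "T" else "T"    # the second coin is turned over
        awake_after = (c1, c2_turned) != ("H", "H")
        for key, awake in (("before turn", awake_before),
                           ("after turn", awake_after),
                           ("before or after", awake_before or awake_after)):
            if awake:
                total[key] += 1
                heads[key] += (c1 == "H")
    for key in total:
        print(key, heads[key] / total[key])

simulate()  # before turn ~1/3, after turn ~1/3, before or after ~1/2
```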

Comment by Ape in the coat on Beauty and the Bets · 2024-03-31T08:24:25.295Z · LW · GW

One of my ways of thinking about these sorts of issues is in terms of "fair bets"

Well, as you may see, it's also not helpful. Halfers and thirders disagree on which bets they consider "fair" but still agree on which bets to make, whether they call them fair or not. The extra category of a "fair bet" just adds another semantic disagreement between halfers and thirders. Once we specify whether we are talking about a per-experiment or a per-awakening bet, and at which odds, both theories are supposed to agree.

I don't actually know what the Thirder position is supposed to be from a standpoint from before the experiment, but I see no contradiction in assigning equal utilities per awakening from the before-experiment perspective as well.

Thirders tend to agree with halfers that P(Heads|Sunday) = P(Heads|Wednesday) = 1/2. Likewise, because they make the same bets as the halfers, they have to agree on utilities. So it means that thirders' utilities go back and forth, which is weird and confusing behavior.

A Halfer has to discount their utility based on how many of them there are, a Thirder doesn't. It seems to me, on the contrary to your perspective, that Thirder utility is more stable

You mean how many awakenings? That if there were not two awakenings on Tails but, for instance, ten, halfers would have to think that U(Heads) has to be ten times as much as U(Tails) for a utility-neutral per-awakening bet?

Sure, but it's completely normal behavior. It's fine to have different utility estimates for different problems and different payout schemes - such things always happen. Sleeping Beauty with ten awakenings on Tails is a different problem than Sleeping Beauty with only two, so there is no reason to expect the utilities of the events to be the same. The point is that as long as we have specified the experiment and a betting scheme, the utilities have to be stable.

And thirder utilities are modified during the experiment. They are not just specified by a betting scheme; they go back and forth based on the knowledge state of the participant - behaving the way probabilities are supposed to behave. And that's because they are partially probabilities - a result of an incorrect factorization of E(X).

Speculation; have you actually asked Thirders and Halfers to solve the problem? (while making clear the reward structure?

I'm asking it right in the post, explicitly stating that the bet is per experiment, and recommending thinking about the question more. What did you yourself answer?

My initial claim that the thirder model confuses them about this per-experiment bet is based on the fact that the pro-thirder paper which introduced the Technicolor Sleeping Beauty problem totally fails to understand why the halfer scoring rule updates in it. I may be putting too much weight on the views of Rachael Briggs in particular, but the paper apparently was peer reviewed and so on, so it seems to be decent evidence.

... and I in my hasty reading and response I misread the conditions of the experiment 

Well, I guess that answers my question.

Thirders can adapt to different reward structures but need to actually notice what the reward structure is!

Probably, but I've yet to see one actually derive the correct answer on their own, not post hoc after it was already spoiled or after consulting the correct model. I suppose I should have asked the question beforehand and then published the answer - oh well. Maybe I can still do it and ask nicely not to look.

The criterion I mainly use to evaluate probability/utility splits is typical reward structure

Well, if every other thirder reasons like this, that would indeed explain the issue.

You can't base the definition of probability on your intuitions about fairness. Or rather, you can, but then you risk contradicting the math. Probability is a mathematical concept with very specific properties. In my previous post I talk about this specifically and show that thirder probabilities for Sleeping Beauty are ill-defined.

Comment by Ape in the coat on Beauty and the Bets · 2024-03-30T06:22:56.810Z · LW · GW

This is surprising to me. Are you up for a more detailed discussion? What do you think about the statistical analysis and the debunking of centred possible worlds? I haven't seen these points being raised or addressed before, and they are definitely not about semantics. The fact that sequential events are not mutually exclusive can be formally proven. It's not a matter of perspective at all! We could use the dialogues feature, if you'd like.

Probability is what you get as a result of some natural desiderata related to payoff structures. 

This is a vague gesture at a similarity cluster and not an actual definition. Remove the fancy words and you end up with "Probability has something to do with betting". Yes, it does. In this post I even specify exactly what it does. You don't need to read E. T. Jaynes to discover this revelation. The definition of expected utility is much more helpful.

When anthropics are involved, there are multiple ways to extend the desiderata, that produce different numbers that you should say, depending on what you get paid for/what you care about, and accordingly different math. 

There are always multiple ways to "extend the desiderata". But more importantly, you don't have to give different probability estimates depending on what you get paid for/what you care about. This is the exact kind of nonsense that I'm calling out in this post. Probabilities are about what evidence you have. Utilities are about what you care about. You don't need to use thirder probabilities for per-awakening betting. Do you disagree with me here?

When there’s only a single copy of you, there’s only one kind of function, and everyone agrees on a function and then strictly defines it. When there are multiple copies of you, there are multiple possible ways you can be paid for having a number that represents something about the reality, and different generalisations of probability are possible.

How is it different from talking about the probability of a specific person observing an event versus the probability of any person from a group observing it? The fact that the people in the group are exact copies doesn't suddenly make anthropics a separate magisterium.

Moreover, there are no independent copies in Sleeping Beauty. On Tails, there are two sequential time states. The fact that people are trying to make a sample space out of them directly contradicts its definition.

When we are talking just about betting, one can always come up with one's own functions, one's own way to separate the expected utility of an event into "utility" and "probability". But then these "utilities" will constantly shift due to receiving new evidence, and the "probabilities" will occasionally ignore new evidence and shift for other reasons. And pointing at this kind of weird behavior is a completely reasonable reaction. Can a person still use such definitions consistently? Sure. But this is not a way to carve reality at its joints. And I'm not just talking about betting. I specifically wrote a whole post about the fundamental mathematical reasons before starting to talk about betting.

Comment by Ape in the coat on Beauty and the Bets · 2024-03-30T04:49:51.680Z · LW · GW

So probability theory can't possibly answer whether I should take free money, got it.

No, that's not what I said. You just need to use a different probability space with a different event - "observing Red on any particular day of the experiment".

You can do this because the probability of observing the color is the same for every day. Unlike, say, Tails in the initial coin toss, whose probability is 1/2 on Monday and 1 on Tuesday.

It's indeed a curious thing which I wasn't thinking about, because you can arrive at the correct betting odds on the color of the room for any day using the correct model for Technicolor Sleeping Beauty: as P(Red)=P(Blue) and the rewards are mutually exclusive, U(Red)=U(Blue), and therefore the odds are 1:1. But this was sloppy of me, because to formally update when you observe the color you still need an appropriate separate probability space, even if the update is trivial.

So thank you for bringing this to my attention - I'm going to talk more about it in a future post.

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-03-29T09:48:30.368Z · LW · GW

But the subject has knowledge of only one pass.

This is the crux of our disagreement. 

The Beauty doesn't know only about one pass - she knows about their relation as well. And because of it she can't reason as if they happen at random. You need to address this point before we can move on, because all your further reasoning is based on the incorrect premise that the Beauty knows less than she actually knows.

She has no ability to infer/anticipate what the coins were/will be showing  on another day.

She absolutely has this ability. As long as she knows the procedure - that TT and TH follow in pairs - she can make conditional statements like: "if the coins are currently TT then they either will be TH tomorrow or were TH yesterday". This is very different from not knowing anything whatsoever about the state of the coins on the next day. The fact that you for some reason feel that it should not matter is irrelevant. It's still clearly more than no information whatsoever, and therefore she can't justifiably reason as if she doesn't have any.

On the other hand, if the memory wipe removed this knowledge from her head as well - if the only thing she truly knew was that she is currently awakened in one of three possible states, either TH, HT or TT, and had no idea of the relationship between them - then, and only then, she would be justified to reason the way you claim she should.

What you are doing, is treating HH (or, in Elga's implementation, H&Tuesday) as if it ceases to exist

No, I treat it as an event that the Beauty doesn't expect to observe, and therefore she doesn't update when she indeed doesn't observe it, according to the law of conservation of expected evidence. We are talking about the Beauty's perspective, after all, not some outside view.

Suppose an absolutely trustworthy source tells you that the coin is Heads side up. Then you go and look at the coin and indeed it's Heads side up. What should have been your probability that the coin is Tails side up before you looked at it?

It should be zero. You already knew the state of the coin before you looked at it; you got no new information. Does it mean that the Tails side of the coin doesn't exist? No, of course not! It's just that you didn't expect that the coin could possibly be Tails in this particular case, based on your knowledge state.

Say I roll a six-sided die and tell you that the result is odd. Then I administer the amnesia drug, and tell you that I previously told you whether the result was even or odd. I then ask you for your degree of belief that the result is a six. Should you say 1/6, because as far as you know the sample space is {1,2,3,4,5,6}? Or should you say 0, because "you are [now] observing a state that you've already observed is only {1,3,5}"?

I was going to post a generalized way of reasoning under amnesia in a future post, but here is some of it: getting your memory of some evidence erased just brings you back to the state where you didn't have this particular evidence. And getting an expected memory wipe can only make you less confident in your probability estimate, not more.

In this die-rolling case, initially my P(6) = 1/6; after you tell me that the result is odd, P(6|Odd) = 0; and when my memory is wiped, I'm back to P(6) = 1/6, and the knowledge that you've already told me whether the result was even or odd doesn't help: P(6|Even or Odd) = 1/6.

Likewise, in Sleeping Beauty I initially have P(Heads) = 1/2. Then I awaken exactly as I expected to in the experiment and still have P(Heads|Awake) = 1/2. Now suppose that I'm awakened once more. If there were no memory wipe, I'd learn that I'm awake for the second time, which would bring me to P(Heads|Two Awakenings) = 0. But due to the memory wipe I do not get this evidence. So when I'm awakened the second time, I once again learn only that I'm awake and still have P(Heads|Awake) = 1/2.

What you are implicitly claiming, however, is that getting memory wiped, or even just the possibility of it, makes the Beauty more confident in one outcome over the other! Which is quite bizarre. As if knowing less gives you more knowledge. Moreover, you assume that a person who knows that their memory was or may be erased just has to act as if they do not know it.

Suppose a coin is tossed and you received some circumstantial evidence about its state. As a result you are currently at 2/3 in favor of Heads. And then I tell you: "What odds are you ready to bet at? By the way, I have erased from your memory some crucial evidence in favor of Tails". Do you really think that you are supposed to agree to bet at 1:2 odds, even though you now know that the evidence you currently have may not be trustworthy?

Comment by Ape in the coat on Beauty and the Bets · 2024-03-29T05:26:45.875Z · LW · GW

To be frank, it feels as if you didn't read any of my posts on Sleeping Beauty before writing this comment. That you are simply annoyed when people argue about substanceless semantics - and, believe me, I sympathise enormously! - that you assumed I'm doing the same, based on the shallow pattern matching "talks about Sleeping Beauty -> semantic disagreement", and spilled your annoyance at me without validating whether your assumption is actually correct.

Which is a shame, because I've designed this whole series of posts with people like you in mind - someone who starts from the assumption that there are two valid answers, because it was the assumption I myself used to be quite sympathetic to until I actually went forth and checked.

If that's indeed the case, please start here, and then I'd appreciate it if you actually engaged with the points I made, because that post addresses the kind of criticism you are making here.

If you actually read all my Sleeping Beauty posts, saw me highlight the very specific mathematical disagreements between halfers and thirders and how utterly ungrounded the idea of using probability theory with "centred possible worlds" is, I don't really understand how this kind of appeal to both sides still having a point can be a valid response.

Anyway, I'm going to address your comment step by step.

Sleeping Beauty is an edge case where different reward structures are intuitively possible

Different reward structures are possible in any probability theory problem. "Make a bet on a coin toss, but if the outcome is Tails the bet is repeated three times, and if it's Heads you get punched in the face" - that's a completely possible reward structure for a simple coin toss problem. Is it not very intuitive? Granted, but this is beside the point. Mathematical rules are supposed to always work, even in non-intuitive cases.

Once the payout structure is fixed, the confusion is gone.

People should agree on which bets to make - this is true, and this is exactly what I show in the first part of this post. But the mathematical concept of "probability" is not just about bets - which I talk about in the middle part of this post. A huge part of the confusion is still very much present. Or rather it was, until I actually resolved it in the previous post.

Sleeping beauty is about definitions.

There definitely is a semantic component to the disagreement between halfers and thirders. But it's the least interesting one, and that's why I'm postponing the talk about it until the next post.

The thing you seem to be missing is that there is also a real, objective disagreement which is obfuscated by the semantic one. People noticed that halfers and thirders use different definitions, came to the conclusion that semantics is all there is, and decided not to look further. But they totally should have.

My last two posts are talking about these objective disagreements. Is there an update on awakening or is there not? There is a disagreement about it even between thirders, who apparently agree on the definition of "probability". Are the ways halfers and thirders define probability formally correct? It's a strictly defined mathematical concept, mind you, not some similarity-cluster category border like "sound". Are Tails&Monday and Tails&Tuesday mutually exclusive events? You can't just define mutual exclusivity however you like.

Probability is something defined in math by necessity.

Probability is a measure function over an event space. And if for some mathematical reason you can't construct an event space, your "probability" is ill-defined.

You all should just call these two probabilities two different words instead of arguing which one is the correct definition for "probability".

I'm doing both. I've shown that only one thing is formally a probability, and in the next post I'm going to define the other thing and explore its properties.

Comment by Ape in the coat on Beauty and the Bets · 2024-03-28T11:18:29.388Z · LW · GW

Yes, if the bet is about whether the room takes the color Red in this experiment. Which is what the event "Red" means in Technicolor Sleeping Beauty according to the correct model. The fact that you do not observe the event Red in this awakening doesn't mean that you don't observe it in the experiment as a whole.

The situation somewhat resembles learning that today is Monday and still being ready to bet at 1:1 that a Tuesday awakening will happen in this experiment. Though with colors there is actually an update: from 3/4 to 1/2.

What you probably meant to ask is whether you should agree to bet at 1:1 odds that the room is Red in this particular awakening, after you woke up and saw that the room is Blue. And the answer is no, you shouldn't. But the probability space for Technicolor Sleeping Beauty does not talk about probabilities of events happening in this awakening, because most of them are ill-defined, for reasons explained in the previous post.
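For concreteness, here is the arithmetic behind that update, assuming the setup where one of the two days is Red and the other Blue:

P(Red in experiment) = P(Tails)·1 + P(Heads)·1/2 = 3/4

and after awakening and observing Blue, the only way for Red to still occur in the experiment is the other Tails awakening:

P(Red in experiment|Blue) = P(Tails|Blue) = 1/2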

Comment by Ape in the coat on Beauty and the Bets · 2024-03-28T09:48:18.727Z · LW · GW

Yes! There is a 50% chance that the coin is Tails, and so the room is to be Red in this experiment.

Comment by Ape in the coat on Beauty and the Bets · 2024-03-28T08:28:50.946Z · LW · GW

*ethically

No, I'm not making any claims about ethics here, just math.

Works against Thirdism in the Fissure experiment too.

Yep, because it's wrong in Fissure as well. But I'll be talking about it later.

I mean, if you are going to precommit to the right strategy anyway, why do you even need probability theory? 

To understand whether you should precommit to any strategy and, if you should, then to which one. The fact that

P(Heads|Blue) = P(Heads|Red) = 1/3

but

P(Heads|Blue or Red) = 1/2

means that you may precommit to either Blue or Red and it doesn't matter which, but if you don't precommit, you won't be able to guess Tails better than chance per experiment.
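Here is a minimal simulation sketch of this precommitment effect (Python; the naming is mine, and it assumes the Technicolor setup where one of the two days is randomly painted Red and the other Blue):

```python
import random

def tails_rate(precommit, n=100_000):
    # Bet Tails at most once per experiment: only if an awakening with the
    # precommitted color occurs (or unconditionally, if precommit is None).
    bets = wins = 0
    for _ in range(n):
        tails = random.random() < 0.5
        day1 = random.choice(["Red", "Blue"])
        day2 = "Blue" if day1 == "Red" else "Red"
        seen = [day1, day2] if tails else [day1]  # Tails: two awakenings
        if precommit is None or precommit in seen:
            bets += 1
            wins += tails
    return wins / bets

print(tails_rate("Red"))  # ~2/3: the precommitted bet wins at up to 1:2 odds
print(tails_rate(None))   # ~1/2: without precommitment, no better than chance
```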

The whole question is how do you decide to ignore that P(Head|Blue) = 1/3, when you chose Red and see Blue. And how is it not "a probabilistic model produces incorrect betting odds", when you need to precommit to ignore it?

You do not ignore it. When you choose Red and see that the walls are blue, you do not observe the event "Blue". You observe the outcome "Blue", which corresponds to the event "Blue or Red". Because the sigma-algebra of your probability space is affected by your precommitment.

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-03-28T08:12:24.270Z · LW · GW

The Two Coin version is about what happens on one day.

Let it be not two different days but two different half-hour intervals. Or even two milliseconds - this doesn't change the core of the issue, which is that sequential events are not mutually exclusive.

observation of a state, when that observation bears no connection to any other, as independent of any other.

It very much bears a connection. If you are observing state TH, it necessarily means that you either have already observed or will observe state TT.

What law was broken?

The definition of a sample space - it's supposed to be constructed from mutually exclusive elementary outcomes. 

Do you disagree that, on the morning of the observation, there were four equally likely states? Do you think the subject has some information about how the state was observed on another day?

I disagree on both accounts. You can't treat HH, HT, TT and TH as individual outcomes, and the term "morning of observation" is underspecified. The subject knows that some of them happen sequentially.

what I am trying to do is eliminate any basis for doing that

I noticed, and I applaud your attempts. But you can't do that, because you still have sequential events anyway; the fact that you call them differently doesn't change much.

Yes, each outcome on the first day can be paired with exactly one on the second. 

Exactly. And the Beauty knows it. Case closed.

But without any information passing to the subject between these two days, she cannot do anything with such pairings. To her, each day is its own, completely independent probability experiment.

She knows that they do not happen at random. This is enough to be sure that each day is not a completely independent probability experiment. See the Effects of Amnesia section.

No, it treats the current state of the coins as four mutually exclusive states.

Call them "states" if you want. It doesn't change anything.

How so? If your write down the state on the first day that the researchers look at the coins, you will find that {HH, TH, HT, TT} all occur with frequency 1/4. Same on the second day.

I've specifically explained how. We write down outcomes when the researcher sees the Beauty awake - when they have updated on the fact of the Beauty's awakening. The frequency for the three outcomes is 1/3; moreover, they actually go in random order, because the observer witnesses only one random awakening per experiment.

If you write down the frequencies when the subject is awake, you find that {TH, HT, TT} all have frequency 1/3.

Yep, no one is arguing with that. The problem is that the order isn't random, contrary to what your model predicts - TH and TT always go in pairs.

Here is what you are arguing: Say you repeat this many times and make two lists, one for each day.

No, I'm not complicating this with two lists for each day. There is only one list, which documents all the awakenings of the subject while she is going through the series of experiments. The theory that predicts that two awakenings are "completely independent probability experiments" expects the order of the awakenings to be random, and it's proven wrong because there is an order between awakenings. Easy as that.

That is what the amnesia drug accomplishes.

You are mistaken about what the amnesia accomplishes. Once again I send you to reread the Effects of Amnesia section. It's equally applicable to the Two-Coin version of the problem as to the regular one.

And your arguments that this is wrong require associating the attempts, essentially removing the effect of amnesia.

According to the Beauty's knowledge, the attempts are already connected. Only if the amnesia also removed the setting of the experiment from her mind - if she forgot that TT and TH go in pairs - only then should she reason the way you want her to.

On the other hand, if we truly removed the effect of amnesia altogether, then the Beauty would be 100% confident in Tails when awakened for the second time in the same experiment.

So no, I'm talking about the exact knowledge state of the Beauty with the exact level of amnesia that she gets, while you are talking about a more significant alteration of her mind.

Comment by Ape in the coat on Beauty and the Bets · 2024-03-28T06:56:51.593Z · LW · GW

Throughout your comment you've been using the phrase "thirder odds", apparently meaning odds 1:2, without specifying whether they are per awakening or per experiment. This is an underspecified and confusing category which we should taboo.

As I show in the first part of the post, thirder odds are the exact same thing as halfer odds: 1:2 per awakening and 1:1 per experiment.

However, your claim that Thirder Sleeping Beauty would bet differently before and after the coin toss is not correct.

I do not claim that. I say that in order to justify not betting differently, thirders have to retroactively change the utility of a bet already made:

Mathematically, abolishing such a bet is isomorphic to making an opposite bet at the same odds. And as we already established, making one per-experiment bet at 1:1 odds is utility neutral, so a minor fee will be a deal breaker. The thirder's justification for it is that the utility of such a bet is halved on Tails, because only one of the Tails outcomes is rewarded.

But it means that a thirder Beauty should think as if the fact of her awakening in the experiment retroactively changes the utility of a bet that she has already made! Instead of changing neither probabilities nor utilities, thirdism modifies both in a compensatory way.

I critique thirdism not for making different bets - as the first part of the post explains, the bets are the same - but for its utilities not actually behaving like utilities: constantly shifting back and forth during the experiment, including shifts backwards in time, in order to compensate for the fact that its probabilities are not behaving like probabilities - because they are not sound probabilities, as explained in the previous post.

Thirder Sleeping Beauty will bet Thirder odds even before the experiment starts, if the coin toss being bet on is particularly the one in this experiment and the reward structure is such that she will be rewarded equally (as assessed by her utility function) for correctness in each awakening.

Now, maybe you find this dependence on what the coin will be used for counterintuitive, but that depends on your own particular taste.

Wait, are you claiming that a thirder Sleeping Beauty is supposed to always decline the initial per-experiment bet at 1:1 odds, made before the coin is tossed? This is wrong - both halfers and thirders are neutral towards such bets, though they appeal to different reasoning as to why.

Then, the "technicolor sleeping beauty" part seems to make assumptions where the reward structure is such that it only matters whether you bet or not in a particular universe and not how many times you bet. This is a very "Halfer" assumption on reward structure, even though you are accepting Thirder odds in this case! Also, Thirders can adapt to such a reward structure as well, and follow the same strategy.  

Some reward structures feel more natural for halfers and some for thirders - this is true. But a good model for a problem is supposed to deal with any possible betting scheme without significant difficulties. Thirders can probably arrive at the correct answer post hoc, if explicitly primed by the question: "at what odds are you supposed to bet if you bet only when the room is red?". But what I'm pointing at is that thirdism naturally fails to develop an optimal strategy for the per-experiment bet in the Technicolor problem, falsely assuming that it's isomorphic to regular Sleeping Beauty. Nothing about the thirder probabilistic model hints that betting only when the room is red is the correct move. Their probability estimate is the same despite new evidence about the state of the coin toss, and so they are oblivious to the fact that there is a better strategy than always refusing the bet.

Technicolor and Rare Event problems highlight the issue that I explain in Utility Instability under Thirdism: in order to make optimal bets, thirders need to constantly keep track not only of probability changes but also of utility changes, because their model keeps shifting both of them back and forth, and this can be very confusing. Halfers, on the other hand, just need to keep track of probability changes, because their utilities are stable. Basically, thirdism is strictly more complicated without any benefits, and we can discard it on the grounds of Occam's razor - if we haven't already discarded it because of its theoretical unsoundness, explained in the previous post.

Finally, on Rare Event Sleeping beauty, it seems to me that you are biting the bullet here to some extent to argue that this is not a reason to favour thirderism.

I'm confused. What bullet am I biting? How can the fact that the thirder probabilistic model misses the situations where the per-experiment betting odds are actually 1:2 be an argument in favor of thirdism?

The Rare Event problem is such that the answer is about 1/3 only in some small number of cases. The halfer model correctly highlights the rule for determining which cases these are and how to develop the correct betting strategy. The thirder model just keeps answering 1/3 like a broken clock.

uh....no.

What do you feel is still unresolved?

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-03-27T05:36:30.571Z · LW · GW

As I've told you multiple times, your Two-Coin Sleeping Beauty is fully isomorphic to the regular Sleeping Beauty problem, and so the thirder model of it has all the same issues. It treats sequential events as mutually exclusive and therefore unlawfully constructs a sample space, contradicting the fundamentals of probability theory. Your elimination argument has all the same flaws as the elimination argument from the updating model, which I explored in the previous post.

But sure enough, let's look specifically at the two-coin version of the problem and see how your updating model fails. Let's start with the statistical test.

Your model treats HH, HT, TH and TT as four individual mutually exclusive outcomes that define a sample space, where each outcome has probability 1/4, and conditional on awakening we have three mutually exclusive outcomes HT, TH and TT, each with probability 1/3. So according to it, running the two-coin experiment multiple times and writing down the states of the coins on every awakening of the Beauty should produce a list of outcomes HT, TH and TT in random order, where all of them have frequency 1/3.

However, when you actually do it, you get a different list. The frequency is 1/3, but the order is not random: TH and TT always go in pairs, and you can use this knowledge to predict the next token in the list better than chance - see the simulation sketch below.

Therefore, your model can't possibly be describing the Two-Coin Sleeping Beauty problem. By analogy with the regular updating model, it actually describes the Observer Two-Coin Sleeping Beauty problem:

You were hired to work as an observer for one random day in a laboratory which is conducting Two-Coin Sleeping Beauty experiment. You can see whether the Beauty is awake or not.

An observer who arrives on a random day may very well catch the Beauty asleep, so when you see her awake you actually receive new evidence about the state of the first coin and lawfully update. For an observer, HH, HT, TH and TT are indeed mutually exclusive outcomes that do not have any order. If we repeat the observer two-coin experiment multiple times, documenting the state of the coins every time the Beauty is awake, we indeed get a list where HT, TH and TT go in random order and have frequency 1/3 each.
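Here is a minimal sketch of that statistical test (Python; the naming is mine), building the Beauty's list and the observer's list side by side:

```python
import random

def flip(c): return "H" if c == "T" else "T"

beauty, observer = [], []
for _ in range(100_000):
    c1, c2 = random.choice("HT"), random.choice("HT")
    s1, s2 = c1 + c2, c1 + flip(c2)  # coin states before and after the turn
    for s in (s1, s2):               # the Beauty records every awakening in order
        if s != "HH":
            beauty.append(s)
    s = random.choice((s1, s2))      # the observer comes on one random day
    if s != "HH":
        observer.append(s)

def tt_after_th(lst):
    # how often the token immediately after a TH is a TT
    nxt = [lst[i + 1] for i in range(len(lst) - 1) if lst[i] == "TH"]
    return nxt.count("TT") / len(nxt)

# Both lists contain HT, TH and TT with frequency ~1/3 each, but:
print(tt_after_th(beauty))    # ~0.62 - TH and TT go in pairs, the order is predictable
print(tt_after_th(observer))  # ~0.33 - the order is genuinely random
```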

Likewise, when exploring the reason why your model fails in Two-Coin Sleeping Beauty, we notice that it is guilty of treating the sequential events TH and TT as mutually exclusive. All the critique of "centred possible worlds" equally applies to your model as well.

And we can just as well construct the correct model. The correct sample space looks like this: {HH_HT, HT_HH, TH_TT, TT_TH}, with each outcome having probability 1/4.

Which, for the sake of simplicity, we can reduce to {HH, HT, TH, TT}, labeling every outcome by the state of the coins before the second one is turned.

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-03-22T10:42:47.170Z · LW · GW

They are approximately disconnected according to our current best theory. Like your clones in different rooms are approximately disconnected, but still gravitationally influence each other.

I think this level of accuracy is good enough for now.

Still don't get how it's consistent with your argument about statistical test. It's not about multiple experiments starting from each copy, right?

It very much is. Every copy is its own person who can then participate in whatever experiments they choose, independently from the other copy.

You still would object to simulating multiple Beauties started from each awakening as random?

I don't see how it is possible in principle. If the Beauty is in the middle of the experiment, how can she start participating in another experiment without breaking the setting of the current one? And in what sense is she the same person anyway, if you treat every waking moment as a different person?

Like, you based your modelling of Monday and Tuesday as both happening on how we usually treat events when we use probability theory. But the same justification is even more obvious, when both the awakening in Room 1 and the awakening in Room 2 happen simultaneously. 

No, they are not. Events that happen to the Beauty on Monday and Tuesday are not mutually exclusive, because they are sequential. On Tails, if an awakening happened to her on Monday, it necessarily means that an awakening will happen to her on Tuesday in the same experiment.

But the same argument isn't applicable to Fissure, where awakenings in different rooms are not sequential and truly are mutually exclusive. If you are awakened in Room 1, you definitely are not awakened in Room 2 in this experiment, and vice versa.

Or you say that the Beauty knows that she will be awake both times so she can't ignore this information. But both copies also know that they both will be awake, so why they can ignore it?

Well, if there were some probability-theoretic reason why copies could not reason independently, then that would be the case. This is indeed an interesting situation, and I'll dedicate a separate post, or even multiple posts, to a comprehensive analysis of it.

Is this what it is all about? It depends on definition of "you". Under some definitions the Beauty also doesn't experience both days. 

Of course it depends on definitions. Everything does. But not all definitions are made equal. Some carve reality at its joints and some do not. Some allow constructing theories that add up to normality, and some lead to bizarre conclusions.

Are you just saying that distinction is that no sane human would treat different moments as distinct identities?

Well, it's a bit too late for that, because there definitely are otherwise sane people who are eager to bite this bullet, no matter how ridiculous.

What I'm saying is that to carve reality at its joints we need to base our definitions on causal graphs. And as an extra bonus, this indeed seems to fit the naive intuition of personal identity and adds up to normality.

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-03-21T07:00:15.368Z · LW · GW

Don't know specifics, as usual, but as far as I know, amplitudes of the branch would be slightly different from what you get by evolving this branch in isolation, because other branch would also spread everywhere.

I'm afraid I won't be able to address your concerns without the specifics. Currently I'm not even sure that they are true. According to Wei Dai in one of the previous comments, our current best theory claims that Everett branches are causally disconnected, and I'm more than happy to stick to that until our theories change.

wouldn't Lewis’ model fail statistical test, because it doesn't generate both rooms on Tails?

If you participate in a Fissure experiment, you do not experience being in two rooms on Tails. You are in only one of the rooms in any case, and another version of you is in the other room when it's Tails. You can participate in a thousand Fissure experiments in a row and accumulate a list of rooms and coin outcomes corresponding to your experience, and I expect them to fit Lewis' model: 75% of the time you find yourself in Room 1, 50% of the time the coin is Heads.
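A minimal sketch of that accumulation (Python; the naming is mine, and it assumes that on Heads the single participant is placed in Room 1, while on Tails each copy ends up in one of the two rooms):

```python
import random

rooms, heads = [], 0
for _ in range(100_000):
    if random.random() < 0.5:   # Heads: no fissure, you wake up in Room 1
        heads += 1
        rooms.append(1)
    else:                       # Tails: fissure; "you" are one of the two copies,
        rooms.append(random.choice((1, 2)))  # the other copy gets the other room
print(heads / len(rooms))           # ~0.5
print(rooms.count(1) / len(rooms))  # ~0.75
```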

I don't get why modeling coexistence in one timeline is necessary, but coexistence in space is not.

Because coexistence in space happens separately to different people who are not causally connected, while coexistence in one timeline happens to the same person, whose past and future are causally connected. I really don't understand why everyone seems to have so much trouble with such an obvious point.

Suppose that in Sleeping Beauty it's Tails and the participant eats a big meal on Monday. On Tuesday they will likely need to visit the toilet as a result. But in Fissure on Tails, if the person in one room eats a big meal, it doesn't affect the person in the other room in any way.

What do you mean by "can be correctly approximated as random sampling"?

Probability is in the map. And this map may or may not correspond to the territory. When someone tosses a coin, it can usually be treated as a random sample from two outcomes. But this is not some inherent law of the universe about coin tossing. It's possible to make a robot arm that throws coins in such a way as to always produce Tails.

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-03-19T17:47:32.401Z · LW · GW

I mean that different branches are causally connected - there is some level of interference between them.

Can't we model interference as separate branches? My QM is a bit rusty; what kind of causal behaviour is implied? It's not that we can actually jump from one branch to the other.

You said in another comment, that copying changes things, but I assume (from the OP) that you still would say that Elga's model is not allowed, because both rooms exist simultaneously? Well, branches also exist simultaneously.

Simultaneity of existence has nothing to do with it. Elga's model is wrong here because, unlike in Sleeping Beauty, learning that you are in Room 1 is evidence for Heads, since you could not be sure to find yourself in Room 1 no matter what. Here Lewis' model seems a better fit.

..or do you accept Elga's model for copies and it is really all about awakenings being sequential?

I think some cloning arrangements can work according to Elga's model; it fully depends on the specifics of the cloning procedure - whether the process that led to your existence can be correctly approximated as random sampling or not. Though I need to think more about it, as these cases still feel a bit confusing to me. There definitely are settings where SIA-like reasoning is valid, like when there is a limited set of souls that are randomly picked to be instantiated in bodies; it just doesn't really seem to be the way our universe works.

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-03-19T17:24:10.834Z · LW · GW

So, why is it ok for a simulation of an outcome with 1/2 probability to have 1/3 frequency?

There are only two outcomes, and both of them have 1/2 probability and 1/2 frequency. The code saves awakenings in the list, not outcomes.

People mistakenly assume that three awakenings mean three elementary outcomes. But as the simulation shows, there is an order between awakenings, so they can't be treated as individual outcomes. The Tails&Monday and Tails&Tuesday awakenings are parts of the same outcome.

If this still doesn't feel obvious, consider this. You have a list of Heads and Tails, and you need to distinguish between two hypotheses: either the coin is unfair and P(Tails)=2/3, or the coin is fair but whenever it came up Tails the outcome was written into the list twice, while for Heads - only once. You check whether the outcomes are spread randomly or pairs of Tails follow together. In the second case, even though the frequency of Tails in the list is twice as high as that of Heads, P(Tails)=P(Heads)=1/2.
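These two hypotheses are easy to tell apart statistically. A minimal sketch (Python; the naming is mine) - the give-away is that in the fair-but-doubled list every run of consecutive Tails has even length:

```python
import itertools
import random

def unfair(n):        # i.i.d. tosses of a biased coin with P(Tails) = 2/3
    return random.choices("HT", weights=[1, 2], k=n)

def doubled_fair(n):  # fair coin, but every Tails is written into the list twice
    out = []
    for _ in range(n):
        side = random.choice("HT")
        out.extend(side * (2 if side == "T" else 1))
    return out

def odd_tails_runs(lst):
    runs = [len(list(g)) for side, g in itertools.groupby(lst) if side == "T"]
    return sum(length % 2 for length in runs) / len(runs)

print(odd_tails_runs(unfair(100_000)))        # ~0.6: odd-length runs are common
print(odd_tails_runs(doubled_fair(100_000)))  # 0.0: Tails always come in pairs
```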

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-03-14T15:26:32.437Z · LW · GW

There is no such thing as a direct translation of a problem into a betting strategy question. A model for a problem should be able to deal with any betting scheme, no matter how extravagant.

And the scheme where the Beauty can bet on every awakening is quite extravagant. It's an asymmetric bet on a coin toss, where the Tails outcome is rewarded twice as much as the Heads outcome.

So surely the probability that the coin had landed tails prior to these events is 2/3? Not because it's an unfair coin or there was an information update (neither is true)

If there is no information update, then the probability of the coin being Tails can't change from 1/2 to 2/3. That would contradict the law of conservation of expected evidence.

but because the SB problem asks the probability from the perspective of someone being awakened, and 2/3 of these experiences happen after flipping tails.

As I've written in the Effects of Amnesia section, from the Beauty's perspective the Tails&Monday and Tails&Tuesday awakenings are still part of the same elementary outcome, because she remembers the setting of the experiment. If she didn't know that Tails&Monday and Tails&Tuesday necessarily follow each other - if all she knew was that there are three states in which she can awaken - then yes, she should have reasoned that P(Tails)=2/3.

Alternatively, if the question were about a random awakening of the Beauty among multiple possible experiments, then, once again, P(Heads) would be 1/3. But in the experiment as stated, the Beauty isn't experiencing a random awakening; she is experiencing an ordered awakening, determined by a coin toss.

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-03-14T15:08:40.266Z · LW · GW

It means "today is Monday".

And the Beauty is awakened, because all the outcomes represent the Beauty's awakened states. Which gives "the Beauty is awakened today, which is Monday", or simply "the Beauty is awakened on Monday", just as I was saying.

I mean what will happen, if Beauty runs the same code? Like you said, "any person" - what if this person is Beauty during the experiment? If we then compare combined statistics, which model will be closer to reality?

Nothing out of the ordinary. The Beauty will generate a list with the same statistical properties - two lists, if the coin is Tails.

My thinking is because then Beauty would experience more tails and simulation would have to reproduce that.

The simulation already reproduces that. Only 1/3 of the elements of the list are Heads&Monday. You should probably try running the code yourself to see how it works, because I have a feeling that you are missing something.

Comment by Ape in the coat on Advice Needed: Does Using a LLM Compomise My Personal Epistemic Security? · 2024-03-11T16:39:03.982Z · LW · GW

I think in a scenario where LLMs are already superhuman manipulators, your personal decision not to interact with them doesn't matter at all. My personal coping mechanism for such timelines is the thought that dying from a rogue AI is so much cooler than from something as trivial as old age, infectious disease or war with other primates.

But such scenarios do not seem likely. Eliezer didn't see LLMs coming, and for this reason his warnings are not exactly on point. LLMs are superhuman in predicting the next token but not in manipulation. They are not powerseeking themselves, though they can probably be a part of a powerseeking entity if arranged in the right pattern. But then we would be able to easily read the mind of such an entity. P(Doom) is in the tens of percent, but probably less than 50%.

Comment by Ape in the coat on Evolution did a surprising good job at aligning humans...to social status · 2024-03-11T08:18:37.676Z · LW · GW

It seems that a huge part of "human behaviour is explained by status seeking" is just post hoc proclaiming that whatever humans do is status seeking.

Suppose you want to predict whether a given man will go hang out with friends or work more on a project. How does the idea of status seeking help? When we already know that he chose friends, we say: yes, of course, he gets more status in his friend group by spending more time with them, improving their bonds, and having good friends is a marker of status in its own right. Likewise, when we know that the man chose work, we can say that this is behaviour that leads towards promotion and more money and influence inside the company, which is a marker of high status. But when we want to predict beforehand... I don't think it really helps.

Comment by Ape in the coat on 0th Person and 1st Person Logic · 2024-03-11T07:11:42.609Z · LW · GW

Is the puzzle supposed to be agnostic to the specifics of copying? 

It seems to me that if by copying we mean fissure, where a person is split into two, we have 1/2 for OO, 1/4 for CO and 1/4 for CC, while if by copying we mean "a clone of you is created", then the probability to observe OO at time 0 is 1, because there is no causal mechanism due to which you would swap bodies with a clone.

Comment by Ape in the coat on In defense of anthropically updating EDT · 2024-03-11T07:04:19.799Z · LW · GW

By what standard do you judge some betting odds as "correct" here?

The same as always: correct betting odds systematically lead to winning.

I don't see the motivation for that (as discussed in the post)

The motivation is that you don't need to invent extraordinary ways to wiggle out of being Dutch-booked, of course.

Do you systematically use this kind of reasoning in regard to betting odds? If so, what are your reasons to endorse EDT in the first place?

as I note in "Aside: Non-anthropically updating EDT sometimes 'fails' these cases."

This subsection is another example of "two wrongs make a right" reasoning. You point out some problems of EDT not related to anthropic updating, and then conclude that the fact that EDT with anthropic updating has similar problems is okay. This doesn't make sense. If a theory has a flaw, we need to fix the flaw, not treat it as a license to add more flaws to the theory.

I gave independent epistemic arguments for anthropic updating at the end of the post, which you haven't addressed

I'm sorry, but I don't see any substance in your argument to address. This step renders the whole chain of reasoning meaningless:

What is , i.e., assuming I exist in the given world, how likely am I to be in a given index? Min-RC-SSA would say, “‘I’ am just guaranteed to be in whichever index corresponds to the person ‘I’ am.” This view has some merit (see, e.g., here and Builes (2020)). But it’s not obvious we should endorse it — I think a plausible alternative is that “I” am defined by some first-person perspective.[19] And this perspective, absent any other information, is just as likely to be each of the indices of observers in the world. On this alternative view,.

You are saying that view 1 has some merits, but it's not obvious that it is true, so... you just assume view 2 instead. Why? Why would you do that? What's the argument that you should assume it? You don't give any. You just make an ungrounded assumption and go with your reasoning further.

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-03-10T10:29:23.614Z · LW · GW

It can’t be usefully defined if we assume that Elga’s model is true. I agree that it is not a point in favor. Doesn't mean we can't use it instead of assuming it is true.

No disagreement here, then. Indeed, we can use wrong models as some form of approximation; we just have to be aware of the fact that they are wrong and not insist on their results when they contradict the results of correct models.

What do you mean by "rigorously"?

As in, what do you mean by "today" in logical terms? I gave you a very good example of how it's done with the No-Coin-Toss and Single-Awakening problems.

Yes, it's all unreasonable pedantry, but you are just all like "Math! Math!".

It's not unreasonable pedantry. It's an isolated demand for rigor on your part.

I do not demand from Elga's model anything my model doesn't do. I'm not using vaguer language while describing my model than the language I used while describing Elga's.

You, on the other hand, in an attempt to defend it, suddenly pretend that you don't know what "event happens" means and demand a formal proof that events that happen have probability more than zero. We can theoretically go this route. Wikipedia's article on probability space covers the basics. But do you really want to lose more time on obvious things that we do not actually disagree about?

On wiki it's "When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?" - notice the "ought"^^. And the point is mostly that humans have selfish preferences.

First awakened? Then even Elga's model agrees that P(Heads|Monday)=1/2

No, the question is about how she is supposed to reason any time she is awakened, not just the first time.

Nah, I was just wrong. But... Ugh, I'm not sure about this part.

Thank you for noticing it. I'd recommend taking some time to reflect on the new evidence that you didn't expect.

 First of all, Elga’s model doesn't have "Beauty awakened on Monday" or whatever you simulate 

What else does the event "Monday" that has 2/3 probability mean, then? According to Elga's model there are three mutually exclusive outcomes - Heads&Monday, Tails&Monday, Tails&Tuesday - corresponding to the three possible awakening states of the Beauty. What do you disagree with here?

And what would happen, if Beauty performed simulation instead of you? I think then Elga's model would be statistically closest, right? 

I do not understand what you mean here. The Beauty is part of the simulation. Nothing prevents any person from running the same code and getting the same results.

Also what if we tell Beauty what day it is after she tells her credence - would you then change your simulation to have 1/3 Heads?

Why would it? The simulation shows which awakening the Beauty is going through in a repetition of the experiment as described, so that we can investigate the statistical properties of these awakenings.

No, that's the point - it means they are using different definitions of knowledge.

How is the definition of knowledge relevant to probability theory? I suppose if someone redefines "knowledge" as "being wrong", then yes, by such a definition the Beauty should not accept the correct model - but why would we do that?

"Default" doesn't mean "better" - if extra assumptions give you what you want, then it's better to make more assumptions.

It means it doesn't require any further justification. You are free to make any other assumptions if you manage to justify them - the burden of proof is on you. As I point out in the post, no one has managed to justify all this "centered worlds" kind of reasoning, thus we ought to discard it until it is formally proven to be applicable to probability theory.

Comment by Ape in the coat on 0th Person and 1st Person Logic · 2024-03-10T09:44:09.590Z · LW · GW

I'll say though that I don't think the usefulness or validity of the 0P/1P idea hinges on whether it helps with anthropics or Sleeping Beauty (note that I marked the Sleeping Beauty idea as speculation).

I agree. Or I'd even say that the usefulness and validity of the 0P/1P idea is inversely correlated with its applications to "anthropic reasoning".

This is frustrating because I'm trying hard here to specify exactly what I mean by the stuff I call "1st Person"

Yes, I see that and I'm sorry. This kind of warning isn't aimed at you in particular; it's a result of my personal pain over how people in general tend to misuse such ideas.

What makes the interpretations different practically comes from wiring them up differently in the robot - is it reasoning about its world model or about its sensor values? It sounds like you think the 1P interpretation is superfluous, is that right?

I'm not sure. It seems that one of them has to be reducible to the other, though probably in the opposite direction. Isn't having a world model also a type of experience?

Like, consider two events: "one particular robot observes red" and "any robot observes red". It seems that the first one is the 1st-person perspective, while the second is the 0th-person perspective in your terms. When a robot observes red with its own sensor, it concludes that it in particular has observed red and deduces that this means that some robot has observed red. Observation leads to an update of the world model. But what if all robots had synchronized sensors that triggered for every one of them when any of them observed red? Is that a 1st-person perspective now?

Probability theory describes the subjective credence of a person who observed a specific outcome from a set of possible outcomes. It's about 1P in the sense that different people may have different possible outcomes and thus different credence after an observation. But it's also about 0P, because any person who observed the same outcome from the same set of possible outcomes should have the same credence.

I guess I feel that the 0P/1P distinction doesn't really carve math at its joints. But I'll have to think more about it.

Comment by Ape in the coat on In defense of anthropically updating EDT · 2024-03-10T07:45:24.487Z · LW · GW

The point of the analogy is that just as there are different ways to account for the fact that red paint is required in the mix - either by adding it first or second - there are different ways to account for the fact that, say, Sleeping Beauty awakens twice on Tails and only once on Heads.

One way is to modify the probabilities, saying that the probability of awakening on Tails is twice that of awakening on Heads - that's what SIA does. The other is to modify the utilities, saying that the reward for correctly guessing Tails is twice as large as for Heads under a per-awakening betting rule - that's what EDT does, if I understand correctly. Both ways produce the same product P(Tails)U(Tails), which defines the betting odds. But if you modify both utilities and probabilities, you obviously get the wrong result.
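To see the compensation numerically (toy numbers of mine), take a per-awakening bet on this coin with the reward for correctly guessing Heads normalized to 1. The betting odds are given by the ratio P(Tails)·U(Tails) : P(Heads)·U(Heads):

SIA: (2/3 · 1) : (1/3 · 1) = 2:1

EDT: (1/2 · 2) : (1/2 · 1) = 2:1

Modifying both at once gives (2/3 · 2) : (1/3 · 1) = 4:1 - the equivalent of adding the red paint twice.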

Now, you are free to choose to bite the bullet that it has never been about getting the correct betting odds in the first place. For some reason, people bite all kind of ridiculous bullets specifically in anthropic reasoning, and so I hoped that re-framing the issue as a recipe for purple paint may snap you out of it, which, apparently, failed to be the case.

But usually when people find themselves in a situation where only one theory out of two can be true, despite there being compelling reasons to believe in both of them, they treat it as a reason to re-examine these reasons, because at least one of these theories is clearly wrong.

And yeah, SIA is wrong. Clearly wrong. It's so obviously wrong that even according to Carlsmith, who defends it in a series of posts, it implies telekinesis, and the main appeal that at least it's not as bad as SSA. As I've previously commented on this topic:

A common way people tend to justify SIA and all it ridiculousness is by pointing at SSA ridiculousness and claiming that it's even more ridiculous. Frankly, I'm quite tired of this kind of anthropical whataboutism. It seems to be some kind of weird selective blindness. In no other sphere of knowledge people would accept this as a valid reasoning. But in anthropics, somehow, it works?

The fact that SSA is occasionally stupid doesn't justify SIA's occasional stupidity. Both are obviously wrong in general, even though both may sometimes produce the correct result.

Comment by Ape in the coat on 0th Person and 1st Person Logic · 2024-03-10T07:03:15.313Z · LW · GW

Frankly, I'm not sure whether the distinction between "worlds" and "experiences" is more useful or more harmful. There is definitely something that rings true about your post, but people have been misinterpreting all this in very silly ways for decades, and it seems that you are ready to go in the same direction, considering your mention of anthropics.

Mathematically, there are mutually exclusive outcomes which can be combined into events. It doesn't matter whether these outcomes represent worlds or possible experiences in one world or whatever else - as long as they are truly mutually exclusive we can lawfully use probability theory. If they are not, then saying the phrase "1st person perspective" doesn't suddenly allow us to use it.

How do we give the intuitive meaning of "my sensor sees red"?

We don't, unless we can formally specify what "my" means. Until then we are talking about the truth of statements "Red light was observed" and "Red light was not observed". And if our mathematical model doesn't track any other information, then for the sake of this mathematical model all the robots that observe red are the same entity. The whole point of math is that it's true not just for one specific person but for everyone satisfying the conditions. That's what makes it useful.

Suppose I'm observing a dice roll and wonder what the probability is that the result will be "4". The mathematical model that tells me that it's 1/6 also tells the same to you, or to any other person. It tells the same fact about any other roll of any other die with similar relevant properties.
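
For concreteness, a trivial simulation of that model (the seed and the number of rolls are arbitrary):

```python
import random

# The model P(roll = 4) = 1/6 applies to any fair six-sided die,
# whichever die it is and whoever happens to be watching.
rng = random.Random(42)
rolls = [rng.randint(1, 6) for _ in range(600_000)]
print(rolls.count(4) / len(rolls))  # ~0.1667
```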

From this, we get a nice potential explanation to the Sleeping Beauty paradox: 1/2 is the 0P-probability, and 1/3 is the 1P-probability. This could also explain why both intuitions are so strong.

I was worried that you would go there. There is only one lawful way to define probability in the Sleeping Beauty problem. The crux of the disagreement between thirders and halfers is whether this awakening should be modeled as a random awakening among three equiprobable mutually exclusive outcomes: Heads&Monday, Tails&Monday and Tails&Tuesday. And there is one correct answer to it - no, it should not. We can formally prove that if the Tails&Monday awakening is always followed by the Tails&Tuesday awakening, then they are not mutually exclusive.
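
A quick simulation of the standard protocol makes this visible; nothing here is an assumption beyond the usual setup:

```python
import random

# On Heads, Beauty wakes on Monday only; on Tails, on Monday AND Tuesday.
rng = random.Random(1)
runs = 100_000
heads_runs = 0
co_occurrences = 0  # runs containing both Tails&Monday and Tails&Tuesday

for _ in range(runs):
    if rng.random() < 0.5:  # Heads: a single Monday awakening
        heads_runs += 1
    else:                   # Tails: Monday and Tuesday awakenings together
        co_occurrences += 1

print(heads_runs / runs)      # ~0.5: per-experiment P(Heads)
print(co_occurrences / runs)  # ~0.5: Tails&Monday never happens without
                              # Tails&Tuesday in the same run, so the two
                              # cannot be mutually exclusive outcomes
```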

Comment by Ape in the coat on Lessons from Failed Attempts to Model Sleeping Beauty Problem · 2024-03-08T13:48:41.909Z · LW · GW

Yes, you are completely correct.

Frankly, it's a bit bizarre to me that the absolute majority of people do not notice it. That we still do not have a consensus. As if people mysteriously lose the ability to apply basic probability theory reasoning when talking about "anthropical problems".

Comment by Ape in the coat on The Solution to Sleeping Beauty · 2024-03-06T09:43:42.927Z · LW · GW

But who does the picking?

The math stays the same, regardless. That's the whole point.

The problem is that all branches exist, so objective statistics shows them always existing simultaneously.

I don't see how it's a problem. We deal with such cases all the time in probability theory. Suppose there are n students and n exam question sheets. Each sheet may include several questions, and some questions are asked more often than others. "Objective statistics" shows that all the sheets are spread among the students and all the questions are asked. And yet there is a meaningful way to say that a particular student has a specific probability of receiving a particular question on the exam.
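
A toy version with made-up numbers; the sheet contents and counts are assumptions for illustration only:

```python
import random

# 4 sheets for 4 students; question Q appears on 2 of the 4 sheets.
# Every sheet is handed out and every question is asked, yet a given
# student receives Q with a definite probability.
rng = random.Random(7)
sheets = [{"Q", "A"}, {"Q", "B"}, {"C", "D"}, {"E", "F"}]
trials = 100_000
student_got_q = 0

for _ in range(trials):
    dealt = sheets[:]
    rng.shuffle(dealt)   # distribute the sheets among the students
    if "Q" in dealt[0]:  # did student #0 get question Q?
        student_got_q += 1

print(student_got_q / trials)  # ~0.5 = 2 sheets out of 4
```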

On the fundamental level, they are.

I don't think I understand what you mean here. Can you elaborate? I'm talking about the difference in causal graphs.

if you are fine with approximations, then you can treat Elga's model as approximation too.

It works as an approximation, to some degree - no argument here. But what's the point in using an imperfect approximation when there is a better model?

Comment by Ape in the coat on In defense of anthropically updating EDT · 2024-03-06T09:23:57.261Z · LW · GW

Why?

Well, let's try a simple example. Suppose you have two competing theories of how to produce purple paint:

  1. Add red paint into the vial before the blue paint and then mix them together.
  2. Add blue paint into the vial before the red paint and then mix them together.

Both theories work in practice. And yet, they are incompatible with each other. Philosophers write papers about the conundrum, and soon two assumptions are coined: the red-first assumption (RFA) and the red-second assumption (RSA).

Now, you observe that there are compelling arguments in favor of both theories. Does that mean it's an argument in favor of RSA+RFA - adding red both the first and the second time? Even though the result is visibly not purple?

Of course not! It means that something is subtly wrong with both theories, namely that they assume that the order in which we add paint is relevant at all. What is required is that blue and red ingredients are accounted for and are present in the resulting mix.

Do you see the similarity between this example and the SIA+EDT case?