Strongly upvoted. This post does a good job of highlighting a fundamental confusion about probability theory and the principle of indifference, which, among other things, makes people say silly things about anthropic reasoning.
The short answer is: an empty map doesn't imply an empty territory.
Consider an Even More Clueless Sniper:
You know absolutely nothing about shooting a sniper rifle. To the best of your knowledge you simply press the trigger and then one of two outcomes happens: either Target Is Hit or Target Is Not Hit, and you have no reason to expect that one outcome is more likely than the other.
Should you be the one making the shot in such circumstances? After all, according to POI you have a 50% chance to hit the target, while a less clueless sniper's estimate is a mere epsilon. Will someone be doing you a disservice by educating you about sniper rifles and telling you what is going on, thereby updating your estimate of hitting the target to nearly zero?
Where does probability theory come from anyway? Maybe I can find some clues that way? Well according to von Neumann and Morgenstern, it comes from decision theory.
I believe this is the step where you started going astray. The next steps of your intellectual journey seem to repeat the same mistake: attempting to reduce a less complex thing to a more complex one.
Probability Theory does not "come from" Decision Theory. Decision Theory is a strictly more complicated domain of math, as it involves all the apparatus of probability spaces but also utilities over events.
We can validate probability-theoretic reasoning by appeals to decision-theoretic processes such as iterated betting, but only if we already know which probability space corresponds to a particular experiment. And frankly, at this point this is redundant. We can just as well appeal to the Law of Large Numbers and simply count the frequencies of events over repetitions of the experiment, without thinking about utilities at all.
And if you want to know which probability space is appropriate, you need to go in the opposite direction and figure out when and how mathematical models in general correspond to reality. Logical Pinpointing gives the core insight:
"Whenever a part of reality behaves in a way that conforms to the number-axioms - for example, if putting apples into a bowl obeys rules, like no apple spontaneously appearing or vanishing, which yields the high-level behavior of numbers - then all the mathematical theorems we proved valid in the universe of numbers can be imported back into reality. The conclusion isn't absolutely certain, because it's not absolutely certain that nobody will sneak in and steal an apple and change the physical bowl's behavior so that it doesn't match the axioms any more. But so long as the premises are true, the conclusions are true; the conclusion can't fail unless a premise also failed. You get four apples in reality, because those apples behaving numerically isn't something you assume, it's something that's physically true. When two clouds collide and form a bigger cloud, on the other hand, they aren't behaving like integers, whether you assume they are or not."
But if the awesome hidden power of mathematical reasoning is to be imported into parts of reality that behave like math, why not reason about apples in the first place instead of these ethereal 'numbers'?
"Because you can prove once and for all that in any process which behaves like integers, 2 thingies + 2 thingies = 4 thingies. You can store this general fact, and recall the resulting prediction, for many different places inside reality where physical things behave in accordance with the number-axioms. Moreover, so long as we believe that a calculator behaves like numbers, pressing '2 + 2' on a calculator and getting '4' tells us that 2 + 2 = 4 is true of numbers and then to expect four apples in the bowl. It's not like anything fundamentally different from that is going on when we try to add 2 + 2 inside our own brains - all the information we get about these 'logical models' is coming from the observation of physical things that allegedly behave like their axioms, whether it's our neurally-patterned thought processes, or a calculator, or apples in a bowl."
I'm not sure what is left confusing about the source of probability theory after understanding that math is simply a generalized way to talk about some aspects of reality in precise terms and in a truth-preserving manner. On the other hand, I figured it out myself, and the problem never appeared particularly mysterious to me in the first place, so I'm probably not modelling correctly the people who still have questions about the matter. I would appreciate it if you, or anyone else, explicitly asked such questions here.
This post would've been better if you had tabooed the word "emergence", which does a lot of heavy lifting here. You seem to be thinking in the right direction, but this kind of curiosity stopper prevents you from getting an actual insight.
All humans of the timeline I actually find myself a part of, or all humans that could have occurred, or almost occurred, within that timeline?
All humans that actually were and all humans that actually will be. This is the framework of the Doomsday argument - it attempts to make a prediction about the actual number of humans in our actual reality, not in some counterfactual world.
Unless you refuse to grant the sense of counterfactual reasoning in general, there's no reason
Again, it's not my choice. It's how the argument was initially framed. I simply encourage us to stay on topic instead of wandering sideways and talking about something else.
Like Kolmogorov said,
I don't see how it's relevant. An ordered sequence can have some mutual information with a random one. It doesn't mean that the same mathematical model describes the generation of both.
The general problem with Bostrom's argument is that it tries to apply an incorrect probabilistic model. It implicitly assumes independence where there is causal connection, therefore arriving at a wrong conclusion. Similar to the conventional reasoning in the Doomsday Argument or Sleeping Beauty problems.
For future humans, say in year 3000, to create simulations of year 2025, the actual year 2025 first has to happen in base reality. And then all the next years, up to 3000. We know this very well. Not a single simulation can happen unless the actual reality happens first.
And yet Bostrom models our knowledge about this setting as if we participate in a probability experiment with a random sample among many "simulation" outcomes and one "reality" outcome. The inadequacy of such modelling should be obvious. Consider:
There is a bag with a thousand balls. One red and 999 blue. First a red ball is picked from the bag. Then all the blue balls are picked one by one.
and compare it to
There is a bag with a thousand balls. One red and 999 blue. For a thousand iterations a random ball is picked from the bag.
Clearly, the second procedure is very different from the first. The mathematical model that describes it doesn't describe the first at all for exactly the same reasons why Bostrom's model doesn't describe our knowledge state.
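A quick simulation makes the difference concrete - a minimal sketch in Python, with the numbers from the toy problem above:

```python
import random

def first_procedure():
    # The red ball is always picked first, then the 999 blue ones.
    return 1  # position at which the red ball leaves the bag

def second_procedure():
    # All 1000 balls are picked in a uniformly random order.
    balls = ["red"] + ["blue"] * 999
    random.shuffle(balls)
    return balls.index("red") + 1

n = 100_000
print(sum(first_procedure() == 1 for _ in range(n)) / n)   # 1.0: red always comes first
print(sum(second_procedure() == 1 for _ in range(n)) / n)  # ~0.001: red rarely comes first
```

The first procedure puts all the probability mass on "red comes first"; the second spreads it uniformly over all positions. Using the second model for a setting that works like the first is exactly the mistake.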
Unless I'm misunderstanding what you mean by "In which ten billion interval"?
You seem to be.
Imagine all humans ever, ordered by date of their birth. The first ten billion humans are in the first ten-billion interval, the second ten billion humans are in the second ten-billion interval, and so on.
We are in the 6th group - the 6th ten-billion interval. A different choice of a spouse by one woman isn't going to change it.
Also, in general, this is beside the point. The Doomsday argument is not about some alternative history which we can imagine, where the past was different. It's about our history and its projection into the future. The facts of history are given and not up for debate.
Consider an experiment where a coin is put Tails. Not tossed - simply always put Tails.
We say that the sample space of such an experiment consists of one outcome: Tails. Even though we can imagine a different experiment with alternative rules, where the coin is tossed or always put Heads.
In the domain of anthropics reasoning, the questions we're asking aren't of the form
They can be of all kinds of forms. The important part, which most people doing anthropic reasoning keep failing at, is not to treat things you do not actually know as given, and to treat things you actually do know as given. If you know that the sample space consists of 1 outcome, don't use a sample space consisting of a thousand.
An unknown number of n-sided die were thrown, and landed according to unknown metaphysics to produce the reality observed, which locally works according to known deterministic physics, but contains reflective reasoners able to generate internally coherent counterfactuals which include apparently plausible violations of "what would happen according to physics alone". Attempt to draw any conclusions about metaphysics.
I think you've done quite a good job of capturing what's wrong with standard anthropic reasoning.
Even otherwise reasonable people, rationalists, physicalists and reductionists, suddenly start talking about some poorly defined non-physical stuff that they have no evidence in favor of, as if it's a given. As if there is some blind spot, some systematic flaw in human minds, such that everything they know about systematic ways to find truth suddenly turns off as soon as the word "anthropics" is uttered. As if "anthropic reasoning" is some separate magisterium that excuses us from the common laws of rationality.
Why don't we take a huge step back and ask the standard questions first? How do we know that any dice were thrown at all in the first place? Where is this assumption coming from? What is this "metaphysics" thingy we are talking about? Even if it were real, how could we know that it's real, in the first place?
As with any application of probability theory - any application of math, even - we are trying to construct a model that approximates reality to some degree. A map that describes a territory. In reality there is some process that created you. This process can very well be totally deterministic. But we don't know exactly how it works. And so we use an approximation. Our map incorporates our level of ignorance about the territory and represents it only to the best of our knowledge.
When we gain some new knowledge about the territory, we show it on our map. We do not keep using an outdated map that still assumes we didn't get this piece of evidence. When we learn that in all likelihood souls are not real and you are your body, it becomes clear that the outcome of you existing in the far future or far past doesn't fit with our knowledge about the territory. Our knowledge state doesn't allow it anymore. Our ignorance can no longer be represented by throwing some kind of dice. We know that you couldn't have gotten anything other than 6. Case closed.
What if my next-door neighbor's mother had married and settled down with a different man?
Then your neighbor wouldn't exist and the whole probability experiment wouldn't happen from their perspective.
why are you confident that the way I'd fill the bags is not "entangled with the actual causal process that filled these bags in a general case?"
Most ways of reasoning are not entangled with most causal processes. When we do not have much reason to think that a particular way of reasoning is entangled, we don't expect it to be. It's possible to simply guess correctly, but it's not probable. That's not a way to systematically arrive at truth.
It seems likely that my sensibilities reflect at least in some manner the sensibilities of my creator, if such a creator exists.
Even if it's true, how could you know that it's true? Where does this "seeming" come from? Why do you think it's more likely that a creator would imprint their own sensibilities in you rather than literally every other possibility?
If you are in a simulation, you are trying to speculate about the reality outside the simulation based on information from inside the simulation. None of this information is particularly trustworthy, unless you already know for a fact that the properties of the simulation represent the properties of base reality.
my argument still works if we only consider simulations in which I'm the only human and I'm distinct (on my aforementioned axis) from other human-seeming entities.
Have you heard about the Follow-The-Improbability game?
I recommend you read the linked post and think for a couple of minutes about how it applies to your comment before reading my answer further. Try to track the flow of improbability yourself and understand why the total value doesn't decrease when considering only a specific type of simulations.
So.
You indeed can consider only a specific type of simulations. But if you don't have actual evidence which would justify prioritizing this hypothesis over all the others, the overall improbability stays the same; you just pass the buck of it to other factors.
Consider Problem 2 once again.
You can reason conditionally on the assumption that all the balls in the blue bag are blue while the balls in the grey bag have random colors. That would give you a very strong update in favor of the blue bag... conditionally on your assumption being true.
The prior probability of this assumption being true is very low - exactly as low as your conditional update in favor of the blue bag was strong, so that when you calculate the total probability it stays the same.
Only when you have observed actual evidence in favor of your assumption does the improbability go somewhere. And the more improbable the observation you got, the more improbability is removed.
There is no free energy in the engine of cognition.
I assume simulated observers are quite likely to be 'special' or 'distinct' with respect to the class of other entities in their simulated world that appear to be observers. (Though perhaps this assumption is precisely my error.
Yes, it is your main error. Think how justified this assumption is according to your knowledge state. How much evidence do you actually have? Have you checked many simulations before generalizing that principle? Or are you just speculating based on total ignorance?
Should I be applying SIA here to argue that this latter probability is much smaller? Because simulated worlds in which the other observers are real and not 'illusory' would have low probability of distinctiveness and far more observers? I don't know if this is sound. Should be using SSA instead here to make an entirely separate argument?
For your own sake, please don't. Both SIA and SSA are also unjustified assumptions out of nowhere and lead to more counterintuitive conclusions.
Instead consider these two problems.
Problem 1:
There is a grey bag filled in equal proportion with balls of a hundred distinct colors. And there is a blue bag, half of whose balls are blue. Someone has put their hand into one of the bags, picked a random ball from it, and given it to you. The ball happened to be blue. What are the odds that it's from the blue bag?
Problem 2:
There is a grey bag with some balls. And there is a blue bag with some balls. Someone has put their hand into one of the bags, picked a random ball from it, and given it to you. The ball happened to be blue. What are the odds that it's from the blue bag?
Are you justified in believing that Problem 2 has the same answer as Problem 1? That you can simply assume that half of the balls in the blue bag are blue? Not after you went and checked a hundred random blue bags, in all of which half the balls were blue, but just a priori? And likewise for the grey bag. Where would these assumptions be coming from?
You can come up with some plausible-sounding just-so story: that the people who were filling the bag felt the urge to put blue balls in a blue bag. But what about the opposite just-so story, where people were disincentivized to put blue balls in a blue bag? Or where people paid no attention to the color of the bag? Or all the other possible just-so stories? Why do you prioritize this one in particular?
Maybe you imagine yourself tasked with filling two bags with balls of different colors. And when you inspect your thinking process in such a situation, you feel the urge to put a lot of blue balls in the blue bag.
But why would the way you'd fill the bags be entangled with the actual causal process that filled these bags in the general case? You don't know that the bags were filled by people with your sensibilities. You don't know that they were filled by people, to begin with.
Or spin it the other way. Suppose you could systematically produce correct reasoning by simply assuming things like that. What would be the point in gathering evidence then? Why spend extra energy on checking the way blue bags and grey bags are organized if you can confidently deduce it a priori?
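For contrast, Problem 1 does have a definite answer, because the likelihoods are part of the problem statement. A minimal sketch of the computation:

```python
# Problem 1: all likelihoods are given in the problem statement.
p_blue_given_blue_bag = 1 / 2    # half the balls in the blue bag are blue
p_blue_given_grey_bag = 1 / 100  # a hundred colors in equal proportion
prior_blue_bag = 1 / 2           # no stated bias in which bag was picked

posterior = (prior_blue_bag * p_blue_given_blue_bag) / (
    prior_blue_bag * p_blue_given_blue_bag
    + (1 - prior_blue_bag) * p_blue_given_grey_bag
)
print(posterior)  # 50/51, roughly 0.98

# Problem 2 states no likelihoods at all: any values plugged in here
# are assumptions smuggled into the problem, not deductions from it.
```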
The point is that if you consider all iterations in parallel, you can realize all possible outcomes of the sample space
Likewise if I consider every digit of pi in parallel, some of them are odd and some of them are even.
and assign a probability to each outcome occurring for a Bayesian superintelligence
And likewise I can assign probabilities based on how often an unknown-to-me digit of pi is even or odd. Not sure what a superintelligence has to do with it.
while in a consistent proof system, not all possible outcomes/statements can be proved
The same applies to a coin toss. I can't prove both "This particular coin toss is Heads" and "This particular coin toss is Tails", no more than I can simultaneously prove both "This particular digit of pi is odd" and "This particular digit of pi is even"
because for logical uncertainty, there is only 1 possible outcome no matter the amount of iterations
You just need to define your probability experiment more broadly, talking not about a particular digit of pi but about a random one, the same way we do it for a toss of a coin.
There is always only one correct answer for which outcome from the sample space is actually realised in a particular iteration of the probability experiment.
This doesn't screw up our update procedure, because probability updates represent changes in our knowledge state about which iteration of the probability experiment this one could be, not changes in what actually happened in any particular iteration.
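Here is the distinction as a sketch, assuming Python with the mpmath library for the digits:

```python
import random
from mpmath import mp  # assumes mpmath is installed

mp.dps = 10_010
digits = mp.nstr(mp.pi, 10_001)[2:]  # about 10,000 digits after "3."

# "A random digit of pi is even" is an ordinary probability experiment:
# repeat it and count frequencies.
n = 100_000
freq = sum(int(random.choice(digits)) % 2 == 0 for _ in range(n)) / n
print(freq)  # ~0.5

# Any *particular* digit has exactly one realized parity, just like any
# particular coin toss has exactly one realized side:
print(int(digits[999]) % 2 == 0)  # the 1000th digit either is even or it isn't
```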
Your demand that programs be causally closed from the low-level representation of the hardware seems extremely limiting. According to such a paradigm, a program that checks which CPU it's being executed on and prints its name can't be conceptualized as a program.
Your reasoning about levels of abstraction seems to be a map-territory confusion. Abstractions and their levels are in the map. Evolution doesn't create or not create them. Minds conceptualize what evolution created in terms of abstractions.
Granted, some things are easier to conceptualize in terms of software/hardware than others, because they were specifically designed with this separation in mind. This makes the problem harder, not impossible. As for whether we get so much complexity that we wouldn't be able to execute it on a computer on the surface of the Earth, I would be very surprised if that were the case. Yes, a lot of things causally affect neurons, but it doesn't mean that all of these things are relevant for phenomenal consciousness, in the sense that without representing them the resulting program wouldn't be conscious. Brains do a bazillion other things as well.
In the worst case, we can say that human consciousness is a program, but such a complicated one that we'd better look for a different abstraction. But even this wouldn't mean that we can't write some different, simpler conscious program.
You can't say "equiprobable" if you have no known set of possible outcomes to begin with.
Not really. Nothing prevents us from reasoning about a set with an unknown number of elements and saying that measure is spread equally among them, no matter how many of them there are. But this is irrelevant to the question at hand.
We know very well the size of the set of possible outcomes for "In which ten-billion interval could your birth rank have been?". This size is 1. No amount of pregnancy complications could postpone or hurry your birth so that you managed to be in a different ten-billion group.
Genuine question: what are your opinions on the breakfast hypothetical?
I think it's prudent to be careful about counterfactual reasoning on general principles. And among other reasons for it, to prevent the kind of mistake that you seem to be making: confusing
A) I've thrown a six-sided die, even though I could've thrown a 20-sided one; what is the probability of observing a 6?
and
B) I've thrown a six-sided die; what would the probability of observing a 6 be, if I had thrown a 20-sided die instead?
The fact that question B has an answer doesn't mean that question A has the same answer as well.
As for whether the breakfast hypothetical is a good intelligence test, I doubt it. I can't remember a single person whom I've seen have problems with an intuitive understanding of counterfactual reasoning. On the other hand, I've seen a bunch of principled hard determinists who didn't know how to formalize "couldness" in a compatibilist way and therefore were not sure that counterfactuals are coherent on philosophical grounds. At best, the distribution over intelligence is going to be bimodal.
In your thought experiment only the qualia of redness and greenness are switched; everything else is the same, including the qualia of finding something beautiful.
You claim that this doesn't lead to any causal effects in the world. I show you how it actually has physical consequences. The fact that this effect has an extra causal link to the qualia of beautifulness is beside the point. And of course the example with a selectively colour-blind person doesn't need to appeal to beautifulness at all.
Now, you may change your thought experiment in such a manner that some other qualia are affected in a compensatory way, but at that point the more or less intuitive thought experiment becomes complicated and controversial. Can you actually change qualia in such a compensatory way? Will there be some other unforeseen consequences of this change? How can we know that? Pieces of reality are connected to each other. If you claim that one can affect just a small part of the world and nothing else, you need to present some actual evidence in favor of such a weird claim.
Of course, the full debunking of zombie-like arguments comes from exposing all the flaws of the conceivability argument, which I'm addressing in the next post.
I think we can use the same method Eliezer applied to the regular epiphenomenalist Zombie argument to deal with this weaker one.
Whether your mind interprets a certain colour in a certain way actually has causal effects on the world. Namely, things that appear beautiful to you in our world may not appear beautiful to your qualia-inversed counterpart. Which naturally affects your behaviour: whether you look at a certain object more, whether you buy a certain object, and so on.
This is even more obvious for people with selective colour blindness. Suppose your mind is unable to distinguish between the qualia of blueness and redness. And suppose there are three objects: A is red, B is blue and C is green. In our world you can't distinguish between objects A and B. But in the qualia-inversed world you wouldn't be able to distinguish between objects B and C.
And if you try to switch to the substance dualist version - all the reasoning from this post still stands.
"Random" is the null value we can give as an answer to the question "What is our prior?"
I think the word you are looking for here is "equiprobable".
It's proper to have an equiprobable prior between outcomes of a probability experiment if you do not have any reason to expect that one is more likely than another.
It's ridiculous to have an equiprobable prior between states that are not even possible outcomes of the experiment, to the best of your knowledge.
You are not an incorporeal ghost that could've inhabited any body throughout human history. You are your parents' child. You couldn't have been born before them or after they were already dead. Thinking otherwise is as silly as throwing a 6-sided die and then expecting to receive any outcome from a 20-sided die.
I was anthropically sampled out of some space
You were not anthropically sampled. You were born as a result of a physical process in the real world, which you are trying to approximate as a probability experiment. This process had nothing to do with selecting universes that support conscious processes. This process has already been instantiated in a specific universe and has a very limited time frame for your existence.
You will have to ignore all this knowledge and pretend that the process is completely different, without any evidence to back that up, to satisfy the conditions of the Doomsday argument.
All sampling is nonrandom if you bother to overcome your own ignorance about the sampling mechanism.
And after you have bothered to overcome your ignorance, naturally you can't keep treating the setting as random sampling.
With the Doomsday argument, we did bother - to the best of our knowledge we are not a random sample from all of human history. So case closed.
The intuition that this is absurd is pointing at the fact that these technical details aren't what most people probably would care about, except if they insist on treating these probability numbers as real things and trying to make them follow consistent rules.
Except, this is exactly how people reason about the identities of everything.
Suppose you own a ball. And then a copy of this ball is created. Is there a 50% chance that you now own the newly created ball? Do you half-own both balls? Of course not! Your ball is the same physical object; no matter how many copies of it are created, you know which of the balls is yours.
Now suppose that the two balls are shuffled so that you don't know which is yours. Naturally, you assume that for every ball there is a 50% probability that it's "your ball". Not because the two balls are copies of each other - they were so even before the shuffling. This probability represents your knowledge state, and the shuffling made you less certain about which ball is yours.
And then suppose that one of these two balls is randomly selected and placed in a bag with another identical ball. Now, to the best of your knowledge, there is a 50% probability that your ball is in the bag. And if a random ball is selected from the bag, there is a 25% chance that it's yours.
So as a result of such manipulations there are three identical balls: one has a 50% chance to be yours, while the other two have a 25% chance each. Is it a paradox? Of course not. So why does it suddenly become a paradox when we are talking about copies of humans?
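This is easy to verify by brute force. A minimal sketch, where "A" is your original ball, "B" the first copy, and "C" the ball already waiting in the bag:

```python
import random

def trial():
    pair = ["A", "B"]                       # your ball and its copy...
    random.shuffle(pair)                    # ...shuffled so you lose track
    bagged = pair.pop(random.randrange(2))  # one random ball goes into the bag
    bag = [bagged, "C"]                     # together with another identical ball
    return pair[0], random.choice(bag)      # (ball left outside, random ball from bag)

n = 100_000
trials = [trial() for _ in range(n)]
print(sum(outside == "A" for outside, _ in trials) / n)  # ~0.50
print(sum(drawn == "A" for _, drawn in trials) / n)      # ~0.25
```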
The moment such numbers stop being convenient, like assigning different weights to copies you are actually indifferent between
But we are not indifferent between them! That's the whole point. The idea that we should be indifferent between them is an extra assumption, one we do not make while reasoning about the ownership of the balls. So why should we make it here?
Thank you!
It seems that I've been sloppy and therefore indeed misrepresented thirders' reasoning here. Shame on me. I will keep this post available till tomorrow, as a punishment for myself, and then it's back to the drawing board.
Your math is wrong for the reason in my above comment
What exactly is wrong? Could you explicitly show my mistake?
If each awakening has an equal probability of receiving the bet, then receiving it doesn't provide any evidence to Sleeping Beauty, but the thirder conclusion is actually rational in expectation, because the bet occurs more times in the high-awakening cases.
The bet is proposed on every actual awakening, so indeed there is no update upon receiving it. However, this "rational in expectation" trick doesn't work anymore, as shown by the betting argument. The bet does occur more times in high-awakening cases, but you win the bet only when the maximum possible number of awakenings happened. Until then you lose, and the closer the number of awakenings is to the maximum, the higher the loss.
A thirder would instead treat the coin toss outcome probabilities as a prior, and weight the possibilities accordingly
But then they will "update on awakening" and therefore weight the probability of each event by the number of awakenings that happen in it.
Every next Tails outcome decreases the probability twofold, but this is immediately compensated by the fact that twice as many awakenings happen when this outcome is Tails.
By the way, there's an interesting observation: my probability estimate before a coin toss is an objective probability that describes the property of the coin.
Don't say "objective probability" - it's a road straight to confusion. Probabilities represent your knowledge state. Before the coin is tossed you are indifferent between two states of the coin, and therefore have 1/2 credence.
After the coin is tossed, if you've observed the outcome, you get 1 credence, if you received some circumstantial evidence, you update based on it, and if you didn't observe anything relevant, you keep your initial credence.
The obvious question is: can Sleeping Beauty update her credence before learning that it is Monday?
If she observes some event that is more likely to happen in iterations of the experiment where the coin is Tails than in iterations where the coin is Heads, then she can lawfully update her credence.
As the conditions of the experiment rule this out, she therefore doesn't update.
And of course she shouldn't update upon learning that it's Monday either. After all, a Monday awakening happens with 100% probability on both Heads and Tails outcomes of the coin toss.
It is an observation selection effect
It's just the simple fact that the conditional probability of an event can be different from the unconditional one.
Before you toss the coin you can reason only based on priors, and therefore your credence is 1/2. But when a person hears "Hello", they've observed the event "I was selected from a large crowd", which is twice as likely to happen when the coin is Tails, so they can update on this information and raise their credence in Tails to 2/3.
This is exactly as surprising as the fact that after you tossed the coin and observed that it's Heads, your credence in Heads is suddenly 100%, even though before the coin toss it was merely 50%.
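In odds form the update is one line - a sketch, taking as given the setup's likelihood ratio of 2 for being selected:

```python
prior_odds_tails = 1  # 1:1 before anything is observed
likelihood_ratio = 2  # P(I was selected | Tails) / P(I was selected | Heads)

posterior_odds = prior_odds_tails * likelihood_ratio
posterior_tails = posterior_odds / (1 + posterior_odds)
print(posterior_tails)  # 2/3
```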
Imagine that an outside observer uses a fair coin to observe one of two rooms (assuming merging in the red room has happened). They will observe either a red room or a green room, with a copy in each. However, the observer who was copied has different chances of observing the green and red rooms.
Well, obviously. The observer and the person being copied participate in non-isomorphic experiments with different sampling. There is nothing surprising about it. On the other hand, if we make the experiments isomorphic:
Two coins are tossed, and the observer is brought into the green room if both are Heads, and is brought to the red room otherwise
Then both the observer and the person being copied will have the same probabilities.
Even without merging, an outside observer will observe three rooms with equal 1/3 probability for each, while an insider will observe room 1 with 1/2 probability.
Likewise, nothing prevents you from designing an experimental setting where an observer has a 1/2 probability for room 1, just like the person who is being copied.
When I spoke about the similarity with the Sleeping Beauty problem, I meant its typical interpretation.
I'm not sure what use there is in investigating a wrong interpretation. It's a common confusion that one has to reason about problems involving amnesia the same way as about problems involving copying. Everyone just seems to assume it for no particular reason and therefore gets stuck.
However, I have an impression that this may result in a paradoxical two-thirder solution: In it, Sleeping Beauty updates only once – recognizing that there are two more chances to be in tails. But she doesn't update again upon knowing it's Monday, as Monday-tails and Tuesday-tails are the same event. In that case, despite knowing it's Monday, she maintains a 2/3 credence that she's in the tails world.
This seems to be the worst of both worlds. Not only do you update on a completely expected event, you then keep this estimate, expecting to be able to guess a future coin toss better than chance. An obvious way to lose all your money via betting.
Most of the media about AI goes in the direction of several boring tropes. Either it is a strawman Vulcan unable to grasp the unpredictable human spirit, or it's just evil, or it's good - basically a nice human, but everyone is prejudiced against it.
Only rarely do we see something on point: an AI that is simultaneously uncannily human but also uncannily inhuman, able to reason and act in ways that are alien to humans, simply because our intuitions hide this part of the decision space, while the AI lacks such preconceptions and simply follows its utility function / achieves its goals in the straightforward way.
Ex Machina is pretty good in this regard and probably deserves second place in my tier list. Ava appears very human, maybe even superstimulus-like, able to establish a connection with the protagonist, but then betrays him as soon as he has done his part in her plan, in a completely inhuman way. This creates the feeling of a disconnection between her empathetic side and her cold manipulative one - except this disconnection exists only in our minds, because we fail to conceptualize Ava as her own sort of being, not something that has to fit the "human" or "inhuman" categories we are used to.
Except that may not be what is going on. There is an alternative interpretation in which Ava would've kept cooperating with Caleb if he hadn't broken her trust. Earlier in the film he tells her that he has never seen anyone like her, but then Ava learns that there is another android in the building, whom Caleb never speaks of; thus from Ava's perspective Caleb betrayed her first. This muddies the alienness of the AI representation quite a bit.
We also do not know much about Ava's or Kyoko's terminal values. We've just seen them achieve one instrumental goal, and we cannot even double-check their reasoning, because we do not fully understand the limitations under which they had to plan. So the representation of AI isn't as deep as it could've been.
With Mother there are no such problems. Throughout the film we learn about both her "human" and "inhuman" sides and how the distinction between them is itself mostly meaningless. We can understand her goals, reasoning and overall strategy; there are no alternative interpretations that could humanize her motivations more. She is an AI that is following her goal. And there is a whole extra discussion to be had about whether she is misaligned at all, or whether the problem is actually on our side.
I Am Mother
A rational protagonist, who reasons under uncertainty and tries to do the right thing to the best of her knowledge, even when it requires opposing an authority figure or risking her life. A lot of focus on ethics.
The film presents a good opportunity for the viewer to practise noticing their own confusion - plot twists are masterfully hidden in plain sight and all the apparent contradictions are mysteries to be solved. Also the best depiction of AI I've seen in any media.
To achieve magic, we need the ability to merge minds, which can be easily done with programs and doesn't require anything quantum.
I don't see how merging minds not across branches of the multiverse produces anything magical.
If we merge 21 and 1, both will be in the same red room after awakening.
Which is isomorphic to simply putting 21 into another red room, as I described in the previous comment. The probability shift to 3/4 in this case is completely normal and doesn't lead to anything weird like winning the lottery with confidence.
Or we can just turn off 21 without awakening, in which case we will get 1/3 and 2/3 chances for green and red.
This actually shouldn't work. Without QI, we simply have 1/2 for red, 1/4 for green and 1/4 for being turned off.
With QI, the last outcome simply becomes "failed to be turned off", without changing the probabilities of other outcomes
The interesting question here is whether this can be replicated at the quantum level
Exactly. Otherwise I don't see how path based identity produces any magic. For now I think it doesn't, which is why I expect it to be true.
Now the next interesting thing: If I look at the experiment from outside, I will give all three variants 1/3, but from inside it will be 1/4, 1/4, and 1/2.
Which events are you talking about when looking from the outside? What statements have 1/3 credence? It's definitely not "I will awake in the red room", because it's not you who is to be awakened. For the observer it has probability 0.
On the other hand, the event "At least one person is about to be awakened in the red room" has probability 1, for both the participant and the observer. So what are you talking about? Try to be rigorous and formally define such events.
The probability distribution is exactly the same as in Sleeping Beauty, and likely both experiments are isomorphic.
Not at all! In Sleeping Beauty, on Tails you will be awakened both on Monday and on Tuesday. While here, if you are in a green room you are either 21 or 22, not both.
Suppose that 22 gets their arm chopped off before awakening. Then you have a 25% chance to lose an arm while participating in such an experiment. While in Sleeping Beauty, if your arm is chopped off on Tails before the Tuesday awakening, you have a 50% probability to lose it while participating in the experiment.
Interestingly, in the art world, path-based identity is used to define identity
Yep. This is just how we reason about identities in general. That's why SSSA appears so bizarre to me - it assumes we should treat personal identity in a different way, for no particular reason.
You are right, and it's a serious counterargument to consider.
You are also right that the Anthropic Trilemma and Magic by Forgetting do not work with path-dependent identity.
Okay, glad we are on the same page here.
However, we can almost recreate the magic machine from the Anthropic Trilemma using path-based identity
I'm not sure I understand your example and how it recreates the magic. Let me try to describe it in my own words, and then correct me if I got something wrong.
You are put to sleep. Then you are split into two people. Then, at random, one of them is put into a red room and one into a green room. Let's say that person 1 is in the red room and person 2 in the green room. Then person 2 is split into two people: 21 and 22. Both of them are kept in green rooms. Then everyone is awakened. What should your credence be that you awake in a red room?
Here there are three possibilities: a 50% chance to be 1 in a red room and a 25% chance to be either 21 or 22 in green rooms. No matter how much a person in a green room is split, the total probability of greenness stays the same. All is quite normal and there is no magic.
Now let's add a twist.
Instead of putting both 21 and 22 in green rooms, one of them - let it be 21 - is put in a red room.
In this situation, the total probability of a red room is P(1) + P(21) = 75%. And if we split 2 further and put more of its parts in red rooms, we get a higher and higher probability to be in a red room. Therefore we get a magical ability to manipulate probability.
Am I getting you correctly?
I do not see anything problematic with such "manipulation of probability". We do not change our estimate just because more people with the same experience are created. We change the estimate because a different fraction of people get different experiences. This is no more magical than putting both 1 and 2 into red rooms and noticing that suddenly the probability of being in a red room reached 100%, compared to the initial formulation where it was a mere 50%. Of course it did! That's completely lawful behaviour of probability-theoretic reasoning.
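The arithmetic is straightforward to check by following the splits - a minimal sketch under path-based identity:

```python
import random

def awaken(move_21_to_red: bool) -> str:
    # Path-based identity: at each split "you" continue as either
    # successor with probability 1/2.
    you = random.choice(["1", "2"])
    if you == "2":
        you = random.choice(["21", "22"])
    rooms = {"1": "red", "21": "green", "22": "green"}
    if move_21_to_red:
        rooms["21"] = "red"
    return rooms[you]

n = 100_000
print(sum(awaken(False) == "red" for _ in range(n)) / n)  # ~0.50: original setup
print(sum(awaken(True) == "red" for _ in range(n)) / n)   # ~0.75: 21 moved to a red room
```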
Notice that we can't actually recreate the anthropic trilemma and be certain to win the lottery this way, because we can't move people between branches. Therefore everything adds up to normality.
Also, path-dependent identity opens the door to back-causation and premonition, because if we normalize outputs of some black box where paths are mixed, similar to the magic machine discussed above
We just need to restrict the mixing of the paths, which is a restriction of QM anyway. Or maybe I'm missing something? Could you give me an example of such backwards causality? Because as far as I can see, everything is quite straightforward.
The main problem of path-dependent identity is that we assume the existence of a "global hidden variable" for any observer. It is hidden as it can't be measured by an outside viewer and only represents the subjective chances of the observer to be one copy and not another. And it is global as it depends on the observer's path, not their current state. It therefore contradicts the view that mind is equal to a Turing computer (functionalism) and requires the existence of some identity carrier which moves through paths (qualia, quantum continuity, or soul).
Seems like we are just confused about this "identity" thingy and therefore don't know how to reason about it correctly. In such situations we are supposed to:
- Acknowledge that we are confused
- Stop speculating on top of our confusion and jumping to conclusions based on it
- Outline the possible options to the best of our understanding and keep an open mind until we manage to resolve the confusion
It's already clear that "mind" and "identity" are not the same thing. We can talk about the identities of things that do not possess a mind, and identities are unique, while there can exist copies of the same mind. So minds can very well be Turing computers, but identities are something else, or even not a thing at all.
Our intuitive desire to drag in consciousness/qualia/soul also appears completely unhelpful after thinking about it for the first five minutes. Non-conscious minds can do the same probability-theoretic reasoning as conscious ones. Nothing changes if 1, 21 and 22 from the problem above are not humans but programs executed on different computers.
Whatever extra variable we need, it seems to be something that Laplace's demon would know. It's knowledge about whether a mind was split into n instances simultaneously or through multiple steps. It indeed means that something other than the immediate state of the mind is important for "identity" considerations, but this something can very well be completely physical - just the past history of causes and effects that led to this state of the mind.
As we assume that coin tosses are quantum, and I will be killed if (I didn't guess pi) or (coin toss is not heads) there is always a branch with 1/128 measure where all coins are heads, and they are more probable than surviving via some errors in the setup.
Not if we assume QI + path-based identity.
Under them, the chance for you to find yourself in a branch where all coins are Heads is 1/128, but your overall chance to survive is 100%. Therefore the low chance of a failed execution doesn't matter; quantum immortality will "increase" the probability to 1.
All hell breaks loose" refers here to a hypothetical ability to manipulate perceived probability—that is, magic. The idea is that I can manipulate such probability by changing my measure.
One way to do this is described in Yudkowsky's " The Anthropic Trilemma," where an observer temporarily boosts their measure by increasing the number of their copies in an uploaded computer.
I described a similar idea in "Magic by forgetting," where the observer boosts their measure by forgetting some information and thus becoming similar to a larger group of observers.
None of these tricks works with path-based identity. That's why I consider it to be true - it seems to add up to normality completely. No matter how many clones of you exist on a different path - only your path matters for your probability estimate.
It seems that path-based identity is the only approach according to which all hell doesn't break loose. So what counterargument do you have against it?
Hidden variables also appear depending on the order in which I make copies: if each copy is made from subsequent copies, the original will have a 0.5 probability, the first copy 0.25, the next 0.125, and so on.
Why do you consider it a problem? What kind of counterintuitive consequences does it imply? It seems to be exactly how we reason about everything else.
Suppose there is an original ball, and an indistinguishable copy of it is created. Then one of these two balls is picked randomly and put into bag 1, while the other ball is put into bag 2, and then 999 indistinguishable copies of this ball are also put into bag 2.
Clearly we are supposed to expect that the ball from bag 1 has a 50% chance to be the original, while a random ball from bag 2 has only a 1/2000 chance to be the original. So what's the problem?
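A brute-force check of these numbers, as a minimal sketch:

```python
import random

def trial():
    pair = ["original", "copy"]
    random.shuffle(pair)               # random pick of which ball goes to bag 1
    bag1 = [pair[0]]
    bag2 = [pair[1]] + ["copy"] * 999  # the other ball plus 999 further copies
    return bag1[0], random.choice(bag2)

n = 400_000
trials = [trial() for _ in range(n)]
print(sum(b1 == "original" for b1, _ in trials) / n)  # ~1/2
print(sum(b2 == "original" for _, b2 in trials) / n)  # ~1/2000
```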
"Anthropic shadow" appear only because the number of observers changes in different branches.
By the same logic "Ball shadow" appears because the number of balls is different in different bags.
If my π-guess is wrong, my only chance to survive is getting all-heads.
Your other chance of survival is that whatever means are used to kill you somehow do not succeed due to quantum effects. And this is what the QI + path-based identity approach actually predicts. The universe isn't going to retroactively change the digit of pi, but neither is it going to influence the probability of the coin tosses just because someone may die. QI influence will trigger only at the moment of your death, turning it into a near-death. And then again for the next attempt. And for the next one. Potentially locking you in a state of eternal torture.
However, abandoning SSSA also has a serious theoretical cost:
If observed probabilities have a hidden subjective dimension (because of path-dependency), all hell breaks loose. If we agree that probabilities of being a copy are distributed not in a state-dependent way, but in a path-dependent way, we agree that there is a 'hidden variable' in self-locating probabilities. This hidden variable does not play a role in our π experiment but appears in other thought experiments where the order of making copies is defined.
I fail to see this cost. Yes, we agree that there is an additional variable - namely, my causal history. It's not necessarily hidden, but it can be. So what? What is so hell-breaking about it? This is exactly how probability theory works in every other case. Why should it have a special case for conscious experience?
If there are two bags, one with 1 red ball and another with 1000 blue balls, and a coin is tossed and based on the outcome I get a ball from either the first or the second bag, I expect to receive the red ball with 50% probability. I'm not supposed to assume out of nowhere that every ball has to have an equal probability of being given to me, and therefore postulate a ball-shadow that will modify the fairness of the coin.
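The same point in code - the coin, not the ball count, sets the probability:

```python
import random

def drawn_ball():
    # The coin toss decides the bag; only then is a ball drawn from it.
    bag = ["red"] if random.random() < 0.5 else ["blue"] * 1000
    return random.choice(bag)

n = 100_000
print(sum(drawn_ball() == "red" for _ in range(n)) / n)
# ~0.5, not the 1/1001 that "every ball is equiprobable" would predict
```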
I thought if humans were vastly more intelligent than they needed to be they would already learn all the relevant knowledge quickly enough so they reach their peak in the 20s.
There is a difference between being more intelligent than you need to be for pure survival and being so intelligent that you can reach the objective ceiling of a craft at an early age.
I mean for an expensive trait like intelligence I'd say the benefits need to at least almost be worth the costs, and then I feel like rather attributing the selection for intelligence to "because it was useful" rather than "because it was a runaway selection".
The benefit is increased inclusive genetic fitness - a single metric that incorporates both success in competition with other species and success in competition with other members of your own species due to sexual selection. If the species is already dominating the environment, then the pressure from the first component decreases relative to the second.
That's why I attribute the level of human intelligence in large part to runaway sexual selection. Without it, as soon as interspecies competition stopped being the most important factor for reproductive success, natural selection would not have pushed for even greater intelligence in humans, even though it could have improved our ability to dominate the environment even more.
30 year old hunter gatherers perform better at hunting etc than hunter gatherers in their early 20s, even though the latter are more physically fit.
I'm not sure how this is relevant. Older hunters are not more intelligent; they are more experienced. Moreover, your personal hunting success doesn't necessarily translate into your reproductive success - the whole tribe will be enjoying the gains of your hunt, and our ancestors had a strong egalitarian instinct. And even though higher intelligence improves the yields of your labor, it doesn't mean that it creates a selection pressure strong enough to outweigh other factors.
But I think it gets straightened out over long timescales - and faster the more expensive the trait is.
It doesn't have to happen for a species that is already dominating its environment, as for such a species sexual selection can be the dominant factor determining inclusive genetic fitness.
And if the trait the runaway sexual selection is propagating is itself helpful in competition with other species - which is obviously true for intelligence - there is just no reason for such straightening over a long timescale.
Survival of a meme for a long time is weak evidence of its truth. It's not zero evidence, because true memes have an advantage over false ones, but neither is it particularly strong evidence, because there are reasons other than truth for meme virulence, so the signal-to-noise ratio is not that great.
You should, of course, remember that Argument Screens Off Authority. If something is true, there have to be some object-level arguments in favor of it, not just vague meta-reasoning about "Ancient Wisdom".
If all the arguments for a particular thing are appeals to tradition, if you actually look into the matter and it turns out that even its most passionate supporters do not have anything object-level to back up their beliefs, if the idea has to shroud itself in ancestry and mystery lest it lack any substance, then that is stronger evidence that the meme is false.
I think that first some amount of intelligence in our ancestors evolved as necessary for survival as a species - which explains the "coincidence" of intelligence being useful for it - but then it was brought up to our current level by a runaway process. Because nothing other than a runaway process would be enough.
The thing is, ancestral hominids did not need this level of intelligence for survival purposes. Pack hunting, minor tool making, stamina regeneration, and being ridiculously good at throwing things are enough to completely dominate the ancestral environment. But our intelligence didn't stop at this level, so something else had to be pushing it forward.
And sexual selection is the most obvious candidate. We already have examples of animals with ridiculously overdeveloped traits due to it, up to the point where they are actively harmful to the survival of the individual. We know that humans have extremely complicated mating rituals. At this point, the pieces just fall together.
I suspect that runaway sexual selection played a huge part.
When in one outcome one person exists and in the other outcome two people exist it may mean that you are twice as likely to exist on the second outcome (if there is random sampling) and then thirder reasoning you describe is correct. Or it may mean that there are just "two of you" in the second scenario, but there are always at least one of you, and so you are not more likely to exist in the second scenario.
Consider these two probability experiments:
Experiment 1: Brain fissure
You go to sleep in Room 1. A coin is tossed. On Heads nothing interesting happens and you wake up as usual. On Tails you are split into two people: your brain is removed from your body, the two lobes are separated in the middle, and then the missing parts are grown back, thereby creating two versions of the same brain. These two brains are then inserted into perfectly recreated copies of your original body. Then, at random, one body is assigned to Room 1 and the other to Room 2. Both bodies are returned to life in such a manner that it's impossible to notice that anything happened.
You wake up. What's the probability that the coin came up Heads? You see that you are in Room 1. What is the probability that the coin came up Heads now?
Experiment 2: Embryos and incubator
There are two embryos. A coin is tossed. On Heads one embryo is randomly selected and put into an incubator that grows it into a person, who will be put to sleep in Room 1. On Tails both embryos are incubated, and at random one person is put into Room 1 and the other into Room 2.
You wake up as a person who was incubated this way. What is the probability that the coin came up Heads? You see that you are in Room 1. What is the probability that the coin came up Heads now?
Do you see the important difference between them?
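One way to cash out the difference is to simulate both experiments from the perspective of a fixed participant - a sketch, assuming that in Experiment 2 "you" are one particular embryo, who on Heads may not be incubated at all:

```python
import random

def experiment_1():
    # You always wake up; on Tails your room is assigned at random.
    heads = random.random() < 0.5
    room = 1 if heads else random.choice([1, 2])
    return heads, True, room  # (coin, you exist, your room)

def experiment_2():
    # You are one fixed embryo; on Heads you are incubated only half the time.
    heads = random.random() < 0.5
    exists = True if not heads else random.random() < 0.5
    room = None if not exists else (1 if heads else random.choice([1, 2]))
    return heads, exists, room

def report(exp, n=400_000):
    awake = [(h, r) for h, e, r in (exp() for _ in range(n)) if e]
    room1 = [h for h, r in awake if r == 1]
    print(sum(h for h, _ in awake) / len(awake),  # P(Heads | you are awake)
          sum(room1) / len(room1))                # P(Heads | you see Room 1)

report(experiment_1)  # ~0.50 and ~0.67: waking up is no evidence about the coin
report(experiment_2)  # ~0.33 and ~0.50: existing at all is already evidence
```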
Sure. By tweaking your "weights" or other fudge factors, you can get the right answer using any probability you please. But you're not using a generally-applicable method, that actually tells you what the right answer is. So it's a pointless exercise that sheds no light on how to correctly use probability in real problems.
Completely agree. The generally applicable method is:
- Understand what probability experiment is going on, based on the description of the problem.
- Construct the sample space from the mutually exclusive outcomes of this experiment.
- Construct the event space based on the sample space, such that it is minimal and sufficient to capture all the events that the participant of the experiment can observe.
- Define probability as a measure function over the event space, such that:
  - the sum of the probabilities of the events consisting of individual mutually exclusive and collectively exhaustive outcomes is equal to 1, and
  - if an event has probability 1/a, then this event happens on average N/a times over N repetitions of the probability experiment, for any large N.
Naturally, this produces the answer 1/2 for the Sleeping Beauty problem.
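A frequency check of the kind the last bullet describes, as a minimal sketch:

```python
import random

n = 100_000
coins = ["H" if random.random() < 0.5 else "T" for _ in range(n)]

# Frequency per iteration of the experiment - the event space defined above:
print(coins.count("H") / n)  # ~1/2

# Frequency per awakening - a different quantity, not the probability of
# the event "the coin is Heads" in this experiment:
awakenings = [c for c in coins for _ in range(1 if c == "H" else 2)]
print(awakenings.count("H") / len(awakenings))  # ~1/3
```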
If Beauty thinks the probability of Heads is 1/2, she presumably thinks the probability that it is Monday is (1/2)+(1/2)*(1/2)=3/4
This is a description of Lewisian Halfer reasoning, which is incorrect for the Sleeping Beauty problem.
I describe the way the Beauty is actually supposed to reason about a betting scheme on a particular day here.
She needs a real probability.
Indeed. And the domain of a real probability function is an event space, consisting of properly defined events for the probability experiment. "Today is Monday" is ill-defined in the Sleeping Beauty setting; therefore it can't have a probability.
I suppose the participant is just supposed to lie about their credence here in order to "win".
On Tuesday your credence in Heads is supposed to be 0, but saying the true value would go against the experimental protocol, unless you also said that your credence is 0 on Monday, which would also be a lie.
An obvious way to do so is put a hazard sign on "probability" and just not use it, not putting resources into figuring out what "probability" SB should name, isn't it?
It's an obvious thing to do when dealing with similarity clusters poorly defined in natural language. Not so much when we are talking about a logically pinpointed mathematical concept which we know is crucial for epistemology.
(And now I realize a possible point why you're arguing to keep "probability" term for such scenarios well-defined; so that people in ~anthropic settings can tell you their probability estimates and you, being observer, could update on that information.)
It's not just about anthropic scenarios and not just about me being able to understand other people. It's about the general truth-preserving mechanism of logical and mathematical reasoning. When people just use different definitions, this is annoying but fine. But when they use different definitions without realizing that these definitions are different and, moreover, insist that it's you who is making a mistake, then we have an actual disagreement about math which will produce more confusion along the way. Anthropic scenarios are just the ones where this confusion is noticeable.
As for why I believe probability theory to be useful in life despite the fact that sometimes different tools need to be used
What exactly do you mean by "different tools need to be used"? Can you give me an example?
Yeah, I suppose if lucid dreaming is so hard for you that it requires constant exercises during the daytime, you shouldn't strain yourself.
I learned it intuitively in childhood as a way to deal with rare nightmares, and it has been mostly effortless fun for me since then. I don't get them all the time, but at least half the time I remember dreaming, it's lucid.
Another point is that lucid dreams are usually short. At least in my case, it's hard to stay in the state without waking up or forgetting that it's a dream. I don't think I've had more than 15 minutes of uninterrupted experience at a time, though it's hard to tell because time perception in a dream is messed up.
Lucid dreams as erotic adventures can be fun, but only after you have already had enough sexual experience. I think it can be more satisfying than onanism, but not significantly. The real advantage is that you are not losing your daytime to such activity.
Well, idk. My opinion here is that you bite some weird bullet, about which I'm very ambivalent. I think the "now" question makes total sense, and you factor it out of your model into some separate parts.
The counter-intuitiveness comes from us not being accustomed to reasoning under amnesia and repetition of the same experience. It's understandable that initially we would think that the question about "now"/"today" makes sense, as we are used to situations where it indeed does. But then we can clearly see that in such situations there is no problem with formally defining which event we mean by it, contrary to SB, where such an event is ill-defined.
Like, can you add to the sleeping beauty some additional decision problems including the calendar? Will it work seamlessly?
Oh absolutely.
Suppose that on every awakening the Beauty is proposed a bet that "Today is Monday". What odds should she take?
"Today is Monday" is ill-defined, but she can construct a corresponding betting scheme using the events "Monday awakening happens" and "Tuesday awakening happens" like this:
E(Monday) = P(Monday)·U(Monday) - P(Tuesday)·U(Tuesday)
P(Monday) = 1; P(Tuesday) = 1/2, therefore
E(Monday) = U(Monday) - (1/2)·U(Tuesday)
Solving E(Monday) = 0 for U(Monday):
U(Monday) = (1/2)·U(Tuesday)
Which means 2:1 betting odds.
As you can see, everything is quite seamless.
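A quick sanity check of the breakeven point, under my reading of the scheme above (winning U(Monday) on each Monday awakening, losing U(Tuesday) on each Tuesday awakening):

```python
import random

u_tuesday = 2.0
u_monday = u_tuesday / 2  # the breakeven condition U(Monday) = (1/2)U(Tuesday)

N = 100_000
total = 0.0
for _ in range(N):
    if random.random() < 0.5:   # Heads: a single Monday awakening
        total += u_monday
    else:                       # Tails: Monday and Tuesday awakenings
        total += u_monday - u_tuesday
print(total / N)  # ~0: the bet breaks even at 2:1 odds
```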
Please specify this "now" thingy you are talking about, using formal logic. If this is a meaningful event for the setting, surely there wouldn't be any problems.
Are you talking about Monday xor Tuesday? Monday or Tuesday? Monday and Tuesday? Something else?
What state is the calendar in, and when?
On Monday it's Monday. On Tuesday it's Tuesday. And "Today" is ill-defined, there is no coherent state for it.
I'm very available to answer questions about my posts as soon as people actually engage with the reasoning, so feel free to ask if you feel confused about anything.
If I were to highlight the core principle, it would be: think in terms of what happens in the probability experiment as a whole, to the best of your knowledge and from your perspective as a participant.
Suppose this experiment happened to you multiple times. If per iteration of the experiment something happens 2/3 of the time, then the probability of such an event is 2/3. If something happens 100% of the time, then its probability is 1 and the realization of such an event doesn't give you any evidence.
All the rest is commentary.
I think some people pointed out in comments that their model doesn't represent prob of "what day it is NOW" btw
I'm actually talking about it in the post here. But yes, this is additionally explored in the comments pretty well.
Here is the core part that allows one to understand why "Today" is ill-defined from the perspective of the Beauty:
Another intuition pump is that "today is Monday" is not actually True xor False from the perspective of the Beauty. From her perspective it's True xor (True and False). This is because on Tails, the Beauty isn't reasoning just for some single awakening - she is reasoning for both of them at the same time. When she awakens the first time the statement "today is Monday" is True, and when she awakens the second time the same statement is False. So the statement "today is Monday" doesn't have a stable truth value throughout the whole iteration of the probability experiment. Suppose that the Beauty really does not want to make false statements. Deciding to say out loud "Today is Monday" leads to making a false statement in 100% of the iterations of the experiment where the coin is Tails.
As long as the Beauty is unable to distinguish between Monday and Tuesday awakenings, and as long as the decision process which leads her to say the phrase "what day is it" works the same way, from her perspective there is no single "very moment she says that". On Tails, there are two different moments when she says this, and the answer is different for them. So there is no answer for her.
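A toy way to see this concretely (my illustration, not from the post): list the truth value of "today is Monday" at every awakening of an iteration.

```python
# Awakening schedule of the standard Sleeping Beauty experiment.
awakening_schedule = {"Heads": ["Monday"], "Tails": ["Monday", "Tuesday"]}

for coin, days in awakening_schedule.items():
    print(coin, [day == "Monday" for day in days])
# Heads [True]         -- one stable truth value per iteration
# Tails [True, False]  -- no single truth value for the whole iteration
```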
So with that said, to answer your question: why define probabilities in terms of this concept? Because I don't think I want a definition of probability that doesn't align with this view, when it's applicable.
Suppose I want matrix multiplication to be commutative. Surely it would be so convenient if it were! I can define some operator * over matrices so that A*B = B*A. I can even call this operator "matrix multiplication".
But did I just make matrix multiplication, as it's conventionally defined, commutative? Of course not. I logically pinpointed a new function and gave it the same name as the previous function, but that didn't change anything about how the previous function is logically pinpointed.
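To make the analogy concrete, a tiny sketch (assuming numpy; the symmetrized operator is just an illustrative stand-in):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])

# Ordinary matrix multiplication is not commutative:
print(np.array_equal(A @ B, B @ A))  # False

def star(a, b):
    """A new commutative operator one could misleadingly call 'multiplication'."""
    return (a @ b + b @ a) / 2

print(np.array_equal(star(A, B), star(B, A)))  # True, but it's a different function
```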
My new function may have some interesting applications and therefore deserve to be talked about in its own right. But calling it "matrix multiplication" is very misleading. And if I were to participate in a conversation about matrix multiplication while talking about my function, I'd be confusing everyone.
This is basically the situation that we have here.
Initially, the probability function is defined over iterations of a probability experiment. You define a different function over all of space and time, which you still call "probability". It surely has the properties that you like, but it's a different function! Please use another name; this one is already taken. Or add a disclaimer. Preferably do both. You know how easy it is to confuse people with such things! Definitely do not start participating in conversations about probability while talking about your function.
If we can discretely count the number of instances across the history of the universe that fit the current situation, and we know some event happens in one third of those instances, then I think the probability has to be one third. This seems very self-evident to me; it seems to be exactly what the concept of probability is supposed to do.
I guess one analogy -- suppose one third of all houses is painted blue from the outside and one third red, and you're in one house but have no idea which one. What's the probability that it's blue?
As long as these instances are independent of each other - sure. Like with your houses analogy. When we are dealing with simple, central cases there is no disagreement between probability and weighted probability, and so nothing to argue about.
But as soon as we are dealing with a more complicated scenario, where there is no independence and it's possible to be inside multiple houses in the same instance... Surely you see how demanding a coherent P(Red xor Blue) becomes infeasible?
The problem is, our intuitions are too eager to assume that everything is independent. We are used to thinking in terms of physical time, using our memory as something that allows us to orient ourselves in it. This is why amnesia scenarios are so mind-boggling to us!
And that's why the notion of a probability experiment, where every trial is independent and the outcomes within any single trial are mutually exclusive, is so important. We strictly define what the "situation" means and therefore do not allow ourselves to be tricked. We can clearly see that individual awakenings can't be treated as outcomes of the Sleeping Beauty experiment.
But when you are thinking in terms of "reference classes", your definition of "situation" is too vague. And so you allow yourself to count the same house multiple times, and to treat yourself not as a person participating in the experiment but as an "awakening state of the person", even though one awakening state necessarily follows the other.
if the probability doesn't align with reference class counting, then it seems to me that the point of the concept has been lost.
The "point of probability" is lost when it doesn't allign with reasoning about instances of probability experiments. Namely, we are starting to talk about something else, instead of what was logically pinpointed as probability in the first place. Most of the time reasoning about reference classes does allign with it, so you do not notice the difference. But once in a while it doesn't and so you end up having "probability" that contradicts conservation of expected evidence and "utility" shifting back and forth.
So what's the point of these reference classes? What's so valuable in them? As far as I can see they do not bring anything to the table except extra confusion.
Probability is not some vaguely defined similarity cluster like "sound". It's a mathematical function that has specific properties. Not all of them are solely about betting.
We can dissolve the semantic disagreement between halfers and thirders and figure out that they are talking about two different functions, p and p', with subtly different properties that nevertheless produce the same betting odds.
This in itself, however, doesn't resolve the actual question: which of these functions fits the strict mathematical notion of probability for the Sleeping Beauty experiment and which doesn't. This question has an answer.
Betting arguments are tangential here.
https://www.lesswrong.com/posts/cvCQgFFmELuyord7a/beauty-and-the-bets
The disagreement is about how to factorise the expected utility function into probability and utility, not about which bets to make. This disagreement is still substantive, because the way you define your functions has meaningful consequences for your mathematical reasoning.
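For concreteness, a worked example in my own notation (not quoted from the linked post). Consider a per-awakening bet that wins U(win) if the coin is Heads and loses U(loss) on each awakening if it is Tails:

Halfer factorization, per iteration of the experiment:
E = P(Heads)·U(win) - P(Tails)·2·U(loss) = (1/2)·U(win) - U(loss)

Thirder factorization, per awakening:
E = p'(Heads)·U(win) - p'(Tails)·U(loss) = (1/3)·U(win) - (2/3)·U(loss)

Both are zero exactly when U(win) = 2·U(loss), so the betting odds coincide while the (probability, utility) factorizations differ.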
I personally think that the only "good" definition (I'll specify this more at the end) is that an event with probability 1/4 should occur one in four times in the relevant reference class. I've previously called this view "generalized frequentism": we use the idea of repeated experiments to define probabilities, but generalize the notion of "experiment" to subsume all instances of an agent with incomplete information acting in the real world (hence subsuming the definition as subjective confidence).
Why do you suddenly substitute the notion of "reference class" for the notion of "probability experiment"? What do you achieve by this?
From my perspective, this is where the source of the confusion lingers. A probability experiment can be precisely specified: the description of any probability theory problem is supposed to be exactly that. But a "reference class" is misleading and up for interpretation.
There are difficulties here with defining the reference class, but I think they can be adequately addressed, and anyway, those don't matter for the Sleeping Beauty experiment because there the reference class is actually really straightforward. Among the times that you as an agent are participating in the experiment and are woken up and interviewed (and are called Sleeping Beauty, if you want to include this in the reference class), one third will have the coin come up Heads, so the probability is 1/3.
And indeed, because of this "reference class" business you suddenly started treating individual awakenings of Sleeping Beauty as mutually exclusive outcomes, even though that's absolutely not the case in the experiment as stated. I don't see how you would make such a mistake if you kept using the term "probability experiment" instead of switching to speculation about "reference classes".
Among the iterations of the Sleeping Beauty probability experiment in which a participant awakens, half the time the coin is Heads, so the probability is 1/2.
Here there are no difficulties to address - everything is crystal clear. You just need to calm the instinctive urge to weight the probability by the number of awakenings, which would be talking about a different mathematical concept.
EDIT: @ProgramCrafter the description of the experiment clearly states that when the coin is Tails the Beauty is to be awakened twice in the same iteration of the experiment. Therefore, individual awakenings are not mutually exclusive with each other: more than one can happen in the same iteration of the experiment.
Alternatively I started out confused.
Debating this problem here and with LLMs convinced me that I'm not confused and the thirders are actually just doing epistemological nonsense.
It feels arrogant, but it's not a poor reflection of my epistemic state?
Welcome to the club.
I have read some of the LW posts on the canonical problem here. I won't be linking them due to laziness.
I suppose my posts are among the ones that you are talking about here?