SIA > SSA, part 1: Learning from the fact that you exist

post by Joe Carlsmith (joekc) · 2021-10-01T05:43:25.247Z · LW · GW · 16 comments

Contents

  I. Surprised I Am and ASS-backwards
  II. Storytelling
  III. Can’t we just use the minimal reference class?
  IV. Better and worse ways to understand SIA (or: how to actually stop using reference classes)
16 comments

(Cross-posted from Hands and Cities)

This post is the first in a four-part sequence explaining why I think that one prominent approach to anthropic reasoning is better than another. Consider:

God’s extreme coin toss: You wake up alone in a white room. There’s a message written on the wall: “I, God, tossed a fair coin. If it came up heads, I created one person in a room like this. If it came up tails, I created a million people, also in rooms like this.” What should your credence be that the coin landed heads?

The approach I like better — the “Self Indication Assumption” (SIA) — says: ~one in a million. SIA thinks you’re more likely to exist in worlds with more people in your epistemic situation. Here, this is the tails-world by far.

The approach I like worse — the “Self-Sampling Assumption” (SSA) — says: one half. SSA thinks you’re more likely to exist in worlds where the people in your epistemic situation are a larger fraction of the people in your “reference class.” Don’t ask me what a reference class is, but in this case, let’s assume that the people in your epistemic situation are 100% of it either way. So SSA sticks with the one half prior. 

I open with this case because it’s one of the worst for SIA, the approach I favor. In particular, we can construct scientific analogs, in which SIA becomes ludicrously confident in a given cosmology, simply in virtue of that cosmology positing more people in our epistemic situation. For many, this implication (known as the “Presumptuous Philosopher”) is a ~decisive objection to SIA. 

But I think that the objections to SSA are stronger, and that in the absence of an alternative approach superior to both SSA and SIA (“Anthropic Theory X”), the Presumptuous Philosopher is a bullet we should consider biting. 

I proceed as follows. The first part of the sequence (“Learning from the fact that you exist”) describes SIA and SSA. In particular, I emphasize that pace some presentations in the literature, SIA should not be seen as an additional assumption you add to SSA [LW · GW] — one that “cancels out” SSA’s bad implications, but accepts SSA’s worldview. Rather, SIA is a different (and more attractive) picture altogether.

The second part (“Telekinesis, reference classes, and other scandals [LW · GW]”) lays out the bulk of my case against SSA. In particular, SSA implies: 

The third part (“An aside on betting in anthropics [LW · GW]”) briefly discusses betting in anthropics. In particular: why it’s so gnarly, why I’m not focusing on it, and why I don’t think it’s the only desideratum. 

The fourth part (“In defense of the presumptuous philosopher [LW · GW]”) discusses prominent objections to SIA in more detail. In particular: 

That said, even if SSA is worse than SIA, it’s not like SIA is sitting pretty (I especially don’t like how it breaks in infinite cases, and there are presumably many other objections I’m not considering). I briefly discuss whether we should expect to find a better alternative — the “Anthropic Theory X” above [LW · GW]. My current answer is: maybe (and maybe it’s already out there), but Anthropic Theory X should probably keep SIA’s good implications (like “thirding” in Sleeping Beauty). And the good implications seem closely tied to (some of) the bad. 

I close by quickly mentioning some of SIA’s possible implications in the real world [LW · GW] (for example, re: doomsday arguments). I think we should tread carefully, here, but stay curious.

Acknowledgments: This sequence owes a huge amount to discussion with Katja Grace, and to her work on anthropics (see summary here, her honors thesis here, and the many links throughout the sequence). My thanks, as well, to Amanda Askell, Nick Beckstead, Paul Christiano, Tom Davidson, Carl Shulman, Bastian Stern, and Ben Weinstein-Raun for discussion. 

I. Surprised I Am and ASS-backwards

Cases like God’s extreme coin toss involve reasoning about hypotheses that specify both an objective world (e.g., a heads world with one person, or a tails world with a million), and a “location” of the “self” within that world (e.g., in the tails world, the “self” could be the person in the first room, the second room, etc). Call hypotheses of this form “centered worlds.” The question is how to assign probabilities both to objective worlds and centered worlds, granted (a) some prior over objective worlds, (b) knowledge that you exist, and (c) your other knowledge about your situation. I’ll call this broad topic “anthropics,” though others might define the term differently.

A classic reference here is Bostrom (2002), which I’ll be focusing on a lot — it’s where I’ve spent most of my time. I’m going to be disagreeing with Bostrom quite a bit in this sequence, but I want to say up front that I think his book is great, and that it clarifies a lot of stuff. In fact, this whole sequence is very much “living in the world that Bostrom built,” and a lot of the points I’m going to make are made by Bostrom himself — it’s just that I’m making them with much more of a “this is why Bostrom’s view is untenable” flavor. 

SIA and SSA are two prominent approaches to anthropic reasoning (Bostrom favors a version of SSA, and dismisses SIA in a few short pages). Unfortunately, the names and standard glosses of these principles seem almost optimized for obscurity, and for many years, casual exposure left me unable to consistently remember which was which, or what they really meant. Katja Grace once suggested to me that partisans of SIA remember it as “Surprised I Am” (e.g., the view that updates on your own existence) and SSA as “ASS-backwards” (e.g., the bad view). Another option would be to rename them entirely, but I won’t attempt that here. For those familiar with the Sleeping Beauty problem, though, you can think of SIA as “thirding,” and SSA as “halfing” — at least to a first approximation.

(Note: Bostrom presents SIA as an assumption you can add to SSA, yielding “SSA + SIA.” This formulation ends up equivalent to my own, but I think it’s worse, and I explain why in section IV [LW · GW]. For now, I’ll treat them as distinct and competing theories.)

How do SIA and SSA approach cases like God’s extreme coin toss? Quantitatively: SIA updates the “prior” in proportion to the number of people in your epistemic situation in each objective world. SSA updates it in proportion to the fraction of the people in your reference class, in that world, who are in your epistemic situation. Then they both apportion their new credence on each objective world equally amongst the centered worlds (e.g., the hypotheses about “who you are”) compatible with that objective world (e.g., among the people in that world you might be).

To see how this works, consider the following case: 

God’s coin toss with equal numbers: God tosses a fair coin, and creates ten people in white rooms either way. If heads, he gives one person a red jacket, and the rest, blue jackets. If tails, he gives everyone red jackets. You wake up and see that you have a red jacket. What should your credence be on heads?

Here, both SSA and SIA give the same verdict, but for different reasons. SIA reasons: “Well, my prior is 1:1. But on tails, there are 10x the number of people in my epistemic situation — e.g., red-jacketed people. So I update 10:1 in favor of tails. So, 1/11th on heads.” 

SSA, by contrast, reasons: “Well, my prior is 1:1. But on heads, the people in my epistemic situation are a smaller fraction of the reference class. In particular, on heads, the red-jacketed people are 1/10, but on tails, they’re 10/10, assuming that we don’t include God (note from Joe: this is the type of “assuming X about the reference class” that you have to say all the time if you’re SSA). Thus, I update the prior 10:1 in favor of tails. So, 1/11th on heads.”

Having made this update about the objective world, SIA and SSA then both think of themselves as 1/11th likely to be each of the red-jacketed people.
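For concreteness, here’s a minimal sketch of the arithmetic both views are doing in this case (the code, function names, and dictionary layout are just my illustration, not anything canonical from the anthropics literature):

```python
# Sketch of the SIA and SSA updates for "God's coin toss with equal numbers".
# n = number of people in your epistemic situation (red jackets) in each world;
# r = number of people in your reference class in each world.

def sia_posterior(prior, n):
    """SIA: scale each world's prior by n, then renormalize."""
    weights = {w: prior[w] * n[w] for w in prior}
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}

def ssa_posterior(prior, n, r):
    """SSA: scale each world's prior by n/r (the fraction of the reference
    class that is in your epistemic situation), then renormalize."""
    weights = {w: prior[w] * n[w] / r[w] for w in prior}
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}

prior = {"heads": 0.5, "tails": 0.5}
red_jackets = {"heads": 1, "tails": 10}   # people in your epistemic situation
everyone = {"heads": 10, "tails": 10}     # assumed reference class: all ten people

print(sia_posterior(prior, red_jackets))            # heads: 1/11, tails: 10/11
print(ssa_posterior(prior, red_jackets, everyone))  # heads: 1/11, tails: 10/11
```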

This case is useful to keep in mind, because it’s a kind of “square one” for anthropics. In particular, it helps answer the question: “Wait, why are we updating the prior at all? Why play this game to begin with?” A key answer is: if you don’t update the prior, and instead skip straight to apportioning your prior credence amongst the red-jacketed people in each world, you say silly things about this case. Thus, you reason: “Well, 50% on heads. So 50% that I’m the one red-jacketed heads-world person. And 50% on tails, so 5%, for each of the tails-world people, that I’m them.” But notice: you’ve failed to learn the right thing from your red jacket. In particular, you’ve failed to learn that the coin probably landed tails. 

To illustrate why you need to learn this, suppose you haven’t yet seen your jacket. Then, surely, you should be 50-50, and split your credence equally amongst all the people in each world. Then suppose you see that your jacket is red. This observation was much more likely conditional on tails rather than heads. Thus, it seems like basic Bayesianism to update. 

(Bostrom actually ends up endorsing a version of SSA that fails to make this update — but that’s not to its credit. I discuss this in part 2, section VIII [LW · GW].)

II. Storytelling

SIA and SSA both get this “square one” right; but they differ in their verdicts in other cases. Before getting to those cases, though, can we say anything about what SIA and SSA are doing on a qualitative level? What is the “story” or “conception of the world” motivating these theories, and their differences? 

It’s actually pretty unclear in both cases. But, here’s a shot at story-telling, which will hopefully illustrate how I, at least, tend to think about these views.

SIA treats you as a specific possible person-in-your-epistemic-situation, who might or might not have existed, even conditional on there being someone in that situation. And it thinks of worlds as “pulling” some number of people-in-your-epistemic-situation from the “hat” of the platonic realm. That is, and put fancifully: before you were created with a red jacket in a white room, God said to himself “I need to create X number of people with red jackets in white rooms.” He then reached into the platonic realm and groped around blindly in the area labeled “people with red jackets in white rooms.” You were there, in your red jacket, huddled together with some untold number of other red-jacketed souls (a number large enough, indeed, that God can draw as many people as he wants out, without altering the probability that he draws you). And yet, by the most (infinitely?) ridiculous luck, God’s great fingers wrapped around your ghostly non-body. You got pulled, as the other red-jacketed souls looked on in awe and horror and jealousy and relief. Thus, you found yourself alive. It was, indeed, quite a lottery-win. But importantly, it was more likely in worlds where God reached in more times. Or at least, that’s the idea. (Notably, if the space of red-jacketed-white-roomed-people is infinite, then the probability that you get pulled by a finite world is zero, however finitely-many the pulls. And yes, SIA does imply certainty that you live in an infinite world. And yes, this is indeed a problem. See discussion in part 4, section XIV [LW · GW].)

To be clear: I don’t especially like this story. And we can look for others, perhaps less exotic. Thus, for example, we can also think of SIA as treating you as a random sample from the people-in-your-epistemic-situation who might exist, weighted by the probability that they do exist. I discuss this conception more in part 4, section XV [LW · GW]. However, I think it may run into instabilities, so I tend to stick with the story above.

Let’s turn to SSA’s story. Or at least, SSA’s story as I tend to tell it. It’s not a neutral rendition.

Like SIA, SSA learns something from the fact that you exist. In particular, SSA learns that you would’ve necessarily existed in any world that you can’t currently rule out — e.g., any world with anyone in your epistemic situation. That is, granted that you do exist, SSA assumes that if God were going to create any world compatible with your current evidence, then He would have “gone looking for you” in the hat of possible people, then “inserted you” into that world — regardless of how many people it contains. He was, apparently, hell-bent on creating you, come what may, in all of the worlds you haven’t yet figured out don’t contain you — after all, you exist. It’s a strange sort of relationship you have, you and God.

(Here I think the SSA-er says: “no, it’s not like that. Rather, it’s that given that I exist, then if any of those other worlds are real, then it’s the case that I exist in those worlds. So I am licensed, in reasoning about which possible worlds are actual, in assuming that I get created in all of them.” I discuss the dialectic here in a bit more detail in part 2, section X [LW · GW].)

Importantly, though, on SSA, when God creates you and inserts you into the world, he does so in a particular way: namely, he makes you a random member of some “reference class” other than the people in your epistemic situation. What sort of reference class? No one knows. It’s entirely made up. (I’ll return to this problem later.) Still, on SSA, that’s how God operates: he picks some set of people who in some sense “you could have been” — even though for some of them, you often know you aren’t — and then makes one of them, at random, you.

Bostrom is at pains to emphasize that SSA doesn’t involve positing any actual physical mechanism — akin to a time-traveling stork — for randomly distributing souls across members of the reference class. Rather, SSA is just a way of assigning credences. That said, we might wonder what would make such a way of assigning credences track the truth, absent such a mechanism — and I don’t remember Bostrom offering an account. We can ask similar questions about SIA, though, and the “hat of possible people” story I offered above isn’t exactly an “oh of course no problems with that one.”

To see where the reference class bit of SSA starts to make an important difference, consider this variation on God’s coin toss with equal numbers:

God’s coin toss with chimpanzees: God tosses a fair coin. If heads, he creates one person in a white room, and nine chimpanzees in the jungle. If tails, he creates ten people in white rooms. You wake up in a white room. What should your credence be on heads?

Here, SIA reasons as it did in the original case, when people in blue jackets were in the role of the chimps. Thus, and using the language of the “story” above: “On tails, there are 10x the number of people in my epistemic situation, and so 10x the number of ‘draws’ from the hat of the platonic realm, and so 10x the chance of drawing me. Thus, I update 10:1 in favor of tails: 1/11th on heads.”

SSA, though, to its great discredit, gives different answers depending on whether you count chimpanzees in the jungle as in your reference class or not. Thus, and using the language of the story above, it reasons: “Well, I know I exist, and I can’t yet rule out heads or tails. So, regardless of whether the coin landed heads vs. tails, I was going to exist. (This is where SIA says: what? That’s wrong.) What’s more, if heads, then I was randomly inserted into a reference class of nine chimps in the jungle, and one human in a white room. Thus, on heads, it would have been only 10% likely that I find myself in my epistemic situation; I would have expected to be a chimp instead. By contrast, on tails, I was randomly inserted into a reference class consisting entirely of humans in white rooms, so it would have been 100% likely that I find myself in my epistemic situation. So I update 10:1 in favor of tails: 1/11th on heads.”

By contrast, if SSA doesn’t count chimps in the jungle as in your reference class, then it reasons as before: “It’s 100%, on either heads or tails, that I’d find myself a human in a white room, so I don’t update at all: 50%.” Thus, whether you “could have been a chimp,” in the sense relevant to the reference class, ends up a crucial question. And the same will be true, in other cases, of whether you could have been a bacterium, an ant, a genetically engineered post-human, a brain emulation, a nano-bot, a paperclipping AI, a grabby alien, and so on. Indeed, as I’ll discuss below in the context of the “Doomsday Argument,” on SSA, the very future of humanity plausibly hinges on such questions.

(Note that the “could have” here need not be the “could” of metaphysical possibility. But somehow, on SSA, the reference class needs to be such as to license surprise, conditional on heads and chimps-in-the-reference-class, that you find yourself a human — and if you “couldn’t have been a chimpanzee,” it’s unclear why you’d be surprised that you’re not one. Regardless, I’ll continue to use “could have been a chimpanzee” in whatever sense is required to justify such surprise — I’m happy for the sense to be minimal.)
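Here’s the same sort of sketch as before (again, just my own illustration) for the chimpanzee case, showing how SSA’s verdict swings with the reference class while SIA’s doesn’t:

```python
# Chimpanzee case: prior 1:1; on heads, 1 human in a white room (plus 9 chimps);
# on tails, 10 humans in white rooms. n = humans-in-white-rooms in each world.
prior = {"heads": 0.5, "tails": 0.5}
n = {"heads": 1, "tails": 10}

def normalize(weights):
    total = sum(weights.values())
    return {w: round(v / total, 3) for w, v in weights.items()}

# SIA: scale by n. The chimps never enter the calculation.
print(normalize({w: prior[w] * n[w] for w in prior}))
# -> {'heads': 0.091, 'tails': 0.909}

# SSA with chimps in the reference class: r = 10 either way, so scale by n/r.
r_with_chimps = {"heads": 10, "tails": 10}
print(normalize({w: prior[w] * n[w] / r_with_chimps[w] for w in prior}))
# -> {'heads': 0.091, 'tails': 0.909}

# SSA with chimps excluded: r = 1 on heads, 10 on tails, so n/r = 1 in both worlds.
r_without_chimps = {"heads": 1, "tails": 10}
print(normalize({w: prior[w] * n[w] / r_without_chimps[w] for w in prior}))
# -> {'heads': 0.5, 'tails': 0.5}
```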

This chimp case may be the earliest and simplest result where I basically just get off the boat with SSA. I take one look at those chimps, and the question of whether they’re in the reference class, and I feel like: “I’m out.” But I don’t necessarily expect others to feel the same way, and there’s much more to say on either side regardless.

III. Can’t we just use the minimal reference class?

Perhaps you’re wondering, for example: can’t SSA just use the simple and attractive reference class of “people in my epistemic situation” (call this the “minimal” reference class)? No, it can’t, because then it loses the ability to update on the number of people in your epistemic situation at all, since the percentage of observers in your reference class who are in your epistemic situation will always be 100%. Thus, with a red jacket in God’s coin toss with equal numbers above, it ends up at 50% on heads, and 50% on tails — even though on heads, only one person out of ten had a red jacket, but on tails, everyone did. In this sense, it starts reasoning like the “heads is always 50% no matter what I’ve learned about my jacket color” person  — and it falls afoul of basic Bayesianism in the same way. 
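In the sketch-style from earlier (again, my own illustration), the minimal reference class makes SSA’s update vanish entirely:

```python
# SSA with the minimal reference class: r = n in every world you haven't ruled
# out, so n/r = 1 and the prior never moves -- even after seeing a red jacket.
prior = {"heads": 0.5, "tails": 0.5}
n = {"heads": 1, "tails": 10}   # red-jacketed people in each world
r = n                           # minimal reference class: exactly the people in your epistemic situation

weights = {w: prior[w] * n[w] / r[w] for w in prior}
total = sum(weights.values())
print({w: v / total for w, v in weights.items()})   # {'heads': 0.5, 'tails': 0.5}
```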

Indeed, a central problem motivating Bostrom is that he thinks that if you can’t make updates like favoring tails in cases like God’s coin toss with equal numbers, then you can’t do science given the possibility of “big worlds” — that is, worlds where, for any given observation, there is some observer (for example, a Boltzmann brain) who makes it, even if it is false. In comparing big world hypotheses, Bostrom thinks, we need to be able to favor the worlds in which a larger fraction of observers in the relevant reference class make the observation in question — but the minimal reference class makes this impossible. That said, I haven’t thought very much about Bostrom’s “science in big worlds” considerations, and I don’t think the argument against SSA-with-the-minimal-reference-class hinges on them. Regardless of the situation with Boltzmann brains, we should have the resources to favor tails in the “square one” case.

Note how elegantly SIA gets around this problem. SIA honors the “minimal reference class” intuition that what matters here is people in your epistemic situation, and that focusing attention elsewhere is arbitrary. But those people don’t need to be some “fraction” of some larger (and hence more arbitrary) set, in order for their numbers given tails vs. heads to provide information. Rather, the bare fact that there are more people in your epistemic situation given tails vs. heads is enough.

SSA, though, seems stuck with some sort of non-minimal reference class. Exactly how non-minimal is a further question — one that I’ll return to in part 2, section VII [LW · GW]. 

IV. Better and worse ways to understand SIA (or: how to actually stop using reference classes)

I want to pause here to distinguish between the version of SIA I just presented, and a version often presented in the literature — a version I consider less attractive, even though extensionally equivalent.

A bit of notation will be helpful. Let’s call n the number of people in your epistemic situation, in a given objective world. And let’s call r the number of people in your reference class, in that world. As I presented it, SIA updates the prior over objective worlds in proportion to n. SSA updates it in proportion to n/r.

Now consider a different theory, which I’ll call “Reference-class-SIA” (or R-SIA) and which corresponds more closely to one type of presentation in the literature. Like SSA, R-SIA thinks of you as a member of some reference class. But it also thinks that you are more likely to exist if more members of your reference class exist. That is, it imagines that God populates the reference class with souls, by pulling them out of the possible-people-in-that-reference-class hat, then throwing them randomly into the bodies of reference class people. And since you are in that hat, more people in the reference class means more chances for you to get pulled. Thus, unlike SIA as presented above, which scales the prior in proportion to n, R-SIA scales the prior in proportion to r.

If you combine R-SIA with SSA, you get SIA as I presented it above. That is, if you first scale in proportion to r, and then in proportion to n/r, the r cancels out, and n is the only thing that matters. Thus, tacking R-SIA onto SSA is sometimes said to “eliminate” the problematic dependence on the reference class that SSA otherwise implies: whatever reference class you choose, you get the same answer. And it is also said to “exactly cancel” some of SSA’s other counterintuitive implications, like the doomsday argument (discussed below). The image, here, is of what I’ll call an “inflate-and-claw-back” dynamic: that is, first you inflate your credence on worlds with many people in your reference class, via R-SIA, and then you claw it back in proportion to the fraction of those people who are in your epistemic situation, via SSA. And after doing this extravagant dance, you’re left with good ol’ n (SIA).
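Here’s a quick sketch of that cancellation (my own illustration; the reference-class numbers are arbitrary):

```python
# R-SIA + SSA: scale the prior by r, then by n/r. The r's cancel, so the
# result is plain SIA (scale by n), whatever reference class you pick.
prior = {"heads": 0.5, "tails": 0.5}
n = {"heads": 1, "tails": 10}   # people in your epistemic situation

def normalize(weights):
    total = sum(weights.values())
    return {w: round(v / total, 3) for w, v in weights.items()}

plain_sia = normalize({w: prior[w] * n[w] for w in prior})

for r in [{"heads": 10, "tails": 10}, {"heads": 37, "tails": 1000}]:  # arbitrary reference classes
    inflate_and_claw_back = normalize({w: prior[w] * r[w] * (n[w] / r[w]) for w in prior})
    assert inflate_and_claw_back == plain_sia

print(plain_sia)   # {'heads': 0.091, 'tails': 0.909} either way
```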

But I think this framing undersells SIA’s appeal. The appeal of SIA with respect to reference classes isn’t that “you can pick whatever reference class you want.” It’s that you don’t have to think in terms of made-up reference classes at all. Rather, you can just think entirely in terms of “people in your epistemic situation” — that is, in terms of n. Somehow, R-SIA + SSA feels to me like it’s ceding too much ground to SSA’s narrative. It’s living too much in SSA’s weird, reference-classes-are-somehow-a-fundamental-thing-even-though-no-one-has-any-account-of-them world. It’s trying to patch SSA with some extra band-aid, rather than rejecting it entirely.

Similarly, the appeal of SIA with respect to SSA’s counterintuitive implications isn’t that it adds just the right additional ridiculous update to counteract SSA’s other ridiculous update. It’s not that SIA lunges a million miles left, to cleverly (luckily? trickily?) balance out SSA’s lunging a million miles right. Rather, the appeal is that (at least in doomsday-like cases) SIA doesn’t lunge at all. It just stays put, at home, where you always wanted to be. In this sense, SIA as I presented it above feels to me “simpler” than R-SIA + SSA — and I think the simple version captures better what reasoning using SIA actually feels like.

What’s more, thinking in terms of R-SIA leads people to attach slogans to SIA that it doesn’t strictly imply in practice. In particular, my sense is that people think of SIA as the view that favors worlds with more “observers” — and if you’re using R-SIA with the reference class “observers,” this is indeed a natural gloss. But SIA as I presented it above doesn’t actually care about observers per se (and neither does R-SIA, once you tack on SSA as well). Rather, it only cares about observers in your epistemic situation. You can try to sell me on a hypothesis that contains a zillion extra observers wearing blue jackets; but if I am wearing a red jacket, then on SIA, this feature of the hypothesis leaves me cold (though if it implies something about the number of red-jacketed people as well, or the number of people who could, for all I know, have been given red jackets, that’s a different story). The same holds for bug-eyed aliens, chimps in the jungle, paper-clipping superintelligences, civilizations like our own on planets we can tell that we’re not on, and all the rest of the cosmic zoo. SIA doesn’t like observers; it likes uncertainty about “who/where I am.” And we already know lots of stuff about ourselves.

That said, this consideration only goes so far. In particular, if you don’t know anything about yourself except that you’re an observer, then SIA does indeed like observers per se; and if you “forget” everything about yourself, then on SIA your credence in lots-of-observers-per-se worlds does indeed inflate. And more generally, the number of observers-per-se may correlate strongly with the number of observers in your epistemic situation, and/or the ones that could, for all you know, be in your epistemic situation, and hence be you (added 10/2: I say a bit more about the distinction between "people in your epistemic situation" and "people who, for all you know about a given objective world, might be in your epistemic situation" here [EA · GW]). Ultimately, though, it’s the people-you-could-actually-be that SIA is really after.

Leaving R-SIA + SSA to the side, then, I’ll focus on comparing SIA and SSA. Which theory is better?

Note that I say “better,” not “true” or “best.” These aren’t the only approaches to anthropics, and given the various weird implications and uncertainties I’m about to discuss, it seems plausible that the true/best theory (is there a “true theory” of what your credence “should be”?) lies elsewhere (see discussion in part 4, section XVI [LW · GW]). Indeed, there’s a whole literature on anthropics out there, which I haven’t attempted to survey. Rather, I’m sticking to a comparison between two basic, prominent, first-pass views.

Indeed, really I’d prefer to ask a narrower question about these views. Not “which is better?”, but “which is better mostly in light of the considerations discussed in Bostrom (2002), plus a few other considerations that Joe encountered while writing this blog post?”. That is, I’m not, here, really attempting to exhaustively canvass all the relevant arguments and counterarguments (though I’m interested, readers, to hear which of the arguments I don’t include you find most persuasive). Rather, I’m trying to report my (admittedly sometimes strong) inclinations after looking into the topic a bit and thinking about it. 

All that said: SIA currently seems better to me. Part 2 [LW · GW] and Part 4 [LW · GW] of this sequence explain why. (Part 3 [LW · GW] is a bit of an interlude.)

(Next post in this sequence: SIA > SSA, part 2: Telekinesis, reference classes, and other scandals [LW · GW])

16 comments

Comments sorted by top scores.

comment by justinpombrio · 2021-10-01T19:38:37.911Z · LW(p) · GW(p)

Isn't the conclusion to the Sleeping Beauty problem that there are two different but equally valid ways of applying probability theory to the problem; that natural language and even formal notation make it very easy to gloss over the difference; and that which one you should use depends on exactly what question you mean to ask? Would those same lessons apply to SIA vs. SSA?

In Sleeping Beauty, IIRC the distinction is between "per-experiment" probabilities and "per-observation" probabilities. My interpretation of these was to distinguish between the question "what's the probability that the coin came up heads" (a physical event that happened exactly once, when the coin landed on the table) from "what's the probability that Beauty will witness the coin being heads" (an event in Beauty's brain that will occur once or twice). The former having probability 1/2 and the latter having probability 1/3. Though it might be a bit more subtle than that.

For SSA vs. SIA, who do you want to be right most often? Do you want a person chosen uniformly at random from among all people in all possible universes to be right most often? If so, use SIA. Or do you want to maximize average-rightness-per-universe? If so, use SSA, or something like it, I'm not exactly clear.

Let's be concrete, and look at the "heads: 1 person in a white room and 9 chimps in a jungle; tails: 10 people in a white room" situation.

If God says "I want you to guess whether the coin landed heads or tails. I will exterminate everyone who guesses wrong.", then you should guess tails because that saves the most people in expectation. But if God says "I want to see how good the people of this universe are at reasoning. Guess whether the coin landed heads or tails. If most people in your universe guess correctly, then your universe will be rewarded with the birth of a single happy child. Oh and also the coin wasn't perfectly fair; it landed heads with probability 51%.", then you should guess heads because that maximizes the chance that the child is born.

I'm not sure that's all exactly right. But the point I'm trying to make is, are we sure that "the probability that you're in the universe with 1 person in the white room" has an unambiguous answer?

Replies from: Zach Stein-Perlman
comment by Zach Stein-Perlman · 2021-10-03T05:00:52.382Z · LW(p) · GW(p)

I agree with all of this (and I admire its clarity). In addition, I believe that the SIA-formulated questions are generally the important ones, for roughly the reason that the consequences of our choices are generally more like "value is proportional to correct actions" than "value is proportional to fraction of actions correct" (across all observers subjectively indistinguishable from me). (Our choices seem to be local in some sense; their effects are independent of the choices of our faraway subjectively-indistinguishable counterparts, and their effects seem to scale with our numbers. Perhaps some formalization of "bigger universes matter more" is equivalent.)

I'm not sure about this, but perhaps with some kind of locality assumption, the intuitive sense of probability as something like odds at which I'm indifferent to bet (under certain idealizations) reduces to SIA probability, whereas SSA probability would correspond to something like odds at which I'm indifferent to bet if the value from winning is proportional to the fraction rather than the number of correct bets. Again, SSA is in conflict with "bigger universes matter more"; assuming locality, this is particularly disturbing since it roughly means that the value of a choice is inversely proportional to the number of similarly-situated choosers.

comment by evhub · 2021-10-01T06:14:35.445Z · LW(p) · GW(p)

I thought this was a great post, and am excited about reading more, but having just skimmed through the rest of the posts in the sequence, I was disappointed to find no discussion of UDASSA [LW · GW], which I personally find to be by far the most compelling theory of anthropics. Is there a reason for that? What do you think about UDASSA as an alternative to either SSA or SIA?

Replies from: joekc
comment by Joe Carlsmith (joekc) · 2021-10-01T09:49:44.256Z · LW(p) · GW(p)

Glad you liked it :). I haven’t spent much time engaging with UDASSA — or with a lot of other non-SIA/SSA anthropic theories — at this point, but UDASSA in particular is on my list to understand better. Here I wanted to start with the first-pass basics.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-10-01T18:34:29.503Z · LW(p) · GW(p)

While I don't expect my opinion to have much weight, I strongly second considering UDASSA over SIA/SSA. I've spent a lot of time thinking about anthropics and came up with something very similar to UDASSA, without knowing that UDASSA existed.

comment by Charlie Steiner · 2021-10-02T20:23:53.240Z · LW(p) · GW(p)

My picture of anthropics is that it's not like you start as a bodiless ghost and then wake up and find out what species you are, what your name is, etc. Instead you start as yourself, but it's the rest of the universe you're uncertain about.

As for why I am myself rather than someone else, my mantra is "it's a tough job, but someone's gotta do it."

comment by clone of saturn · 2021-10-02T03:00:15.438Z · LW(p) · GW(p)

I'm perplexed when you say the Doomsday Argument is counterintuitive. To me it seems extremely intuitive, almost to the point of being transparently obvious. So if SIA gets rid of the Doomsday Argument, to me it just sounds like I have to stick with SSA in spite of whatever other problems it may have. Has this been addressed anywhere?

comment by dadadarren · 2021-10-25T17:55:32.611Z · LW(p) · GW(p)

> For those familiar with the Sleeping Beauty problem, though, you can think of SIA as “thirding,” and SSA as “halfing” — at least to a first approximation.

I think it should be noted that while the majority of thirders endorse SIA, most halfers do not support SSA. So a more accurate description would be that SSA is a (minor) camp of halfing.

comment by dadadarren · 2021-10-07T18:43:07.073Z · LW(p) · GW(p)

For the last point, is it fair to say that you don't have to consider SIA in terms of any made-up reference class?

Even if you only consider “people in your epistemic situation”, an epistemic situation is a subjective state, and the only subjective experience you have is your own. So what qualifies as “people in your epistemic situation” has to be a judgment call, or in a sense, made up. Do brains in jars that are being fed similar neural signals count, as Adam Elga discussed in "Defending earth with self-locating probability"? What about computer programs, as in Bostrom's Simulation Argument? Depending on your judgment of what “people in your epistemic situation” includes, the answers to those problems would be drastically different. You are either certain you are a physical person or quite confident that you are just a brain/program.

The only problems where such judgments won't affect the conclusions are those cases where the effect of the reference class cancels out, as in the Doomsday argument.

Don't get me wrong, I am not supporting SSA in any way. The reference class problem is definitely worse for SSA. But SIA is not free from it; after all, “the people you could actually be” is still a made-up concept that lacks a solid definition.

comment by Donald Hobson (donald-hobson) · 2023-11-20T19:16:00.833Z · LW(p) · GW(p)

Personally I think both SSA and SIA are wrong.

comment by Charlie Steiner · 2021-10-02T18:14:53.687Z · LW(p) · GW(p)

I think a reasonable "Theory X" candidate is "treat different centered worlds as different hypotheses, and construct objective worlds as emergent higher-level objects." It extends nicely to cases where you want to include a bunch more information that ruins the nice symmetrical "S*A" cleanliness.

Combining this with Solomonoff induction solves things like Boltzmann brains, but also leads to some weirdness [LW · GW] that I'd be interested in your thoughts on (though I think part of it is just confusion on how to locate people within a physical hypothesis in Solomonoff induction).

comment by avturchin · 2021-10-01T11:42:05.902Z · LW(p) · GW(p)

What do you think of the role of modal realism in SIA vs SSA discussion?

I mean that if everything possible does exist, then any SIA situation is actually an SSA situation. For example, if some process (God, or panspermia) creates a million copies of you in one region of the multiverse, there will always be another region where this process failed and created only one copy of you. The Presumptuous Philosopher now just says that you are more likely to be in the region of the multiverse where there are many copies of you.

comment by Teerth Aloke · 2021-10-01T08:29:06.044Z · LW(p) · GW(p)

My intuition was that the prior probability of the heads and tails hypotheses should be 50% each. Since existence is not evidence to update in favour of either hypothesis, being 100% likely under both, we end up with the 1:1 odds ratio.

comment by avturchin · 2021-10-02T09:33:22.039Z · LW(p) · GW(p)

This is rather obvious: I am randomly selected from all beings who can think about reference classes. Ants can't think about that. Even most humans before the 20th century were not able to think about this topic, as the needed ideas were not known.

comment by Zach Stein-Perlman · 2021-10-01T22:45:25.317Z · LW(p) · GW(p)

I strongly support SIA over SSA. I haven't read this sequence yet. But it looks like the sequence is about why the consequences of SIA are superior to those of SSA. This is a fine project. But a reason for SIA over SSA just as strong as its more acceptable consequences, I think, is its great theoretical coherence.

SIA says: given your prior, multiply every possible universe by the number/volume of observers indistinguishable from you in that universe, then normalize. This is intuitive, it has a nice meaning,* and it doesn't have a discontinuity at zero observers.

*Namely: I'm a random member of the prior-probability-weighted set of possible observers indistinguishable from me.

For SSA, on the other hand, it's hard to even explicate the anthropic update. But I think any formalization will require treating the update to zero probability for zero-indistinguishable-observers as a special case.

comment by jbash · 2021-10-01T22:08:00.865Z · LW(p) · GW(p)

It seems to me that the idea of probability is tied up with the assumption that the universe you're in is lawful enough for probability to be meaningful, and that you can expect to be able to validate that through your own past and future experiences. Anthropics thought experiments, and "God's extreme coin toss" in particular, usually seem to violate that.

I feel as though I could equally well come up with conclusions like...

  • "I have been locked up in this room by some psycho who is sending me obviously false notes".
  • "I am living in a simulation and I have no reason at all to trust the person running it to give me any valid information".
  • "I and my surroundings are products of some absurd and probably acausal universe".
  • "I am a Boltzmann brain and this will all be over shortly".
  • "I do not have any information that I can legitimately use to assign probabilities to anything in any way at all".