Paradoxes in all anthropic probabilities

post by Stuart_Armstrong · 2018-06-19T15:31:17.177Z · LW · GW · 26 comments

Contents

  FNC and evidence non-conservation
  SIA and evidence non-conservation
  SSA and changing the past
26 comments

In a previous post [LW · GW], I re-discovered full non-indexical updating (FNC), an anthropic theory I'm ashamed to say I had once known and then forgotten. Thanks to Wei Dai for reminding me [LW(p) · GW(p)] of it.

There is a problem with FNC, though. In fact, there are problems with all anthropic probability theories. Both FNC and SIA violate conservation of expected evidence [LW · GW]: you can be in a situation where you know with certainty that your future probability will differ from your current one in a particular direction. SSA has a different problem: it allows you to make decisions that change the probability of past events.

These paradoxes are presented to illustrate the fact that anthropic probability is not a coherent concept, and that dealing with multiple copies of a single agent is in the realm [LW · GW] of decision theory.

FNC and evidence non-conservation

Let's presume that the bandwidth of the human brain is B bits per minute, for some astronomically large B. Then we flip a coin. Upon it coming up heads, we create 2^(2B) identical copies of you. Upon it coming up tails, we create 2^(3B) copies of you.

Then, if we assume that the experiences of your different copies are random, you will give equal probability to heads and tails for the first minute. That's because there are only 2^B possible one-minute observation histories, far fewer than the number of copies in either universe, so there is almost certainly a being with exactly the same observations as you in both universes.

After two minutes, you will shift to roughly 1.6:1 odds in favour of tails: you're still all but certain there's a being with your observations in the tails universe, but, since the heads universe has only 2^(2B) copies spread over 2^(2B) possible two-minute histories, there's one in the heads universe only with probability about 1 - 1/e ≈ 63%.

After a full three minutes, there are 2^(3B) possible histories, so a match in the heads universe has probability only about 2^(-B), while the tails universe still very probably contains one. You will thus finally stabilise on odds of roughly 2^B:1 in favour of tails, and stay there.

Thus, during the first minute, you know that FNC will be giving you different odds in the coming minutes, and you can predict the direction those odds will take.

If the observations are non-random, then the divergence will be slower, and the FNC odds will be changing for a longer period.
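To make that trajectory concrete, here is a minimal Python sketch of the random-experience case, using a toy bandwidth (B = 4) and the 2^(2B) and 2^(3B) copy counts above; the function name and toy figures are purely illustrative.

```python
# Toy FNC calculation for the scenario above: a world's weight is the chance
# that at least one of its randomly-experiencing copies shares your exact
# observation history so far.

B = 4                   # bits of observation per minute (toy value)
n_heads = 2 ** (2 * B)  # copies created on heads
n_tails = 2 ** (3 * B)  # copies created on tails

def p_match(n_copies, minutes, bits_per_minute=B):
    """Probability that at least one of n_copies has your exact history."""
    n_histories = 2 ** (bits_per_minute * minutes)
    return 1 - (1 - 1 / n_histories) ** n_copies

for t in (1, 2, 3, 4):
    odds_tails = p_match(n_tails, t) / p_match(n_heads, t)
    print(f"minute {t}: FNC odds of tails = {odds_tails:.2f} : 1")

# The output drifts from 1:1 towards roughly 2^B : 1 (here 16:1), as described
# above; with an astronomically large B the drift is the same but far more extreme.
```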

SIA and evidence non-conservation

If we use SIA instead of FNC, then, in the above situation, the odds of tails will be 2^B:1 (the ratio of copy counts) from the start, and will stay there, so that setting is not an issue for SIA.

To show a problem with SIA, assume there is one copy of you, that we flip a coin, and that, if it comes out tails, we will immediately duplicate you (putting the duplicate in a separate room). If it comes out heads, we will wait a minute before duplicating you.

Then SIA gives 2:1 odds in favour of tails during that minute, but equal odds afterwards.
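For concreteness, here is a minimal sketch of that SIA calculation, weighting each world by the number of observers subjectively indistinguishable from you; the function name and numbers are illustrative.

```python
# SIA: weight each world by prior probability times the number of observers
# in it who are subjectively indistinguishable from you.

def sia_odds_tails(copies_heads, copies_tails, prior_heads=0.5):
    """Odds of tails : heads under SIA."""
    return ((1 - prior_heads) * copies_tails) / (prior_heads * copies_heads)

print(sia_odds_tails(copies_heads=1, copies_tails=2))  # during the minute: 2.0, i.e. 2:1 for tails
print(sia_odds_tails(copies_heads=2, copies_tails=2))  # after the duplication: 1.0, i.e. equal odds
```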

You can't get around this with tweaked reference classes: one of the good properties of SIA is that it works the same whatever the reference class, as long as it includes the agents currently subjectively indistinguishable from you.

SSA and changing the past

SSA has a lot of issues. It has the whole problem of reference classes: these are hard to define coherently, and agents in different reference classes with the same priors can agree to disagree (for instance, if we expect that there will be a single gender in the future, then if I'm in the reference class of males, I expect that single gender will be female - and the opposite will be expected for someone in the reference class of females). It violates causality: it assigns different probabilities to an event, purely depending on the future consequence of that event.

But I think I'll focus on another way it violates causality: your current actions can change the probability of past events.

Suppose that the proverbial coin is flipped, and that if it comes up heads, one version of you is created, while if it comes up tails, N copies of you are created, for some large N. You are the last of these copies: either the only one in the heads world, or the last one in the tails world - you don't know which. Under SSA, you assign odds of N:1 in favour of heads.

You have a convenient lever, however. If you pull it, then M further copies of you will be created in the future, in the heads world only (nothing will happen in the tails world), where M is much larger than N. Therefore, if you pull it, the odds of the coin having come up tails - an event long past, and known to be past - will shift from N:1 against to (M+1):N in favour.
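A minimal sketch of that SSA arithmetic, treating the lever simply as changing how many reference-class observers the heads world will ever contain; N, M and the function are illustrative.

```python
# SSA: the weight of a world is the prior times the chance of being the
# particular observer you are, i.e. 1 over the number of reference-class
# observers that world ever contains.

N = 100   # copies created on tails (toy value)
M = 1000  # extra future copies the lever creates, in the heads world only (toy value)

def ssa_odds_tails(heads_total, tails_total, prior_heads=0.5):
    """Odds of tails : heads under SSA, given each world's total observer count."""
    return ((1 - prior_heads) / tails_total) / (prior_heads / heads_total)

print(ssa_odds_tails(heads_total=1,     tails_total=N))  # lever untouched: 0.01, i.e. N:1 for heads
print(ssa_odds_tails(heads_total=1 + M, tails_total=N))  # lever pulled: 10.01, i.e. (M+1):N for tails
```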

26 comments

Comments sorted by top scores.

comment by cousin_it · 2018-06-19T22:15:16.728Z · LW(p) · GW(p)

Does your post also show that selfish preferences are incoherent, because any selfish preference must rely on a weighting of your copies and every such weighting has weird properties?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2018-06-20T10:42:41.579Z · LW(p) · GW(p)

I agree with that, but I don't think the post shows it directly. My video https://www.youtube.com/watch?v=aiGOGkBiWEo does look at two possible versions of selfishness; my own position is that selfishness is incoherent, unless it's extreme observer-moment selfishness, which is useless.

Replies from: cousin_it
comment by cousin_it · 2018-06-20T12:16:11.978Z · LW(p) · GW(p)

Interesting! Can you explain that in text?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2018-06-20T13:24:24.634Z · LW(p) · GW(p)

For most versions of selfishness, if you're duplicated, then the two copies will have divergent preferences. However, if one of the copies is destroyed during duplication, this just counts as transportation. So the previous self values either future copy, if only one of them exists. Therefore it seems incoherent for the previous self not to value both future copies if both exist, and hence for the two future copies not to value each other.

(btw, the logical conclusion is that the two copies have the same preferences, not that the two agents must value each other - it's possible that copy A only cares about themselves, and copy B only cares about copy A).

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2018-06-21T03:08:55.172Z · LW(p) · GW(p)

As reductios of anthropic views go, these are all pretty mild. Abandoning conservation of expected evidence isn't exactly an un-biteable bullet. And "Violating causality" is particularly mild, especially for those of us who like non-causal decision theories. As a one-boxer I've been accused of believing in retrocausality dozens of times... sticks and stones, you know. This sort of "causality violation" seems similarly frivolous.

Oh, and the SSA reference class arbitrariness thing can be avoided by steelmanning SSA to make it more elegant--just get rid of the reference class idea and do it with centered worlds. SSA is what you get if you just do ordinary Bayesian conditionalization on centered worlds instead of on possible worlds. (Which is actually the more elegant and natural way of doing it, since possible worlds are a weird restriction on the sorts of sentences we use. Centered worlds, by contrast, are simply maximally consistent sets of sentences, full stop.)

As for changing the probability of past events... this isn't mysterious in principle. We change the probability of past events all the time. Probabilities are just our credences in things! More seriously though, let A be the hypothetical state of the past light-cone that would result in your choosing to stretch your arm ten minutes from now, and B be the hypothetical state of the past light-cone that would result in your choosing to not stretch your arm. A and B are past events, but you should be uncertain about which one obtained until about ten minutes from now, at which point (depending on what you choose!) the probability of A will increase or decrease.

There are strong reductios in the vicinity though, if I recall correctly. (I did my MA on this stuff, but it was a while ago so I'm a little rusty.)

FNC-type views have the result that (a) we almost instantly become convinced, no matter what we experience, that the universe is an infinite soup of random noise occasionally coalescing to form Boltzmann Brains, because this is the simplest hypothesis that assigns probability 1 to the data; (b) we stay in this state forever and act accordingly--which means thinking happy thoughts, or something like that, whether we are average utilitarians or total utilitarians or egoists.

SIA-type views are as far as I can tell incoherent, in the following sense: The population size of universes grows much faster than their probability can shrink. So if you want to say that their probability is proportional to their population size... how? (Flag: I notice I am confused about this part.) A more down-to-earth way of putting this problem is that the hypothesis in which there is one universe is dominated by the hypothesis in which there are 3^^^^3 copies of that universe in parallel dimensions, which in turn is dominated by the hypothesis in which there are 4^^^^^4...

SSA-type views are the only game in town, as far as I'm concerned--except for the "Let's abandon probability entirely and just do decision theory" idea you favor. I'm not sure what to make of it yet. Anyhow, the big problem I see for SSA-type views is the one you mention about using the ability to create tons of copies of yourself to influence the world. That seems weird all right. I'd like to avoid that consequence if possible. But it doesn't seem worse than weird to me yet. It doesn't seem... un-biteable.

EDIT: I should add that I think your conclusion is probably right--I think your move away from probability and towards decision theory seems very promising. As we went updateless in decision theory, so too should we go updateless in probability. Something like that (I have to think & read about it more). I'm just objecting to the strong wording in your arguments to get there. :)

comment by Chris_Leong · 2018-06-20T23:59:38.129Z · LW(p) · GW(p)

I'd also love some clarifications:

For instance, if we expect that there will be a single gender in the future, then if I'm in the reference class of males, I expect that single gender will be female - and the opposite will be expected for someone in the reference class of females

Further:

It violates causality: it assigns different probabilities to an event, purely depending on the future consequence of that event.

Unfortunately the link doesn't work.

comment by Chris_Leong · 2018-06-20T17:13:30.424Z · LW(p) · GW(p)

Why shouldn't the probability predictably update for the Self-Indication Assumption? This version of probability isn't supposed to refer to a property of the world, but to some combination of the state of the world and the number of agents in various scenarios.

Regarding the self-sampling assumption, if you knew ahead of time that the lever would be pulled, then you could have updated before it was pulled. If you didn't know that the lever would be pulled, then you gained information: specifically about the number of copies in the heads case. It's not that you knew there was only one copy in the heads world, it's just that you thought there was only one copy, because you didn't think a mechanism like the lever would exist and be pulled. In fact, if you knew that the probability of the lever existing and then being pulled was 0, then the scenario would contradict itself.

As Charlie Steiner notes, it looks like you've lost information, but that's only because you've gained information that your information is inaccurate. Here's an analogy: suppose I give you a scientific paper that contains what appears to be lots of valuable information. Next I tell you that the paper is a fraud. It seems like you've lost information as you are less certain about the world, but you've actually gained it. I've written about this before for the Dr Evil problem [LW · GW].

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2018-06-20T20:50:24.755Z · LW(p) · GW(p)

You are deciding whether or not to pull the lever. The probability of a past event, known to be in the past, depends on your actions now.

To use your analogy, it's as if you were deciding whether or not to label a scientific paper inaccurate - and your choice of label, not anything else, made it inaccurate.

Replies from: Chris_Leong
comment by Chris_Leong · 2018-06-20T23:52:14.626Z · LW(p) · GW(p)

Oh, actually I think what I wrote above was wrong. The self-sampling assumption is supposed to preserve probabilities like this. It's the self-indication assumption that is relative to agents.

That said, I have a different objection: I'm confused about why pulling the lever would change the odds. Your reference class is all copies that were the <last copy that was originally created>. So any further clones you create fall outside the reference class.

If you want to set your reference class to <the last copy that was created at any point> then:

  • Heads case - first round: if you pull the lever then you fall outside the reference class
  • Heads case - second round: lever doesn't do anything anymore as it has already been used
  • Tails case: pulling the lever does nothing

So you don't really have the option to pull the lever to create clones. If you were using a different reference class, what was it?

comment by cousin_it · 2018-07-02T08:41:30.592Z · LW(p) · GW(p)

To show a problem with SIA, assume there is one copy of you, that we flip a coin, and that, if it comes out tails, we will immediately duplicate you (putting the duplicate in a separate room). If it comes out heads, we will wait a minute before duplicating you.

We could make it simpler: flip a coin and duplicate you in a minute if it comes up heads. A.k.a. sleeping beauty. We already know that SIA forces you to update when you go through an anthropic "split" (like waking up in sleeping beauty), not just when you learn something.

comment by Charlie Steiner · 2018-06-19T18:35:42.013Z · LW(p) · GW(p)

The probability is only different when you think the world is in a different state. This no more violates conservation of expected evidence than putting the kettle on violates conservation of expected evidence by predictably changing the probability of hot water. The weird part is which part of the world is different.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2018-06-19T18:41:30.846Z · LW(p) · GW(p)

All the odds are about the outcome of a past coin flip, known to be in the past. This should not change in the ways described here.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2018-06-19T20:09:59.080Z · LW(p) · GW(p)

Hm, you're right, I guess there is something weird here (I'm not talking about FNC - I think that part is weird too - I mean "ordinary" anthropic probabilities).

If I had to try to put my finger on what's different, there is an apparent deviation from normal Bayesian updating. Normally, when you add some sense-data to your big history-o'-sense-data, you update by setting all incompatible hypotheses to zero and renormalizing what's left. But this "anthropic update" seems to add hypotheses rather than only removing them - when you're duplicated there are now more possible explanations for your sense-data, rather than the normal case of fewer and fewer possible explanations.
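A minimal sketch of that "normal" update - zero out the incompatible hypotheses, then renormalise the rest - with hypothetical names:

```python
# Ordinary Bayesian conditionalization: drop hypotheses incompatible with the
# new sense-data and renormalise what's left; no new hypotheses ever appear.

def conditionalize(prior, compatible):
    kept = {h: p for h, p in prior.items() if h in compatible}
    total = sum(kept.values())
    return {h: p / total for h, p in kept.items()}

prior = {"heads": 0.5, "tails": 0.5}
print(conditionalize(prior, {"heads", "tails"}))  # nothing ruled out: unchanged
print(conditionalize(prior, {"tails"}))           # heads ruled out: {'tails': 1.0}
```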

I think a Cartesian agent wouldn't do anthropic reasoning, though it might learn to simulate it if put through a series of Sleeping Beauty type games.

comment by Dacyn · 2018-06-22T22:42:25.387Z · LW(p) · GW(p)

With regards to your SIA objection, I think it is important to clarify exactly what we mean by evidence conservation here. The usual formulation is something like "If I expect to assign credence X to proposition P at future time T, then I should assign credence X to proposition P right now, unless by time T I expect to have lost information in a predictable way". Now if you are going to be duplicated, then it is not exactly clear what you mean by "I expect to assign ... at future time T", since there will be multiple copies of you that exist at time T. So, maybe you want to get around this by saying that you are referring to the "original" version of you that exists at time T, rather than any duplicates. But then the problem seems to be that by waiting, you will actually lose information in a predictable way! Namely, right now you know that you are not a duplicate, but the future version of you will not know that it is not a duplicate. Since you are losing information, it is not surprising that your probability will predictably change. So, I don't think SIA violates evidence conservation.

Incidentally, here is an intuition pump that I think supports SIA: suppose I flip a coin and if it is heads then I kill you, tails I keep you alive. Then if you are alive at the end of the experiment, surely you should assign 100% probability to tails (discounting model uncertainty of course). But you could easily reason that this violates evidence conservation: you predictably know that all future agents descended from you will assign 100% probability to tails, while you currently only assign 50% to tails. This points to the importance of precisely defining and analyzing evidence conservation as I have done in the previous paragraph. Additionally, if we generalize to the setting where I make/keep X copies of you if the coin lands heads and Y copies if tails, then SIA gives the elegant formula X/(X+Y) as the probability for heads after the experiment, and it is nice that our straightforward intuitions about the cases X=0 and Y=0 provide a double-check for this formula.
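A two-line illustration of that X/(X+Y) formula, with the kill-or-keep experiment as the X = 0 special case (fair coin assumed; the function name is just for illustration):

```python
def sia_p_heads(x_copies_heads, y_copies_tails):
    """SIA probability of heads: X copies on heads, Y copies on tails, fair coin."""
    return x_copies_heads / (x_copies_heads + y_copies_tails)

print(sia_p_heads(0, 1))  # heads would have killed you: 0.0, certain it was tails
print(sia_p_heads(1, 2))  # duplication on tails: 0.333..., the Sleeping Beauty "thirder" answer
```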

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2018-06-25T08:49:22.636Z · LW(p) · GW(p)

Remember that this is about a coin flip that is in the past and known to be in the past. And that the future duplicates can remember everything their past potential-non-duplicate knew. So they might believe "now I'm not sure I'm not a duplicate, but it used to be the case that I thought that being a non-duplicate was more likely". So if that information was relevant, they can just put themselves in the shoes of their past selves.

Replies from: Dacyn
comment by Dacyn · 2018-06-25T18:55:19.534Z · LW(p) · GW(p)

They can't put themselves in the shoes of their past selves, because in some sense they are not really sure whether they have past selves at all, rather than merely being duplicates of someone. Just because your brain is copied from someone else doesn't mean that you are in the same epistemological state as them. And the true descendants are also not in the same epistemological state, because they do not know whether they are copies or not.

comment by cousin_it · 2018-07-04T17:43:52.493Z · LW(p) · GW(p)

Does MWI imply SIA?

Here’s a model of Sleeping Beauty under MWI. The universe has two apartments with multiple rooms. Each apartment has a room containing a copy of you. You have 50:50 beliefs about which apartment you’re in. One minute from now, a new copy of you will be created in the first apartment (in another identical room). At that moment, despite getting no new information, should you change your belief about which apartment you’re in? Obviously yes.
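A quick sketch of the copy-counting arithmetic in that apartment model (equal priors over apartments assumed; the function is illustrative):

```python
# Weight each apartment by the number of copies of you it contains.

def p_first_apartment(copies_first, copies_second):
    return copies_first / (copies_first + copies_second)

print(p_first_apartment(1, 1))  # before the new copy appears: 0.5
print(p_first_apartment(2, 1))  # after a copy is added to the first apartment: ~0.67
```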

So it seems like your "conservation of expected evidence" argument has a mistake somewhere, and SIA is actually fine.

Replies from: primer, Stuart_Armstrong
comment by Primer (primer) · 2019-11-26T16:22:48.966Z · LW(p) · GW(p)

I'm just getting started with SIA, SSA, FNC and the like, so probably I'm missing some core understanding, but: A minute from now you do gain new information: One minute has passed.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2019-11-26T17:17:19.963Z · LW(p) · GW(p)

Was that unexpected?

comment by Stuart_Armstrong · 2018-07-05T10:06:41.234Z · LW(p) · GW(p)
should you change your belief about which apartment you’re in? Obviously yes.

Do you want the majority of your copies to be correct as to what branch of the multiverse they are in? Go SIA. Do you want your copies to be correct in the majority of branches of the multiverse? SSA, then.

Replies from: cousin_it
comment by cousin_it · 2018-07-05T10:14:11.921Z · LW(p) · GW(p)

This experiment doesn't have branches though, it has apartments and rooms. You could care about being right in the majority of apartments, or you could care about being right in the majority of rooms, but these are arbitrary divisions of space and give conflicting answers. Or you could care about majority of copies being right, which is objective and doesn't have a conflicting counterpart. You can reproduce it by creating three identical copies and then distributing them into rooms. So SIA has an objective basis and SSA doesn't.

The next question is whether apartments are a good analogy for MWI, but according to what we know so far, that seems likely. Especially if it turns out that quantum and cosmological multiverses are the same.

comment by Lukas Finnveden (Lanrian) · 2018-06-22T20:25:41.992Z · LW(p) · GW(p)

My preferred way of doing anthropics while keeping probabilities around is to update your probabilities according to the chance that at least one of the decision-making agents that your decision is logically linked to exists, and then prioritise the worlds where there are more of those agents by acknowledging that you're making the decision for all of them. This yields the same (correct) conclusions as SIA when you're only making decisions for yourself, and as FNC when you're making decisions for all of your identical copies, but it avoids the paradoxes brought up in this article and it allows you to take into account that you're making decisions for all of your similar copies, which you want for Newcomb-like situations.

However, I think it's possible to construct even more contorted scenarios where conservation of expected evidence is violated for this as well. If there are 2 copies of you, a coin is flipped, and:

  • If it's heads the copies are presented with two different choices.
  • If it's tails the copies are presented with the same choice.

then you know that you will update towards heads when you're presented with a choice after a minute, since heads makes it twice as likely that anyone would be presented with that specific choice. I don't know if there's any way around this. Maybe if you update your probabilities according to the chance that someone following your decision theory is around, rather than someone making your exact choice, or something like that?

comment by Rafael Harth (sil-ver) · 2018-06-20T16:40:01.698Z · LW(p) · GW(p)

Is there a strong argument as to why we should care about the conservation of expected evidence? My belief at the moment is that something very close to SIA is true, and that conservation of expected evidence is a principle which simply doesn't hold in scenarios of multiple observers.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2018-06-20T20:57:37.737Z · LW(p) · GW(p)

If probability makes sense at all, then "I believe that the odds are 2:1, but I *know* that in a minute I'll believe that it's 1:1" destroys it as a coherent formalisation of beliefs. Should the 2:1 you force their future copy to stick with 2:1 rather than 1:1? If not, why do they think their own beliefs are right?

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2018-06-21T03:20:01.973Z · LW(p) · GW(p)

Which interpretation of probability do you use? I go with standard subjective bayesianism: Probabilities are your credences are your degrees of belief.

So, there's nothing contradictory or incoherent about believing that you will believe something else in the future. Trivial case: Someone will brainwash you in the future and you know this. Why do you think your own beliefs are right? First of all, why do I need to answer that question in order to coherently have those beliefs? Not every belief can be justified in that way. Secondly, if I follow SSA, here's my justification: "Well, here are my priors. Here is my evidence. I then conditionalized on the evidence, and this is what I got. That future version of me has the same priors but different evidence, so they got a different result." Why is that not justification enough?

Yes, it's weird when you are motivated to force your future copy to do things. Perhaps we should do for probability what we did for decision theory, and talk about agents that have the ability to irrevocably bind their future selves. (Isn't this basically what you think we should do?)

But it's not incoherent or senseless to think that yes, I have credence X now and in the future I will have credence Y. Just as it isn't incoherent or senseless to wish that your future self would refuse the blackmail even though your future self would actually decide to give in.

Replies from: Lanrian
comment by Lukas Finnveden (Lanrian) · 2018-06-22T14:01:54.400Z · LW(p) · GW(p)
Yes, it's weird when you are motivated to force your future copy to do things

If you couple these probability theories with the right decision theories, this should never come up. FNC yields the correct answer if you use a decision theory that lets you decide for all your identical copies (but not the ones who have had different experiences), and SIA yields the correct answer if you assume that you can't affect the choices of the rest of your copies.