Why (anthropic) probability isn't enough

post by Stuart_Armstrong · 2012-12-13T16:09:58.698Z · LW · GW · Legacy · 21 comments

Contents

  Anthropics: why probability isn't enough
21 comments

A technical report of the Future of Humanity Institute (authored by me), on why anthropic probability isn't enough to reach decisions in anthropic situations. You also have to choose your decision theory, and take into account your altruism towards your copies. And these components can co-vary while leaving your ultimate decision the same - typically, EDT agents using SSA will reach the same decisions as CDT agents using SIA, and altruistic causal agents may decide the same way as selfish evidential agents.

 

Anthropics: why probability isn't enough

This paper argues that the current treatment of anthropic and self-locating problems over-emphasises the importance of anthropic probabilities, and ignores other relevant and important factors, such as whether the various copies of the agents in question consider that they are acting in a linked fashion and whether they are mutually altruistic towards each other. These issues, generally irrelevant for non-anthropic problems, come to the forefront in anthropic situations and are at least as important as the anthropic probabilities: indeed they can erase the difference between different theories of anthropic probability, or increase their divergence. These considerations help to reinterpret decisions, rather than probabilities, as the fundamental objects of interest in anthropic problems.

 

21 comments

Comments sorted by top scores.

comment by Irgy · 2012-12-14T00:56:09.476Z · LW(p) · GW(p)

I have an interesting solution to the non-anthropic problem. Firstly, the reward of 0 for voting differently is ignored in all the calculations, as it is assumed the other agent is acting identically. Therefore, its value is irrelevant (unless of course it becomes so high that the agents start deliberately employing randomisation in an attempt to vote differently, which would distort the problem).

However, consider what happens if you set the value to 9. In this case, you can forget about the other agent entirely. Voting heads if the coin was tails always loses exactly 1, while voting tails if the coin was heads loses 3. Since no method gives a probability higher than 3/4 for the coin being tails, the answer is simple: vote heads. Of course, this is a different problem, but it highlights the fact that any method which tells you to vote tails, and yet does not include the 0 anywhere in the calculations (since it assumes the agents can't possibly vote differently), is clearly suspect.

Replies from: Manfred
comment by Manfred · 2012-12-15T12:03:45.683Z · LW(p) · GW(p)

So you'd, for example, multiply by that zero.

Replies from: Irgy
comment by Irgy · 2012-12-15T19:51:43.748Z · LW(p) · GW(p)

Multiply what by that zero? There are so many things you might mean by that, and if even one of them made any sense to me I'd just assume that was it, but as it stands I have no idea. Not a very helpful comment.

Replies from: Manfred
comment by Manfred · 2012-12-16T01:00:49.880Z · LW(p) · GW(p)

Well, suppose you're doing an expected utility calculation, and the utility of outcome 1 is U1, the utility of outcome 2 is U2, and so on.

Then your expected utility looks like (some stuff)*U1 + (some other stuff)*U2, and so on. The stuff in parentheses is usually the probability of outcome N occurring, but some systems might include a correction based on collective decision-making or something, and that's fine.

Now suppose that U1=0. Then your expected utility looks like (some stuff)*0 + (some other stuff)*U2, and so on. Which is equal to (that other stuff)*U2, etc, because you just multiplied the first term by 0. So the zero is in there. You've just multiplied by it.
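
A minimal sketch of this in code, with placeholder numbers that aren't taken from the actual problem:

```python
# The U1 = 0 term is still part of the expected-utility sum -- it just
# contributes nothing once multiplied by its weight.
weights = [0.25, 0.75]   # placeholder "stuff in parentheses" for outcomes 1 and 2
utilities = [0, 9]       # U1 = 0 (the zero outcome), U2 = 9 (placeholder)

expected_utility = sum(w * u for w, u in zip(weights, utilities))
print(expected_utility)  # 6.75 -- the same as if the zero term were simply dropped
```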

Replies from: Irgy
comment by Irgy · 2012-12-16T11:25:50.993Z · LW(p) · GW(p)

Ok, thanks, that makes more sense than anything I'd guessed.

There's a difference between shortcutting a calculation and not accounting for something in the first place. In the debate between all the topics mentioned in the paper (e.g. SIA/SSA, split responsibility, precommitments and so on) not one method would give a different answer if that 0 was a 5, a 9, or a -100. It's not because they're shortcutting the maths, it's because, as I said in my first comment, they assume that it's effectively not possible for the two people to vote differently anyway. Which is fine in the abstract, even if it's a little suspect in practice (since this, for once, is a quite realisable experiment).

I'll rephrase my final line then: "If a method says to vote tails, and yet would give the same answer with the 0 changed to a 9, then it is clearly suspect". Incidentally I don't know of a method which says "vote tails" and would give a different answer if you changed the 0 to a 9 either.

I think the reason I didn't get your comment originally is that the first thing I do with this problem is work with the differences - which in this case means subtracting everything from 10 and thinking in terms of money lost on bad votes, not absolute values. So I wouldn't be multiplying by 0. It's neither better nor worse, just explains why I didn't know what you meant.

Replies from: Manfred
comment by Manfred · 2012-12-16T13:49:21.160Z · LW(p) · GW(p)

Oh, okay. Looks like I didn't really understand your point when I commented :)

Perhaps I still don't - you say "no method gives a probability higher than 3/4 for the coin being tails," but you've in fact been given information that should cause you to update that probability. It's like someone had a bag with 10 balls in it. That person flipped a coin, and if the coin was heads the bag has 9 black balls and 1 white ball, but if the coin was tails the bag has 9 white balls and 1 black ball. They reach into the bag and hand you a ball at random, and it's black - what's the probability the coin was heads?
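
Working the bag example through with Bayes' rule, using only the numbers in the comment above:

```python
# Heads bag: 9 black, 1 white. Tails bag: 1 black, 9 white. A black ball is drawn.
p_heads = 0.5
p_black_given_heads = 9 / 10
p_black_given_tails = 1 / 10

p_black = p_black_given_heads * p_heads + p_black_given_tails * (1 - p_heads)
print(p_black_given_heads * p_heads / p_black)  # P(heads | black) = 0.9
```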

If you reward disagreement, then what you're really rewarding in this case are mixed (probabilistic) actions. The reward only pays out if the coin landed tails, so that there's someone else to disagree with. So people will give what seems to them to be the same honest answer when you change the result of disagreeing from 0 to 0+epsilon. But when the payoff from disagreeing passes the expected payoff of honesty, agents will pick mixed actions.

To be more precise: if we simplify a little and only let them choose 50/50 if they want to disagree, then the expected utility of honesty is P(heads)*U(choice,heads) + P(tails)*U(choice,tails), while the expected utility of coin-flipping is pretty much P(heads)*U(average,heads) + P(tails)*U(disagree,tails). These will pass each other at different values of U(disagree,tails) depending on what you think P(heads) and P(tails) are, and also depending on which choice you think is best.
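
A sketch of that comparison in code; all the utility numbers below are arbitrary placeholders chosen just to show a crossover, not the payoffs from the actual problem:

```python
# EU(honest)    = P(h)*U(choice,h)  + P(t)*U(choice,t)
# EU(coin-flip) ~ P(h)*U(average,h) + P(t)*U(disagree,t)
# Sweep the disagreement payoff to find where the two cross.
p_heads, p_tails = 1 / 3, 2 / 3
u_choice_heads, u_choice_tails = 10.0, 9.0   # honest vote in each world (placeholder)
u_average_heads = 8.5                        # 50/50 mixing in the heads world (placeholder)

eu_honest = p_heads * u_choice_heads + p_tails * u_choice_tails
for u_disagree_tails in (0.0, 9.0, 12.0):
    eu_flip = p_heads * u_average_heads + p_tails * u_disagree_tails
    print(u_disagree_tails, "coin-flip" if eu_flip > eu_honest else "honest")
```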

Replies from: Irgy
comment by Irgy · 2012-12-16T23:23:35.647Z · LW(p) · GW(p)

I tried to cover what you're talking about with my statement in brackets at the end of the first paragraph. Set the value for disagreeing too high and you're rewarding it, in which case people start deliberately making randomised choices in order to disagree. Too low and they ought to be going out of their way to try and agree above all else - except there's no way to do that in practice, and no way not to do it in the abstract analysis that assumes they think the same. A value of 9 though is actually in between these two cases - it's exactly the average of the two agreement options, and it neither punishes nor rewards disagreement. It treats disagreement "fairly", and in doing so entirely un-links the two agents. Which is exactly why I picked it, and why it simplifies the problem. Again I think I'm thinking of these values relatively while you're thinking absolutely - a value of epsilon for disagreeing is not rewarding disagreeing slightly, it's still punishing it severely relative to the other outcomes.

To me what it illustrates is that the linking between the two agents is something of an illusion in the first place. Punishing disagreement encourages the agents to collaborate on their vote, but the problem provides no explicit means for them to do so. Introducing an explicit means to co-operate, such as pre-commitment or having the agents run identical decision algorithms, would dissolve the problem into a clear solution (actually, explicitly identical algorithms makes it a version of Newcomb's Paradox, but that's at least a well studied problem). It's the ambiguity of how to co-operate combined with the strong motivation, lack of explicit means, and abundance of theoretical means to hand-wave agreement that creates the paradox.

As for the stuff you say about the probability and the bucket of coloured balls, I get all that. The original probability of the coin flip was 1/2 each way. The evidence that you've been asked to vote makes the subjective likelihood of tails 2/3. Also somehow the number 3/4 appears in the SSA solution to the Sleeping Beauty problem (which to me seems just flat-out wrong, and enough for me to write off that method unless I see a very good defence of it), which made me worry that somewhere out there was a method which somehow comes up with 3/4. So I covered my bases by saying "no method gives probability higher than 3/4", which was the minimum necessary requirement and what I figured was a fairly safe statement. The reality is that 2/3 is simply correct for the subjective probability of tails, for reasons like you say, and maybe I just confuse things by mucking about trying to cover all possible bad solutions. It is, I admit, a little confusing to talk about whether anything is "more than 3/4" when the only two values under serious consideration are the a-priori 1/2 and the subjective posterior 2/3.
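
For reference, here is where the 2/3 comes from, assuming the usual non-anthropic setup in which both people are asked to vote if the coin lands tails but only one randomly chosen person is asked if it lands heads:

```python
# P(tails | I was asked to vote)
p_tails = 0.5
p_asked_given_tails = 1.0   # tails: both people are asked
p_asked_given_heads = 0.5   # heads: you are asked only if you're the one selected

p_asked = p_asked_given_tails * p_tails + p_asked_given_heads * (1 - p_tails)
print(p_asked_given_tails * p_tails / p_asked)  # 0.666...
```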

Replies from: Manfred
comment by Manfred · 2012-12-17T00:01:14.049Z · LW(p) · GW(p)

Yeah, I didn't know exactly what problem statement you were using (the most common formulation of the non-anthropic problem I know is this one), so I didn't know "9" was particularly special.

Though the point at which I think randomization becomes better than honesty depends on my P(heads) and on what choice I think is honest, so which value of the randomization-reward is special is fuzzy.

I guess I'm not seeing any middle ground between "be honest," and "pick randomization as an action," even for naive CDT where "be honest" gets the problem wrong.

which made me worry that somewhere out there was a method which somehow comes up with 3/4.

Somewhere in Stuart Armstrong's bestiary of non-probabilistic decision procedures you can get an effective 3/4 on the sleeping beauty problem, but I wouldn't worry about it - that bestiary is silly anyhow :P

comment by gwillen · 2012-12-13T22:33:01.696Z · LW(p) · GW(p)

I know that the right way for me to handle this is to read the paper, but it might be helpful to expand your summary to define SSA and SIA, and causal versus evidential agents? (And presumably EDT versus CDT too, though I already know those.)

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2012-12-13T23:20:48.197Z · LW(p) · GW(p)

SIA and SSA are defined in http://lesswrong.com/lw/892/anthropic_decision_theory_ii_selfindication/

(post http://lesswrong.com/lw/891/anthropic_decision_theory_i_sleeping_beauty_and/ sets up the Sleeping Beauty problem).

Replies from: drnickbone
comment by drnickbone · 2012-12-14T23:55:57.598Z · LW(p) · GW(p)

I've already read your (excellent) paper "Anthropic Decision Theory". Is the FHI technical report basically a summary of this, or does it contain additional results? (Just want to know before taking the time to read the report.)

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2012-12-15T00:33:31.334Z · LW(p) · GW(p)

excellent

Thanks :-)

This tech report is more a motivation as to why anthropic decision theory might be needed - it shows that you can reach the same decision in different ways, and that SIA or SSA aren't enough to fix your decision. It's philosophically useful, but doesn't give any prescriptive results.

comment by beoShaffer · 2012-12-14T18:17:41.028Z · LW(p) · GW(p)

Very nice report, but I did note some typos: "inseparate" and "situation.SSA" need spaces.

comment by Luke_A_Somers · 2012-12-13T21:01:56.323Z · LW(p) · GW(p)

I drew the distinction earlier between subjective probability and betting behavior with a tale rather like the non-anthropic sleeping beauty table presented here.

It seems to me like the only difference between SSA + total, and SIA + divided, is which of these you're talking about when you speak of probability (SSA brings you to subjective probability, which must then be corrected to get proper bets; SIA gives you the right bets to make, which must be corrected to get the proper subjective probability).
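
One way to see the claimed equivalence, sketched for a Sleeping Beauty style bet (win x per awakening if tails, lose y if heads; the stakes are arbitrary placeholders, not numbers from the paper):

```python
# SSA + total responsibility vs SIA + divided responsibility on the same bet.
x, y = 1.0, 1.5   # arbitrary stakes

# SSA: P(tails) = 1/2; total responsibility means a tails decision pays out for both copies.
ssa_total = 0.5 * (2 * x) - 0.5 * y

# SIA: P(tails) = 2/3; divided responsibility means each tails copy claims half of the 2x.
sia_divided = (2 / 3) * x - (1 / 3) * y

print(ssa_total, sia_divided)                # the two differ by a constant factor of 3/2,
print((ssa_total > 0) == (sia_divided > 0))  # so they always recommend the same bet
```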

Replies from: Stuart_Armstrong, Benja
comment by Stuart_Armstrong · 2012-12-13T23:22:32.936Z · LW(p) · GW(p)

To deal with these ideas correctly, you need to use anthropic decision theory.

The best current online version of this is on less wrong, split into six articles (I'm finishing up an improved version for hopefully publication):

http://lesswrong.com/search/results?cx=015839050583929870010%3A-802ptn4igi&cof=FORID%3A11&ie=UTF-8&q=anthropic+decision+theory&sa=Search&siteurl=lesswrong.com%2Flw%2Ffxb%2Fwhy_anthropic_probability_isnt_enough%2F81pw%3Fcontext%3D3&ref=lesswrong.com%2Fmessage%2Finbox%2F&ss=3562j568790j25

comment by Benya (Benja) · 2012-12-13T21:33:45.398Z · LW(p) · GW(p)

It seems to me like the only difference between SSA + total, and SIA + divided, is which of these you're talking about when you speak of probability

Doesn't the isomorphism between them only hold if your SSA reference class is exactly the set of agents responsible for your decision?

(This question is also for Stuart -- by the way, thanks for writing this, the exposition of the divided responsibility idea was useful!)

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2012-12-13T23:26:30.007Z · LW(p) · GW(p)

In the anthropic decision theory formalism (see the link I posted in answer to Luke_A_Somers) SSA-like behaviour emerges from average utilitarianism (also selfish agents, but that's more complicated). The whole reference class complexity, in this context, is the complexity of deciding the class of agents that you average over.

Replies from: Benja
comment by Benya (Benja) · 2012-12-14T00:10:49.220Z · LW(p) · GW(p)

Yes, I haven't studied the LW sequence in detail, but I've read the arxiv.org draft, so I'm familiar with the argument. :-) (Are there important things in the LW sequence that are not in the draft, so that I should read that too? I remember you did something where agents had both a selfish and a global component to their utility function, that wasn't in the draft...) But from the tech report I got the impression that you were talking about actual SSA-using agents, not about the emergence of SSA-like behavior from ADT; e.g. on the last page, you say

Finally, it should be noted that a lot of anthropic decision problems can be solved without needing to work out the anthropic probabilities and impact responsibility at all (see for instance the approach in (Armstrong, 2012)).

which sounds as if you're contrasting two different approaches in the tech report and in the draft, not as if they're both about the same thing?

[And sorry for misspelling you earlier -- corrected now, I don't know what happened there...]

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2012-12-14T00:24:00.938Z · LW(p) · GW(p)

What I really meant is - the things in the tech report are fine as far as they go, but the Anthropic Decision Theory paper is where the real results are.

I agree with you that the isomorphism only holds if your reference class is suitable (and for selfish agents, you need to mess around with precommitments). The tech report does make some simplifying assumptions (as its point was not to find the full conditions for rigorous isomorphism results, but to illustrate that anthropic probabilities are not enough on their own).

Replies from: Benja
comment by Benya (Benja) · 2012-12-14T01:56:34.932Z · LW(p) · GW(p)

Thanks!

comment by timtyler · 2012-12-15T13:32:53.987Z · LW(p) · GW(p)

It seems to me that you're trying to invent a theory of kin selection between agents in possible worlds. Biology has a rich theory for how agents which resemble each other behave towards each other - kin selection. Biology too has to deal with other ways that agents can come to resemble each other - e.g. mimicry, convergent evolution and chance. However, in terms of producing cooperative behaviour, relatedness is the big one.