Zero-sum conversion: a cute trick for decision problems

post by Manfred · 2012-07-26T21:36:57.813Z · 15 comments

A while ago, we were presented with an interesting puzzle, usually just called "Psy-kosh's non-anthropic problem."  This problem is not, as the name makes clear, an anthropic problem, but it generates a similar sort of confusion by having you cooperate with people who think like you while you're unsure which of those people you are.

In the linked post, cousin_it declares "no points for UDT," which is why this post is not called a total solution, but a cute trick :)  What I call zero-sum conversion is just a way to make the UDT calculations (that is, the things you do when calculating what the actual best choice is) seem obvious - which is good, since they're the ones that give you the right answer.  This trick also makes the UDT math obvious on the absent-minded driver problem and the Sleeping Beauty problem (though that's trickier).

The basic idea is to pretend that your decision is part of a zero-sum game against a non-anthropic, non-cooperating, generally non-confusing opponent.  In order to do this, you must construct an imaginary opponent such that for every choice you could make, their expected utility for that choice is the negative of your expected utility.  Then you simply do the thing your opponent likes least, and it is equivalent to doing the thing you'll like best.
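
To make the conversion concrete, here is a minimal sketch in Python (the choice names and utilities are illustrative placeholders, not numbers from any particular problem): negate your expected utility for each choice, pick the choice the opponent likes least, and check that it's the same as the choice you like best.

```python
# Zero-sum conversion, sketched: the imaginary opponent's expected utility for
# each of your choices is defined as the negative of yours, so the choice the
# opponent likes least is exactly the choice you like best.

def zero_sum_convert(my_expected_utility):
    """my_expected_utility: dict mapping each choice to *your* expected utility.
    Returns the opponent's (negated) utilities and their least-liked choice."""
    opponent = {choice: -u for choice, u in my_expected_utility.items()}
    worst_for_opponent = min(opponent, key=opponent.get)
    return opponent, worst_for_opponent

# Placeholder numbers, just to show the equivalence:
mine = {"A": 3.0, "B": 5.0}
opponent, pick = zero_sum_convert(mine)
assert pick == max(mine, key=mine.get)  # opponent's least-liked == my best choice
print(opponent, pick)                   # {'A': -3.0, 'B': -5.0} B
```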

 

Example in the case of the non-anthropic problem (yes, you should probably have that open in another tab):

Your opponent here is the experimenter, who really dislikes giving money to charity (characterization isn't necessary, but it's fun).  For every utilon that you, personally, would get from money going to charity when you say "yea" or "nay," the experimenter gets a negative utilon.

Proof that the experimenter's expected utilities are the negatives of yours is trivial in this case, since the utilities are opposites for every possible outcome, including cases where you're not a decider.  But things can be trickier in other problems, since expected utilities can be opposites without the utilities being exactly opposite for all outcomes.  For example, what happens in the case where the participants in the non-anthropic problem get individual candybars instead of collective money to charity?

Anyhow, now that we have our opponent whose expected utilities are the opposite of yours for every decision you make, you just have to make the decision that's worst for your opponent.  This is pretty easy, since our opponent doesn't have to deal with any confusing stuff - they just flip a coin, which to them is an ordinary 50/50 situation, and then pay out based on your decision.  So their expected value of "yea" is -550, while their expected value of "nay" is -700.
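
Spelled out, the experimenter's expectations are just an ordinary average over the fair coin.  Here is a quick sketch, using the payoffs as I read them from the linked problem ($1000 to charity on tails if all deciders say "yea," $100 on heads if the lone decider says "yea," $700 either way for "nay"):

```python
# The experimenter's expected value of each answer, computed from their
# ordinary, non-anthropic point of view: flip a fair coin, then pay out
# according to what the deciders answer.  Their utility is minus the payout.
P_HEADS = P_TAILS = 0.5

ev_yea = P_TAILS * (-1000) + P_HEADS * (-100)   # -550.0
ev_nay = P_TAILS * (-700) + P_HEADS * (-700)    # -700.0

# The experimenter dislikes "nay" more, so "nay" is the answer you want.
print(ev_yea, ev_nay)
```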

This valuation already takes into account cooperation and all that stuff - it's simply correct.  It's merely a coincidence that this looks like you didn't update on the evidence of whether you're a decider or not.  Though, now that you mention it, it's a general fact that in cooperation problems like this, you can construct a suitable opponent by just reversing your utility in all situations, giving you this "updatelessness."

 

Disclaimer: I haven't looked very hard for people writing up this trick before me.  Katja or someone quite possibly already has this on their blog somewhere.

15 comments


comment by wedrifid · 2012-07-27T00:55:27.238Z

I don't understand how this helps. It doesn't seem to allow anything I couldn't do before. Is it just that you find it easier to justify to yourself substituting the decision of the enemy for your own than the decision you would precommit to for your current one?

Replies from: Manfred, Xachariah, OrphanWilde
comment by Manfred · 2012-07-27T07:19:45.287Z

It doesn't seem to allow anything I couldn't do before.

Yes, basically. This is "secretly" just a different way of looking at UDT, and this particular way is easy to get to from a standard game-theoretic starting point, but harder to get to from a "rationality is what wins" starting point.

Given that the non-anthropic problem is interesting because it introduces tension between these two viewpoints (sorta), this trick is interesting because it reduces that tension.

Replies from: wedrifid
comment by wedrifid · 2012-07-27T08:08:08.799Z

Given this framing I like it!

Replies from: Manfred
comment by Manfred · 2012-07-27T16:25:01.792Z

Yay!

comment by Xachariah · 2012-07-27T04:29:24.381Z

Manfred could answer better, but I think this trick is designed to help with point of view.

The problem with anthropic problems is that you aren't sure which you is you. There are all sorts of branches that occur, and you don't know which branch you're on. You're trying your damnedest to look backwards up the branching probability tree and hoping you don't lose track of any branches.

By pretending you're the researcher, you're looking at possible branching futures the other way. You always have a frame of reference that doesn't change subjectively, and doesn't need updates. At least, that's how I think it's supposed to work.

comment by OrphanWilde · 2012-07-27T04:34:10.215Z

The helpfulness described here is this: The mathematics are simpler. [Xachariah's response explains why.]

Explanations for decision trees can also be simpler. Newcomblike problems become almost trivial to consider from Omega's perspective, for example, even in the counterfactual mugging case.

Replies from: wedrifid
comment by wedrifid · 2012-07-27T04:41:50.129Z

The mathematics are simpler.

I can do all the same mathematics without creating an imaginary enemy. The only thing that is changing here is how I choose to describe the mathematics in question to myself. This evidently allows Manfred to feel comfortable doing specific mathematics that he would not be comfortable doing without describing it in terms of a contrived enemy's perspective.

comment by Oscar_Cunningham · 2012-07-28T01:12:18.117Z

So their expected value of "yea" is -550, while their expected value of "nay" is -700.

This is only true if the experimenter doesn't know the result of the coin flip (otherwise it's either -1000/-700 or -100/-700, but you don't know which). But how do you decide to model your opponent as being someone who doesn't know the result, rather than someone who does? The only way I can think of is to follow UDT and always specify that your opponent is in a state of complete ignorance. But once we've borrowed this rule from UDT it seems like we're just plain using all of UDT. We've just made it more complicated by sticking a minus sign on the utilities and then picking the least favoured one. The use of an "opponent" doesn't seem to add any insight.

Suppose I rephrase UDT this way: Visualise a version of yourself before you had any evidence. Do what they would want you to do. As far as I can tell, this is just the above post with the minus signs taken out.

Replies from: Manfred
comment by Manfred · 2012-07-29T02:26:18.092Z

we're just plain using all of UDT

Yep. The exposition is merely different, and a few more of the assumptions hidden behind common sense :P

If this exposition doesn't "work" for you, then that's fine too.

comment by cousin_it · 2012-07-27T10:53:55.941Z

That's a nice trick, but it seems to me that a confused person could still manage to stay confused. They could say that being a decider provides information about the coinflip, which can be used to make the opponent suffer more...

comment by Luke_A_Somers · 2012-07-27T13:58:44.759Z

The chances of all 9 deciders agreeing are very low - lower than 10% - unless we can arrange a strong precommitment. Therefore, we can discount the $1000 award as unlikely, and go for the $700. Nay it is.

Replies from: endoself
comment by endoself · 2012-07-27T18:28:17.481Z

Please don't fight the hypothetical.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-07-30T14:30:24.099Z

Then make better hypotheticals?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-07-30T15:30:28.220Z

These considerations are not opposed. Both are good ideas: not fighting hypotheticals, and making better hypotheticals. Fallibility shouldn't be perceived as an egalitarian right: one person's flaw doesn't make another person's flaw OK.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-07-31T14:04:19.075Z

Yes, that's true as a general rule.

In this case, it's a 'meet in the middle' thing. This hypothetical asks us to completely ignore not something covered by 'ceteris paribus' or some other conventional hypothetical framing device, but the dominant effect in the system.