Timeless Decision Theory: Problems I Can't Solve

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-20T00:02:59.721Z · LW · GW · Legacy · 156 comments

Suppose you're out in the desert, running out of water, and soon to die - when someone in a motor vehicle drives up next to you.  Furthermore, the driver of the motor vehicle is a perfectly selfish ideal game-theoretic agent, and even further, so are you; and what's more, the driver is Paul Ekman, who's really, really good at reading facial microexpressions.  The driver says, "Well, I'll convey you to town if it's in my interest to do so - so will you give me $100 from an ATM when we reach town?"

Now of course you wish you could answer "Yes", but as an ideal game theorist yourself, you realize that, once you actually reach town, you'll have no further motive to pay off the driver.  "Yes," you say.  "You're lying," says the driver, and drives off leaving you to die.

If only you weren't so rational!

This is the dilemma of Parfit's Hitchhiker, and the above is the standard resolution according to mainstream philosophy's causal decision theory, which also two-boxes on Newcomb's Problem and defects in the Prisoner's Dilemma.  Of course, any self-modifying agent who expects to face such problems - in general, or in particular - will soon self-modify into an agent that doesn't regret its "rationality" so much.  So from the perspective of a self-modifying-AI-theorist, classical causal decision theory is a wash.  And indeed I've worked out a theory, tentatively labeled "timeless decision theory", which covers these three Newcomblike problems and delivers a first-order answer that is already reflectively consistent, without need to explicitly consider such notions as "precommitment".  Unfortunately this "timeless decision theory" would require a long sequence to write up, and it's not my current highest writing priority unless someone offers to let me do a PhD thesis on it.
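
(A minimal sketch of the reflective-consistency point in code rather than prose: compare the outcomes of simply being a payer-type agent versus a refuser-type agent when the driver predicts your type accurately. The numeric value assigned to dying in the desert is an arbitrary assumption for illustration.)

    # A toy illustration, not the decision theory itself: a perfect-predictor driver
    # rescues you iff he predicts you will pay. The -1,000,000 "utility of dying in
    # the desert" is an assumed placeholder number.

    def outcome(agent_type):
        rescued = (agent_type == "payer")   # the prediction matches your actual type
        if not rescued:
            return -1_000_000               # left in the desert
        return -100                         # rescued, then you pay the $100 in town

    for agent_type in ("payer", "refuser"):
        print(agent_type, outcome(agent_type))
    # payer -100
    # refuser -1000000  -- the agent who "rationally" won't pay never gets the ride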

However, there are some other timeless decision problems for which I do not possess a general theory.

For example, there's a problem introduced to me by Gary Drescher's marvelous Good and Real (OOPS: The below formulation was independently invented by Vladimir Nesov; Drescher's book actually contains a related dilemma in which box B is transparent, and only contains $1M if Omega predicts you will one-box whether B appears full or empty, and Omega has a 1% error rate) which runs as follows:

Suppose Omega (the same superagent from Newcomb's Problem, who is known to be honest about how it poses these sorts of dilemmas) comes to you and says:

"I just flipped a fair coin.  I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000.  And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads.  The coin came up heads - can I have $1000?"

Obviously, the only reflectively consistent answer in this case is "Yes - here's the $1000", because if you're an agent who expects to encounter many problems like this in the future, you will self-modify to be the sort of agent who answers "Yes" to this sort of question - just like with Newcomb's Problem or Parfit's Hitchhiker.

But I don't have a general theory which replies "Yes".  At the point where Omega asks me this question, I already know that the coin came up heads, so I already know I'm not going to get the million.  It seems like I want to decide "as if" I don't know whether the coin came up heads or tails, and then implement that decision even if I know the coin came up heads.  But I don't have a good formal way of talking about how my decision in one state of knowledge has to be determined by the decision I would make if I occupied a different epistemic state, conditioning using the probability previously possessed by events I have since learned the outcome of...  Again, it's easy to talk informally about why you have to reply "Yes" in this case, but that's not the same as being able to exhibit a general algorithm.
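
(Here is the informal argument in numbers, as a sketch rather than the missing general algorithm: evaluate each policy from the epistemic state before the coin is seen.)

    # Expected value of each policy, evaluated from the epistemic state *before*
    # the coin is seen; Omega pays out on tails iff it predicts you pay on heads.

    def expected_value(pays_on_heads):
        heads = -1000 if pays_on_heads else 0
        tails = 1_000_000 if pays_on_heads else 0
        return 0.5 * heads + 0.5 * tails

    print(expected_value(True))    # 499500.0
    print(expected_value(False))   # 0.0
    # The difficulty: after learning "heads", a straightforward update says paying
    # is a pure $1000 loss, yet the agent you want to be is the one who pays.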

Another stumper was presented to me by Robin Hanson at an OBLW meetup.  Suppose you have ten ideal game-theoretic selfish agents and a pie to be divided by majority vote.  Let's say that six of them form a coalition and decide to vote to divide the pie among themselves, one-sixth each.  But then two of them think, "Hey, this leaves four agents out in the cold.  We'll get together with those four agents and offer them to divide half the pie among the four of them, leaving one quarter apiece for the two of us.  We get a larger share than one-sixth that way, and they get a larger share than zero, so it's an improvement from the perspectives of all six of us - they should take the deal."  And those six then form a new coalition and redivide the pie.  Then another two of the agents think:  "The two of us are getting one-eighth apiece, while four other agents are getting zero - we should form a coalition with them, and by majority vote, give each of us one-sixth."

And so it goes on:  Every majority coalition and division of the pie, is dominated by another majority coalition in which each agent of the new majority gets more pie.  There does not appear to be any such thing as a dominant majority vote.
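
(A small constructive sketch of the claim: given any division of the pie among the ten agents, the function below produces another division that some six-agent majority strictly prefers, so no division is stable. Handing the leftover to the six smallest shareholders is just one convenient construction.)

    from fractions import Fraction

    def dominate(alloc):
        """Given any division of a unit pie among ten agents, return another division
        in which some six-agent majority each get strictly more."""
        order = sorted(range(len(alloc)), key=lambda i: alloc[i])
        coalition, excluded = order[:6], order[6:]
        surplus = sum(alloc[i] for i in excluded)   # the four largest shares can't all be zero
        new = list(alloc)
        for i in excluded:
            new[i] = Fraction(0)
        for i in coalition:
            new[i] = alloc[i] + surplus / 6         # every coalition member strictly gains
        return new

    alloc = [Fraction(1, 10)] * 10                  # start anywhere, e.g. the even split
    for _ in range(4):
        alloc = dominate(alloc)
        print([str(x) for x in alloc])
    # The loop never reaches a fixed point: no division is undominated.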

(Robin Hanson actually used this to suggest that if you set up a Constitution which governs a society of humans and AIs, the AIs will be unable to conspire among themselves to change the constitution and leave the humans out in the cold, because then the new compact would be dominated by yet other compacts and there would be chaos, and therefore any constitution stays in place forever.  Or something along those lines.  Needless to say, I do not intend to rely on such, but it would be nice to have a formal theory in hand which shows how ideal reflectively consistent decision agents will act in such cases (so we can prove they'll shed the old "constitution" like used snakeskin.))

Here's yet another problem whose proper formulation I'm still not sure of, and it runs as follows.  First, consider the Prisoner's Dilemma.  Informally, two timeless decision agents with common knowledge of the other's timeless decision agency, but no way to communicate or make binding commitments, will both Cooperate because they know that the other agent is in a similar epistemic state, running a similar decision algorithm, and will end up doing the same thing that they themselves do.  In general, on the True Prisoner's Dilemma, facing an opponent who can accurately predict your own decisions, you want to cooperate only if the other agent will cooperate if and only if they predict that you will cooperate.  And the other agent is reasoning similarly:  They want to cooperate only if you will cooperate if and only if you accurately predict that they will cooperate.

But there's actually an infinite regress here which is being glossed over - you won't cooperate just because you predict that they will cooperate, you will only cooperate if you predict they will cooperate if and only if you cooperate.  So the other agent needs to cooperate if they predict that you will cooperate if you predict that they will cooperate... (...only if they predict that you will cooperate, etcetera).

On the Prisoner's Dilemma in particular, this infinite regress can be cut short by expecting that the other agent is doing symmetrical reasoning on a symmetrical problem and will come to a symmetrical conclusion, so that you can expect their action to be the symmetrical analogue of your own - in which case (C, C) is preferable to (D, D).  But what if you're facing a more general decision problem, with many agents having asymmetrical choices, and everyone wants to have their decisions depend on how they predict that other agents' decisions depend on their own predicted decisions?  Is there a general way of resolving the regress?
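
(To make the symmetric shortcut concrete, a sketch in code, using the usual illustrative payoff numbers rather than anything specific to this post:)

    # Toy payoffs to "me" for (my_move, their_move); standard illustrative PD numbers.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def symmetric_choice():
        # If the other agent runs this exact procedure on the same problem, whatever
        # I output, they output too -- so only the diagonal outcomes are reachable.
        return max(("C", "D"), key=lambda move: PAYOFF[(move, move)])

    print(symmetric_choice())   # 'C', since (C, C) = 3 beats (D, D) = 1
    # The open problem is the asymmetric case, where this shortcut does not apply.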

On Parfit's Hitchhiker and Newcomb's Problem, we're told how the other behaves as a direct function of our own predicted decision - Omega rewards you if you (are predicted to) one-box, the driver in Parfit's Hitchhiker saves you if you (are predicted to) pay $100 on reaching the city.  My timeless decision theory only functions in cases where the other agents' decisions can be viewed as functions of one argument, that argument being your own choice in that particular case - either by specification (as in Newcomb's Problem) or by symmetry (as in the Prisoner's Dilemma).  If their decision is allowed to depend on how your decision depends on their decision - like saying, "I'll cooperate, not 'if the other agent cooperates', but only if the other agent cooperates if and only if I cooperate - if I predict the other agent to cooperate unconditionally, then I'll just defect" - then in general I do not know how to resolve the resulting infinite regress of conditionality, except in the special case of predictable symmetry.

You perceive that there is a definite note of "timelessness" in all these problems.

Any offered solution may assume that a timeless decision theory for direct cases already exists - that is, if you can reduce the problem to one of "I can predict that if (the other agent predicts) I choose strategy X, then the other agent will implement strategy Y, and my expected payoff is Z", then I already have a reflectively consistent solution which this margin is unfortunately too small to contain.
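
(For concreteness, a sketch of what the assumed "direct case" machinery amounts to, with Newcomb's Problem as the one-argument example; the function names and numbers here are illustrative placeholders, not the theory itself.)

    # The "direct" case: the other agent's move is a known function of (its
    # prediction of) my strategy alone, so I just search over my strategies.

    MY_STRATEGIES = ("one-box", "two-box")

    def omega_move(my_strategy):
        return "fill B" if my_strategy == "one-box" else "leave B empty"

    def payoff(my_strategy, omega):
        box_b = 1_000_000 if omega == "fill B" else 0
        box_a = 1000 if my_strategy == "two-box" else 0
        return box_a + box_b

    best = max(MY_STRATEGIES, key=lambda s: payoff(s, omega_move(s)))
    print(best)   # 'one-box'
    # The unsolved cases above are the ones where the other agent's move depends not
    # on my strategy, but on how my strategy depends on theirs.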

(In case you're wondering, I'm writing this up because one of the SIAI Summer Project people asked if there was any Friendly AI problem that could be modularized and handed off and potentially written up afterward, and the answer to this is almost always "No", but this is actually the one exception that I can think of.  (Anyone actually taking a shot at this should probably familiarize themselves with the existing literature on Newcomblike problems - the edited volume "Paradoxes of Rationality and Cooperation" should be a sufficient start (and I believe there's a copy at the SIAI Summer Project house.)))

156 comments

Comments sorted by top scores.

comment by Wei Dai (Wei_Dai) · 2009-07-20T08:18:50.121Z · LW(p) · GW(p)

There does not appear to be any such thing as a dominant majority vote.

Eliezer, are you aware that there's an academic field studying issues like this? It's called Social Choice Theory, and happens to be covered in chapter 4 of Hervé Moulin's Fair Division and Collective Welfare, which I recommended in my post about Cooperative Game Theory.

I know you're probably approaching this problem from a different angle, but it should still be helpful to read what other researchers have written about it.

A separate comment I want to make is that if you want others to help you solve problems in "timeless decision theory", you really need to publish the results you've got already. What you're doing now is as if Einstein had asked people to help him predict the temperature of black holes before publishing the general theory of relativity.

As far as needing a long sequence, are you assuming that the reader has no background in decision theory? What if you just write to an audience of professional decision theorists, or someone who has at least read "The Foundations of Causal Decision Theory" or the equivalent?

Replies from: cousin_it, Wei_Dai
comment by cousin_it · 2009-07-20T09:18:00.932Z · LW(p) · GW(p)

Seconded. I, for one, would be perfectly OK with posts requiring a lot of unfamiliar background math as long as they're correct and give references. For example, Scott Aaronson isn't afraid of scary topics and I'm not afraid of using his posts as entry points into the maze.

Replies from: None
comment by [deleted] · 2009-07-23T15:43:14.904Z · LW(p) · GW(p)

For that matter, I'm sure someone else would be willing to write a sequence on decision theory to ensure everyone has the required background knowledge. This might work even better if Eliezer suggested some topics to be covered in the sequence so that the background was more specific.

In fact, I would happily do that and I'm sure others would too.

comment by Wei Dai (Wei_Dai) · 2010-07-06T20:56:43.920Z · LW(p) · GW(p)

There does not appear to be any such thing as a dominant majority vote.

I want to note that this is also known in Cooperative Game Theory as an "empty core". (In Social Choice Theory it's studied under "majority cycling".) See http://www.research.rmutp.ac.th/research/Game%20Theory%20and%20Economic%20Analysis.pdf#page=85 for a good explanation of how Cooperative Game Theory views the problem. Unfortunately it doesn't look like anyone has a really good solution.

comment by PhilGoetz · 2009-08-06T00:57:43.498Z · LW(p) · GW(p)

Unfortunately this "timeless decision theory" would require a long sequence to write up, and it's not my current highest writing priority unless someone offers to let me do a PhD thesis on it.

  • But it is the writeup most-frequently requested of you, and also, I think, the thing you have done that you refer to the most often.

  • Nobody's going to offer. You have to ask them.

comment by Vladimir_Nesov · 2009-07-20T01:52:38.352Z · LW(p) · GW(p)

In case you're wondering, I'm writing this up because one of the SIAI Summer Project people asked if there was any Friendly AI problem that could be modularized and handed off and potentially written up afterward, and the answer to this is almost always "No"

Does it mean that the problem isn't reduced enough to reasonably modularize? It would be nice if you wrote up an outline of the state of research at SIAI (even a brief one with unexplained labels), or an explanation of why you won't.

comment by Psychohistorian · 2009-07-20T06:31:53.059Z · LW(p) · GW(p)

Hanson's example of ten people dividing the pie seems to hinge on arbitrarily passive actors who get to accept and make propositions instead of being able to solicit other deals or make counter proposals, and it is also contingent on infinite and costless bargaining time. The bargaining time bit may be a fair (if unrealistic) assumption, but the passivity does not make sense. It really depends on the kind of commitments and bargains players are able to make and enforce, and the degree/order of proposals from outgroup and ingroup members.

When the first two defectors say, "Hey, you each get an eighth if you join us," the four could pick another two of the in-crowd and say, "Hey, they offered us Y apiece, but we'll join you instead if you each give us X, with X > Y (which is actually profitable to the other four so long as X < 1/4 - they get cut out entirely if they can't bargain)." No matter how it is divided, there will always be a subgroup in the in-crowd that could profitably bargain with the out-crowd, and there will always be a different subgroup in the in-crowd that will be able to make a better offer. So long as there is an out-crowd, there are people who can bargain profitably, and so long as the in-crowd is > 6, people can be profitably removed.

If bargaining time is finite (or especially if it has non-zero cost), I suspect, but can't prove (for lack of effort/technical proficiency, not saying it's unprovable), that each actor will opt for the even 10-person split (especially if risk-averse), because it is (statistically) equivalent (or superior) to the probability-weighted sum of the other potential arrangements.

Replies from: CronoDAS
comment by CronoDAS · 2009-07-20T06:57:25.275Z · LW(p) · GW(p)

What if we try a simpler model?

Let's go from ten agents to two, with the stipulation that nobody gets any pie until both agents agree on the split...

Replies from: cousin_it
comment by cousin_it · 2009-07-20T10:31:54.001Z · LW(p) · GW(p)

This is the Nash bargaining game. Voting plays no role there, but it's a necessary ingredient in our game; this means we've simplified too much.

Replies from: Velochy
comment by Velochy · 2009-07-20T11:00:44.704Z · LW(p) · GW(p)

But three people should already suffice. I'm fairly convinced that this game is unstable, in the sense that it would not make sense for any of them to agree to get 1/3, as they can always guarantee themselves more by defecting with someone (even by offering them 1/6 - epsilon, which is REALLY hard to turn down). It seems that a given majority getting 1/2 each would be a more probable solution, but you would really need to formalize the rules before this can be proven. I'm a cryptologist, so this is sadly not really my area...

Replies from: Psychohistorian
comment by Psychohistorian · 2009-07-20T20:29:09.075Z · LW(p) · GW(p)

I almost posted on the three-person situation earlier, but what I wrote wasn't cogent enough. It does seem like it should work as an archetype for any N > 2.

The problem is how the game is iterated. Call the players A, B, and C. If A says, "B, let's go 50-50," and you assume C doesn't get to make a counter-offer and they vote immediately, 50-50-0 is clearly the outcome. This is probably also the case for the 10-person game if there's no protracted bargaining.

If there is protracted bargaining, it turns into an infinite regression as long as there is an out-group, and possibly even without an outgroup. Take this series of proposals, each of which will be preferred to the one prior (format is Proposer:A gets-B gets-C gets):

A:50-50-0

C:0-55-45

A:50-0-50

B: 55-45-0

C:0-55-45

A:50-0-50 ...

There's clearly no stable equilibrium. It seems (though I'm not sure how to prove this) that an equal split is the appropriate efficient outcome. Any action by any individual will create an outgroup that will spin them into an infinite imbalance. Moreover, if we are to arbitrarily stop somewhere along that infinite chain, the expected value for each player is going to be 100/3 (they get part of a two-way split twice which should average to 50 each time overall, and they get zero once per three exchanges). Thus, at 33-33-33, one can't profitably defect. At 40-40-20, C could defect and have a positive expected outcome.
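
(A quick check of that arithmetic, under the stated assumption that over the cycle each player is in a two-way split averaging 50 twice out of every three proposals and left out once:)

    ev_in_cycle = (50 + 50 + 0) / 3
    print(ev_in_cycle)        # 33.33... -- so nobody expects to gain by leaving 33-33-33
    print(ev_in_cycle > 20)   # True -- at 40-40-20, C expects to gain by defecting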

If the players have no bargaining costs whatsoever, and always have the opportunity to bargain before a deal is voted on, and have an infinite amount of time and do not care how long it takes to reach agreement (or if agreement is reached), then it does seem like you get an infinite loop, because there's always going to be an outgroup that can outbid one of the ingroup. This same principle should also apply to the 10-person model; with infinite free time and infinite free bargaining, no equilibrium can be reached. If there is some cost to defecting, or a limitation on bargaining, there should be an even N/2+1-way split (depending admittedly on how those costs and limits are defined). If there is no limitation on bargaining and no cost to defecting, but time has a cost or time will be arbitrarily "called," an even N-way split seems like the most likely/efficient outcome. The doubly-infinite situation is so far divorced from reality that it does not seem worth losing sleep over.

Also, the problem may stem from our limitation of thinking of this as a linear series of propositions, because that's how people would have to actually bargain. In the no-repeated bargaining game, whether it's 50-50-0 or 0-50-50 all depends on who asks first, which seems like an improper and unrealistic determining factor. This linear, proposer-centered view may not be how such beings would actually bargain.

Replies from: cousin_it
comment by cousin_it · 2009-07-21T05:28:15.021Z · LW(p) · GW(p)

The example of the Rubinstein bargaining model suggests that you could make players alternate offers and introduce exponential temporal discounting. An equal split isn't logically necessary in this case: a player's payoff will likely depend on their personal rate of utility discounting, also known as "impatience", and others' perceptions of it. The search keyword is "n-person bargaining"; there seems to be a lot of literature that I'm too lazy and stupid to quickly summarize.
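
(For reference, a minimal sketch of the two-player Rubinstein result being alluded to; the example discount factors are arbitrary:)

    def rubinstein_split(d1, d2):
        """Subgame-perfect division of a unit pie under Rubinstein alternating offers,
        with player 1 proposing first; d1, d2 are the players' discount factors."""
        share1 = (1 - d2) / (1 - d1 * d2)
        return share1, 1 - share1

    print(rubinstein_split(0.9, 0.9))   # equally patient players: near-even, slight first-mover edge
    print(rubinstein_split(0.9, 0.5))   # the more impatient player 2 gets much less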

comment by cousin_it · 2009-12-28T20:58:27.982Z · LW(p) · GW(p)

Here's a comment that took me way too long to formulate:

On the Prisoner's Dilemma in particular, this infinite regress can be cut short by expecting that the other agent is doing symmetrical reasoning on a symmetrical problem and will come to a symmetrical conclusion...

Eliezer, if such reasoning from symmetry is allowed, then we sure don't need your "TDT" to solve the PD!

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-12-28T22:47:04.390Z · LW(p) · GW(p)

TDT allows you to use whatever you can prove mathematically. If you can prove that two computations have the same output because their global structures are isomorphic, it doesn't matter if the internal structure is twisty or involves regresses you haven't yet resolved. However, you need a license to use that sort of mathematical reasoning in the first place, which is provided by TDT but not CDT.

Replies from: Perplexed
comment by Perplexed · 2010-08-01T00:39:49.131Z · LW(p) · GW(p)

Strategies are probability (density) functions over choices. Behaviors are the choices themselves. Proving that two strategies are identical (by symmetry, say) doesn't license you to assume that the behaviors are the same. And it is behaviors you seem to need here. Two random variables with the same PDF are not necessarily equal.

Selten got a Nobel for re-introducing time into game theory (with the concept of subgame perfect equilibrium as a refinement of Nash equilibrium). I think he deserved the prize. If you think that you can overturn Selten's work with your TDT, then I say "To hell with a PhD. Write it up and go straight to Stockholm."

Replies from: timtyler, Sniffnoy
comment by timtyler · 2010-11-30T22:57:32.748Z · LW(p) · GW(p)

Strategies are probability (density) functions over choices.

After looking at this: http://lesswrong.com/lw/vp/worse_than_random/

...I figure Yudkowsky will not be able to swallow this first sentence - without indigestion.

Replies from: Sniffnoy
comment by Sniffnoy · 2010-11-30T23:51:00.038Z · LW(p) · GW(p)

In this case, I can only conclude that you haven't read thoroughly enough.

(There are exceptions to this rule, but they have to do with defeating cryptographic adversaries - that is, preventing someone else's intelligence from working on you. Certainly entropy can act as an antidote to intelligence!)

I think EY's restriction to "cryptographic adversaries" is needlessly specific; any adversary (or other player) will do.

Of course, this is still not really relevant to the original point, as, well, when is there reason to play a mixed strategy in Prisoner's Dilemma?

Replies from: Vaniver, timtyler
comment by Vaniver · 2010-11-30T23:56:30.821Z · LW(p) · GW(p)

Even if your strategy is (1,0) or (0,1) on (C,D), isn't that a probability distribution? It might not be valuable to express it that way for this instance, but you do get the benefits that if you ever do want a random strategy you just change your numbers around instead of having to develop a framework to deal with it.

comment by timtyler · 2010-12-01T09:15:10.103Z · LW(p) · GW(p)

The rule in question is concerned with improving on randomness. It may be tricky to improve on randomness by very much if, say, you face a highly-intelligent opponent playing the matching pennies game. However, it is usually fairly simple to equal it - even when facing a smarter, cryptography-savvy opponent - just use a secure RNG with a reasonably secure seed.

comment by Sniffnoy · 2010-08-03T04:51:28.383Z · LW(p) · GW(p)

Strategies are probability (density) functions over choices. Behaviors are the choices themselves. Proving that two strategies are identical (by symmetry, say) doesn't license you to assume that the behaviors are the same.

...unless the resulting strategies are unmixed, as will usually be the case with Prisoner's Dilemma?

comment by A1987dM (army1987) · 2011-09-14T12:50:12.766Z · LW(p) · GW(p)

Is Parfit's Hitchhiker essentially the same as Kavka's toxin, or is there some substantial difference between the two I'm missing?

Replies from: casebash
comment by casebash · 2016-01-05T13:38:17.503Z · LW(p) · GW(p)

I think the question is very similar, but there is a slight difference in focus. Kavka's toxin focuses on whether a person can intend something if they also intend to change their mind. Parfit's Hitchhiker focuses on another person's prediction.

comment by Will_Newsome · 2011-07-31T05:28:29.915Z · LW(p) · GW(p)

For example, there's a problem introduced to me by Gary Drescher's marvelous Good and Real (OOPS: The below formulation was independently invented by Vladimir Nesov

For a moment I was wondering how the Optimally Ordered Problem Solver was relevant.

Replies from: katydee
comment by katydee · 2011-07-31T07:19:51.889Z · LW(p) · GW(p)

Come on, why would anyone downvote this?

comment by cousin_it · 2009-07-20T09:49:50.853Z · LW(p) · GW(p)

Is your majority vote problem related to Condorcet's paradox? It smells that way, but I can't put my finger on why.

I cheated the PD infinite regress problem with a quine trick in Re-formalizing PD. The asymmetric case seems to be hard because fair division of utility is hard, not because quining is hard. Given a division procedure that everyone accepts as fair, the quine trick seems to solve the asymmetric case just as well.

Post your "timeless decision theory" already. If it's correct, it shouldn't be that complex. With your intelligence you can always write a PhD on some other AI topic should the opportunity arise. But after conversations with Vladimir Nesov I was kinda under the impression that you could solve the asymmetric PD-like cases too; if not, I'm a little disappointed in advance. :-(

comment by Jonathan_Graehl · 2009-07-20T01:31:49.497Z · LW(p) · GW(p)

"I believe X to be like me" => "whatever I decide, X will decide also" seems tenuous without some proof of likeness that is beyond any guarantee possible in humans.

I can accept your analysis in the context of actors who have irrevocably committed to some mechanically predictable decision rule, which, along with perfect information on all the causal inputs to the rule, gives me perfect predictions of their behavior, but I'm not sure such an actor could ever trust its understanding of an actual human.

Maybe you could aspire to such determinism in a proven-correct software system running on proven-robust hardware.

Replies from: Eliezer_Yudkowsky, SoullessAutomaton
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-20T01:52:43.564Z · LW(p) · GW(p)

"I believe X to be like me" => "whatever I decide, X will decide also" seems tenuous without some proof of likeness that is beyond any guarantee possible in humans...

Maybe you could aspire to such determinism in a proven-correct software system running on proven-robust hardware.

Well, yeah, this is primarily a theory for AIs dealing with other AIs.

You could possibly talk about human applications if you knew that the N of you had the same training as rationalists, or if you assigned probabilities to the others having such training.

Replies from: Tedav
comment by Tedav · 2014-02-28T16:44:00.208Z · LW(p) · GW(p)

For X to be able to model the decisions of Y with 100% accuracy, wouldn't X require a more sophisticated model?

If so, why would supposedly symmetrical models retain this symmetry?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2014-02-28T20:10:49.963Z · LW(p) · GW(p)

For X to be able to model the decisions of Y with 100% accuracy, wouldn't X require a more sophisticated model?

Nope. http://arxiv.org/abs/1401.5577

comment by SoullessAutomaton · 2009-07-20T01:49:23.533Z · LW(p) · GW(p)

I'm not sure such an actor could ever trust its understanding of an actual human.

Let's play a little game; you and an opponent, 10 rounds of the prisoner's dilemma. It will cost you each $5 to play, with the following payouts on each round:

  • (C,C) = $0.75 each
  • (C,D) = $1.00 for D, $0 for C
  • (D,D) = $0.25 each

Conventional game theory says both people walk away with $2.50 and a grudge against each other, and I, running the game, pocket the difference.

Your opponent is Eliezer Yudkowsky.

How much money do you expect to have after the final round?
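
(A quick tally of the two focal outcomes under these payouts, as a sketch; totals are before subtracting the $5 entry fee:)

    ROUNDS = 10
    PAYOFF = {("C", "C"): 0.75, ("C", "D"): 0.00, ("D", "C"): 1.00, ("D", "D"): 0.25}

    def walk_away_with(my_move, their_move):
        return ROUNDS * PAYOFF[(my_move, their_move)]

    print(walk_away_with("D", "D"))   # 2.5 -- the "conventional" outcome
    print(walk_away_with("C", "C"))   # 7.5 -- ten rounds of mutual cooperation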

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-20T01:54:25.318Z · LW(p) · GW(p)

But that's not the true PD.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-07-20T02:02:03.849Z · LW(p) · GW(p)

The statistical predictability of human behavior in less extreme circumstances is a much weaker constraint. I thought the (very gentle) PD presented sufficed to make the point that prediction is not impossible even in a real-world scenario.

I don't know that I have confidence in even you to cooperate on the True PD--sorry. A hypothetical transhuman Bayesian intelligence with your value system? Quite possibly.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-20T04:06:15.285Z · LW(p) · GW(p)

Well, let me put it this way - if my opponent is Eliezer Yudkowsky, I would be shocked to walk away with anything but $7.50.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-07-20T10:23:30.500Z · LW(p) · GW(p)

Well, obviously. But the more interesting question is what if you suspect, but are not certain, that your opponent is Eliezer Yudkowsky? Assuming identity makes the problem too easy.

My position is that I'd expect a reasonable chance that an arbitrary, frequent LW participant playing this game against you would also end with 10 (C,C)s. I'd suggest actually running this as an experiment if I didn't think I'd lose money on the deal...

Replies from: Jonathan_Graehl, Gavin
comment by Jonathan_Graehl · 2009-07-20T19:28:50.678Z · LW(p) · GW(p)

Harsher dilemmas (more meaningful stake, loss from an unreciprocated cooperation that may not be recoverable in the remaining iterations) would make me increasingly hesitant to assume "this person is probably like me".

This makes me feel like I'm in "no true Scotsman" territory; nobody "like me" would fail to optimistically attempt cooperation. But if caring more about the difference in outcomes makes me less optimistic about other-similarity, then in a hypothetical where I am matched up against essentially myself (but I don't know this), I defeat myself exactly when it matters - when the payoff is the highest.

Replies from: MBlume
comment by MBlume · 2009-07-20T19:35:25.144Z · LW(p) · GW(p)

and this is exactly the problem: If your behavior on the prisoner's dilemma changes with the size of the outcome, then you aren't really playing the prisoner's dilemma. Your calculation in the low-payoff case was being confused by other terms in your utility function, terms for being someone who cooperates -- terms that didn't scale.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2009-07-21T01:27:28.327Z · LW(p) · GW(p)

Yes, my point was that my variable skepticism is surely evidence of bias or rationalization, and that we can't learn much from "mild" PD. I do also agree that warm fuzzies from being a cooperator don't scale.

comment by Gavin · 2009-07-20T19:14:49.534Z · LW(p) · GW(p)

If we wanted to be clever we could include Eliezer playing against himself (just report back to him the same value) as a possibility, though if it's a high probability that he faces himself it seems pointless.

I'd be happy to front the (likely loss of) $10.

It might be possible to make it more like the true prisoner's dilemma if we could come up with two players, each of whom wants the money donated to a cause that they consider worthy but the other player opposes or considers ineffective.

Though I have plenty of paperclips, sadly I lack the resources to successfully simulate Eliezer's true PD . . .

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-07-20T22:11:21.798Z · LW(p) · GW(p)

I'd be happy to front the (likely loss of) $10.

Meaningful results would probably require several iterations of the game, though, with different players (also, the expected loss in my scenario was $5 per game).

I seem to recall Douglas Hofstadter did an experiment with several of his more rational friends, and was distressed by the globally rather suboptimal outcome. I do wonder if we on LW would do better, with or without Eliezer?

comment by SoullessAutomaton · 2009-07-20T00:22:32.835Z · LW(p) · GW(p)

As a first off-the-cuff thought, the infinite regress of conditionality sounds suspiciously close to general recursion. Do you have any guarantee that a fully general theory that gives a decision wouldn't be equivalent to a Halting Oracle?

ETA: If you don't have such a guarantee, I would submit that the first priority should be either securing one, or proving isomorphism to the Entscheidungsproblem and, thus, the impossibility of the fully general solution.

Replies from: JulianMorrison, Eliezer_Yudkowsky
comment by JulianMorrison · 2009-07-20T01:02:50.526Z · LW(p) · GW(p)

Hah! Same thought!

What's the moral action when the moral problem seems to diverge, and you don't have the compute resources to follow it any further? Flip a coin?

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-07-20T01:11:34.119Z · LW(p) · GW(p)

I would suggest that the best move would be to attempt to coerce the situation into one where the infinite regress is subject to analysis without Halting issues, in a way that is predicted to be least likely to have negative impacts.

Remember, Halting is only undecidable in the general case, and it is often quite tractable to decide on some subset of computations.

Replies from: JulianMorrison
comment by JulianMorrison · 2009-07-20T01:45:57.517Z · LW(p) · GW(p)

Unless you're saying "don't answer the question, use the answer from a different but closely related one", a moral problem is either going to be known to be transformable into a decidable halting problem, or not. And if not, my above question remains unanswered.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-07-20T01:57:17.300Z · LW(p) · GW(p)

I meant something more like "don't make a decision, change the context such that there is a different question that must be answered". In practice this would probably mean colluding to enforce some sort of amoral constraints on all parties.

I grant that at some point you may get irretrievably stuck. And no, I don't have an answer, sorry. Choosing randomly is likely to be better than inaction, though.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-20T01:51:02.030Z · LW(p) · GW(p)

Obviously any game theory is equivalent to the halting problem if your opponents can be controlled by arbitrary Turing machines. But this sort of infinite regress doesn't come from a big complex starting point, it comes from a simple starting point that keeps passing the recursive buck.

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-07-20T02:41:01.498Z · LW(p) · GW(p)

I understand that much, but if there's anything I've learned from computer science, it's that Turing completeness can pop up in the strangest places.

I of course admit it was an off-the-cuff, intuitive thought, but the structure of the problem reminds me vaguely of the combinator calculus, particularly Smullyan's Mockingbird forest.

Replies from: thomblake
comment by thomblake · 2009-07-20T02:58:06.105Z · LW(p) · GW(p)

This was a clever ploy to distract me with logic problems, wasn't it?

Replies from: SoullessAutomaton
comment by SoullessAutomaton · 2009-07-20T03:00:37.861Z · LW(p) · GW(p)

No, but mentioning the rest of Smullyan's books might be.

comment by [deleted] · 2012-12-19T22:24:11.767Z · LW(p) · GW(p)

If I were forced to pay $100 upon losing, I'd have a net gain of $4950 each time I play the game, on average. Transitioning from this into the game as it currently stands, I've merely been given an additional option. As a rationalist, I should not regret being one. Even knowing I won't get the $10,000, as the coin came up heads, I'm basically paying $100 for the other quantum me to receive $10,000. As the other quantum me, who saw the coin come up tails, my desire to have had the first quantum me pay $100 outweighs the other quantum me's desire to not lose $100. I'm not going to defect against myself for want of $100, and if you (general you, not Eliezer specifically) would, you should probably carry a weapon and a means of defending yourself against it, because you can't trust yourself.

comment by wedrifid · 2009-07-29T02:06:42.105Z · LW(p) · GW(p)

Unfortunately this "timeless decision theory" would require a long sequence to write up, and it's not my current highest writing priority unless someone offers to let me do a PhD thesis on it.

Can someone tell me the matrix of pay-offs for taking on Eliezer as a PhD student?

Replies from: Larks
comment by Larks · 2010-07-30T23:24:53.546Z · LW(p) · GW(p)

Well, for starters you have a 1/3^^^3 chance of 3^^^3 utils...

comment by JamesAndrix · 2009-07-20T16:00:07.640Z · LW(p) · GW(p)

I swear I'll give you a PhD if you write the thesis. On fancy paper and everything.

Would timeless decision theory handle negotiation with your future self? For example, if a timeless decision agent likes paperclips today but knows it is going to be modified to like apples tomorrow (and not care a bit about paperclips), will it abstain from destroying the apple orchard, and its future self abstain from destroying the paperclips in exchange?

And is negotiation the right way to think about reconciling the difference between what I now want and what a predicted smarter, grown up, more knowledgeable version of me would want? or am I going the wrong way?

Replies from: MBlume, Peter_de_Blanc
comment by MBlume · 2009-07-20T18:09:20.447Z · LW(p) · GW(p)

to talk about turning a paperclip maximizer into an apple maximizer is needlessly confusing. Better to talk about destroying a paperclip maximizer and creating an apple maximizer. And yes, timeless decision theory should allow these two agents to negotiate, though it gets confusing fast.

comment by Peter_de_Blanc · 2009-07-20T18:03:19.239Z · LW(p) · GW(p)

In what sense is that a future self?

Replies from: JamesAndrix
comment by JamesAndrix · 2009-07-20T18:33:41.362Z · LW(p) · GW(p)

In the paperclip->apple scenario, in the sense that it retains the memory and inherits the assets of the original, and everything else that keeps you 'you' when you start wanting something different.

In the simulation scenario, I'm not sure.

comment by Liron · 2009-07-20T12:06:35.658Z · LW(p) · GW(p)

But I don't have a general theory which replies "Yes" [to a counterfactual mugging].

You don't? I was sure you'd handled this case with Timeless Decision Theory.

I will try to write up a sketch of my idea, which involves using a Markov State Machine to represent world states that transition into one another. Then you distinguish evidence about the structure of the MSM, from evidence of your historical path through the MSM. And the best decision to make in a world state is defined as the decision which is part of a policy that maximizes expected utility for the whole MSM.

OK, I just tried for four hours but couldn't successfully describe a useful formalism that provides a good analysis of counterfactual mugging. Will keep trying later.

comment by Ronny Fernandez (ronny-fernandez) · 2012-04-23T05:02:55.269Z · LW(p) · GW(p)

Here's a crack at the coin problem.

Firstly, TDT seems to answer correctly under one condition: if P(some agent will use my choice as evidence about how I am going to act in these situations and make this offer) = 0, then certainly our AI shouldn't give Omega any money. On the other hand, if P(some agent will use my choice as evidence about how I am going to act in these situations and make this offer) = 0.5, then the expected utility = -1000 + 0.5(0.5(1,000,000) + 0.5(-1000)). So my general solution is this: add a node that represents the probability of repeating one of these trials, and keep track of its value like any other node, carefully and gradually. Giving money would only be winning if you had the opportunity to make more money later because Omega or someone else knows you give money; otherwise you shouldn't give money.
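
(A sketch of that rule with the repeat probability as an explicit parameter; the stakes are the post's $1000/$1,000,000, and treating the repeat as a single extra trial is a simplification:)

    def pay_now(p_repeat, stake=1000, prize=1_000_000):
        """Pay iff the expected value of being a payer-type agent, counting the chance
        that a trial like this recurs, is positive. Treating the repeat as a single
        extra trial is a simplification for illustration."""
        ev_per_future_trial = 0.5 * prize - 0.5 * stake    # 499,500 with these stakes
        return -stake + p_repeat * ev_per_future_trial > 0

    print(pay_now(0.0))     # False -- no future trials, nothing to gain by paying
    print(pay_now(0.5))     # True
    print(pay_now(0.001))   # False -- the threshold is p_repeat > 1000/499500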

comment by chaosmosis · 2012-04-12T16:15:29.376Z · LW(p) · GW(p)

I THINK I SOLVED ONE - EDIT - Sorry, not quite.

"Suppose Omega (the same superagent from Newcomb's Problem, who is known to be honest about how it poses these sorts of dilemmas) comes to you and says: "I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads. The coin came up heads - can I have $1000?" Obviously, the only reflectively consistent answer in this case is "Yes - here's the $1000", because if you're an agent who expects to encounter many problems like this in the future, you will self-modify to be the sort of agent who answers "Yes" to this sort of question - just like with Newcomb's Problem or Parfit's Hitchhiker."

To me, this seems like losing. I disagree that a reflectively consistent agent would give Omega money in these instances.

Since Omega always tells the truth, we should program ourselves to give him $1000 if and only if we don't know that the coin came up heads. If we know that the coin came up heads, it no longer makes sense to give him $1000 dollars.

This in no way prevents us from maximizing utility. An opponent of this strategy would contend that this prevents us from receiving the larger cash prize. That assertion would be false, because this strategy only occurs in situations where we have literally zero possibility of receiving any money from Omega. "Not giving Omega $1000" in instance H does not mean that we wouldn't give Omega $1000 in instances where we don't know whether or not H.

The fact that Omega has already told us it came up heads completely precludes any possibility of a reward. The fact that we choose to not give Omega money in situations where we know the coin comes up heads in no way precludes us from giving him $1000 in scenarios where we have a chance that we'll receive the larger cash prize. By not giving $1000 when H, there is still no precedent set which precludes the larger reward. Therefore we should give Omega $1000 if and only if we do not know that the coin landed on heads.

Yay. That seems too easy, I'm kind of worried I made a super obvious logical mistake. But I think it's right.

Sorry for not using the quote feature, but I'm awful at editing. I even tried using the sandbox and couldn't get it right.

EDIT: So, unfortunately, I don't think this solves the issue. It technically does, but it really amounts to more of a reason that Omega is stupid and should rephrase his statements. Instead of Omega saying "I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads" he should say "I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads and I told you that the coin came up heads".

So it's a minor improvement but it's nothing really important.

Replies from: thomblake
comment by thomblake · 2012-04-12T16:22:06.265Z · LW(p) · GW(p)

Just prefix the quote with a single greater-than sign.

Replies from: chaosmosis
comment by chaosmosis · 2012-04-12T16:27:28.046Z · LW(p) · GW(p)

I did, but I don't know how to stop quoting. I can start but I don't know how to stop.

Also, one of the times I tried to quote it I ended up with an ugly horizontal scroll bar in the middle of the text.

Replies from: thomblake
comment by thomblake · 2012-04-12T16:29:18.855Z · LW(p) · GW(p)

A blank newline between the quote and non-quote will stop the quote.

>quoted text
more quoted text

non-quoted text

comment by Matt_Young · 2011-06-17T00:51:10.543Z · LW(p) · GW(p)

Suppose Omega (the same superagent from Newcomb's Problem, who is known to be honest about how it poses these sorts of dilemmas) comes to you and says:

"I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads. The coin came up heads - can I have $1000?"

Obviously, the only reflectively consistent answer in this case is "Yes - here's the $1000", because if you're an agent who expects to encounter many problems like this in the future, you will self-modify to be the sort of agent who answers "Yes" to this sort of question - just like with Newcomb's Problem or Parfit's Hitchhiker.

Compute the probabilities P(n) that this deal will be offered to you again exactly n times in the future. Sum 499,500 * P(n) * n over all n, and agree to pay if the sum is greater than 1,000.
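
(A sketch of that rule, spelling out where the 499,500 comes from; the example probabilities are arbitrary:)

    def should_pay(p, stake=1000, prize=1_000_000):
        """p[n] = probability that the deal is offered again exactly n more times.
        Each future offer is worth 0.5*prize - 0.5*stake = 499,500 to a payer-type agent."""
        ev_per_offer = 0.5 * prize - 0.5 * stake
        return sum(ev_per_offer * p_n * n for n, p_n in enumerate(p)) > stake

    # Even a 1% chance of exactly one future offer clears the $1000 bar:
    print(should_pay([0.99, 0.01]))   # True, since 0.01 * 499,500 = 4,995 > 1,000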

Replies from: CuSithBell
comment by CuSithBell · 2011-06-17T01:12:45.883Z · LW(p) · GW(p)

What if it's offered just once - but if the coin comes up tails, Omega simulates a universe in which it came up heads, asks you this question, then acts based on your response? (Do whatever you like to ignore anthropics... say, Omega always simulates the opposite result, at the appropriate time.)

Replies from: Matt_Young
comment by Matt_Young · 2011-06-17T01:23:18.317Z · LW(p) · GW(p)

To be clear:

  • Are both I and my simulation told this is a one-time offer?

  • Is a simulation generated whether the real coin is heads or tails?

  • Are both my simulation and I told that one of us is a simulation?

  • Does the simulation persist after the choice is made?

I suppose the second and fourth points don't matter particularly... as long as the first and third are true, then I consider it plus EV to pay the $1000.

Replies from: CuSithBell
comment by CuSithBell · 2011-06-17T01:44:12.970Z · LW(p) · GW(p)

Should you pay the money even if you're not told about the simulations, because Omega is a good predictor (perhaps because it's using simulations)?

Replies from: Matt_Young
comment by Matt_Young · 2011-06-17T01:48:12.116Z · LW(p) · GW(p)

If I judge the probability that I am a simulation or equivalent construct to be greater than 1/499500, yes.

(EDIT: Er, make that 1/999000, actually. What's the markup code for strikethrough 'round these parts?)

(EDIT 2: Okay, I'm posting too quickly. It should be just 10^-6, straight up. If I'm a figment then the $1000 isn't real disutility.)

(EDIT 3: ARGH. Sorry. 24 hours without sleep here. I might not be the sim, duh. Correct calculations:

u(pay|sim) = 10^6; u(~pay|sim) = 0; u(pay|~sim) = -1000; u(~pay|~sim) = 0

u(~pay) = 0; u(pay) = P(sim) 10^6 - P(~sim) (1000) = 1001000 * P(sim) - 1000

pay if P(sim) > 1/1001.

Double-checking... triple-checking... okay, I think that's got it. No... no... NOW that's got it.)
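
(A quick check of the break-even point in that last calculation, as a sketch using the same utilities:)

    from fractions import Fraction

    def u_pay(p_sim):
        # u(pay|sim) = 10^6, u(pay|~sim) = -1000, u(~pay) = 0 either way.
        return p_sim * 10**6 - (1 - p_sim) * 1000

    print(u_pay(Fraction(1, 1001)))       # 0 -- exactly break-even
    print(u_pay(Fraction(1, 1000)) > 0)   # True -- pay whenever P(sim) > 1/1001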

comment by PhilGoetz · 2009-08-06T01:16:48.775Z · LW(p) · GW(p)

Another stumper was presented to me by Robin Hanson at an OBLW meetup. Suppose you have ten ideal game-theoretic selfish agents and a pie to be divided by majority vote. Let's say that six of them form a coalition and decide to vote to divide the pie among themselves, one-sixth each. But then two of them think, "Hey, this leaves four agents out in the cold. We'll get together with those four agents and offer them to divide half the pie among the four of them, leaving one quarter apiece for the two of us. We get a larger share than one-sixth that way, and they get a larger share than zero, so it's an improvement from the perspectives of all six of us - they should take the deal." And those six then form a new coalition and redivide the pie. Then another two of the agents think: "The two of us are getting one-eighth apiece, while four other agents are getting zero - we should form a coalition with them, and by majority vote, give each of us one-sixth."

How I would approach this problem:

Suppose that it is easier to adjust the proportions within your existing coalitions than to switch coalitions. An agent will not consider switching coalitions until it cannot improve its share in its present coalition. Therefore, any coalition will reach a stable configuration before you need consider agents switching to another coalition. If you can show that the only stable configuration is an equal division, then there will be no coalition-switching.

You can probably show that any agent receiving less than an equal share can obtain a larger share by switching to a different coalition. Assume the other agents know this proof. You may then be able to show that they can hold onto a larger share by giving that agent its fair share than by letting it quit the coalition. You may need to use derivatives to do this. Or not.

comment by MrHen · 2009-07-22T17:50:36.113Z · LW(p) · GW(p)

"I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads. The coin came up heads - can I have $1000?"

Err... pardon my noobishness but I am failing to see the game here. This is mostly me working it out audibly.

A less Omega version of this game involves flipping a coin, getting $100 on tails, losing $1 on heads. Using humans, it makes sense to have an arbiter holding $100 from the Flipper and $1 from the Guesser. With this setup, the Guesser should always play.

If the Flipper is Omega and offered the same game with the same fair arbiter there is no reason to not play. If Omega was a perfect predictor and knew what the coin would do before flipping it, should we play? If Omega commits to playing the game regardless of the prediction, yes, we should play.

If the arbiter is removed and Omega stands in as the arbiter, we should still play because it is assumed that Omega is honest and will pay out if tails appears. Even if we prepay before the coin flip, we should still play.

If the Flipper flips the coin before we prepay the arbiter, it should not matter. This is equivalent to the scenario of Omega being a perfect predictor.

The only two changes remaining are:

  • Us knowing the coin flip before we agree to play
  • Us not paying before we see the coin flip

The latter assumes we could renege on payment after seeing the coin but I highly doubt Omega would play the game with someone like this since this would be known to a perfect predictor. This means we can completely eliminate the arbiter.

This leaves us at the following scenario:

I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads. I know the result of the coin but will wait for you to agree to the game before I tell you what it is. Do you want to play?

The answer is "Yes." Why does it matter if Omega blurts out the answer beforehand? Because we know we will "lose"?

In my opinion this is a trivial problem. If we assume that Omega is (a) fair and (b) accurate we would always play the game. Omega is predefined to not take advantage of us. We just got unlucky, which is perfectly acceptable as long as we do not know the answer beforehand.

So... what am I missing? It seems like there is a mental warning when imagining myself before Omega and handing him $1000 when I "never had a shot". But I did have a shot. I would never pay anyone other than Omega, but I am assuming Omega is being completely honest.

Why would anyone answer "No"? The basic answer, "Because you do not want to lose $1000" seems completely irrational to me. I can see why it would appear rational, but Omega's definition makes it irrational.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-07-22T20:28:29.999Z · LW(p) · GW(p)

See counterfactual mugging for an extended discussion in comments.

Replies from: MrHen
comment by MrHen · 2009-07-22T20:35:26.135Z · LW(p) · GW(p)

Thanks.

comment by Richard_Kennaway · 2009-07-20T16:35:20.972Z · LW(p) · GW(p)

If the ten pie-sharers are to be more than a theoretical puzzle, and instead something with applicability to real decision problems, then certain expansions of the problem suggest themselves. For example, some of the players might conspire to forcibly exclude the others entirely. And then a subset of the conspirators does the same.

This is the plot of "For a Few Dollars More".

How do criminals arrange these matters in real life?

Replies from: RobinZ
comment by RobinZ · 2009-07-20T16:43:23.095Z · LW(p) · GW(p)

Dagnabbit, another movie I have to see now!

(i.e. thanks for the ref!)

Replies from: Jotaf
comment by Jotaf · 2009-07-21T03:52:16.067Z · LW(p) · GW(p)

The Dark Knight has an even better example - in the bank robbery scene, each subgroup excludes only one more member, until the only man left is... That's enough of a spoiler I guess.

Replies from: RobinZ
comment by RobinZ · 2009-07-21T13:21:40.268Z · LW(p) · GW(p)

Yeah ... guess which scene I came in during the middle of? :P

comment by paulfchristiano · 2010-12-19T19:32:13.683Z · LW(p) · GW(p)

Is this equivalent to the modified Newcomb's problem?

Omega looks at my code and produces a perfect copy of me which it puts in a separate room. One of us (decided by the toss of a coin if you like) is told, "if you put $1000 in the box, I will give $1000000 to your clone."

Once Omega tells us this, we know that putting $1000 in the box won't get us anything, but if we are the sort of person who puts $1000 in the box then we would have gotten $1000000 if we were the other clone.

What happens now if Omega is able to change my utility function? Maybe I am a paperclipper, but my copy has been modified so that every instance of "paperclip" in its decision calculus has been replaced by "paperweight" (or more precisely, the copy is what would have happened if my entire history had been modified by replacing paperclips by paperweights). Omega then offers one of the copies of me, chosen randomly, the choice between producing 1000 paperclips (resp. paperweights) and 1000000 paperweights (resp. paperclips). This seems like just as reasonable a question, if changing an agent's utility function makes sense. But now suppose I remove the randomness, and just always give the paperweighter the choice between making 1000 paperweights and 1000000 paperclips. Now I can't find a reasonable argument for making the paperclips.

Replies from: dankane
comment by dankane · 2010-12-19T21:56:11.967Z · LW(p) · GW(p)

There is at least a slight difference, in that in the stated version it is at least questionable whether any version of you is actually getting anything useful out of giving Omega money.

comment by Jayson_Virissimo · 2009-07-20T21:03:16.020Z · LW(p) · GW(p)

"Now of course you wish you could answer "Yes", but as an ideal game theorist yourself, you realize that, once you actually reach town, you'll have no further motive to pay off the driver."

Can't you contract your way out of this one?

Replies from: Nanani
comment by Nanani · 2009-07-21T04:03:14.005Z · LW(p) · GW(p)

Indeed. It would seem sufficient to push a bit further and take into account the desirability of upholding verbal contracts. Unless, of course, the driver is so harsh as to drive away over a mere second of considering non-payment.

comment by [deleted] · 2009-07-20T04:35:23.217Z · LW(p) · GW(p)

Obviously, the only reflectively consistent answer in this case is "Yes - here's the $1000", because if you're an agent who expects to encounter many problems like this in the future, you will self- modify to be the sort of agent who answers "Yes" to this sort of question - just like with Newcomb's Problem or Parfit's Hitchhiker.

But I don't have a general theory which replies "Yes".

If you think being a rational agent includes an infinite ability to modify oneself, then the game has no solution because such an agent would be unable to guarantee the new trait's continued, unmodified existence without sacrificing the rationality that is a premise of the game.

So, for the game to be solvable, the self-modification ability must have limits, and the limits appear as parameters in the formalism.

Replies from: Liron
comment by Liron · 2009-07-20T08:20:26.667Z · LW(p) · GW(p)

An agent can guarantee the persistence of a trait by self-modifying into code that provably can never lead to the modification of that trait. A trivial example is that the agent can self-modify into code that preserves a trait and can't self-modify.

Replies from: None
comment by [deleted] · 2009-07-20T16:54:28.953Z · LW(p) · GW(p)

But more precisely, an agent can guarantee the persistence of a trait only "by self-modifying into code that provably can never lead to the modification of that trait." Anything tied to rationality that guarantees the existence of a conforming modification at the time of offer must guarantee the continued existence of the same capacity after the modification, making the proposed self-modification self-contradictory.

comment by Ishaan · 2013-10-11T00:38:39.673Z · LW(p) · GW(p)

"I can predict that if (the other agent predicts) I choose strategy X, then the other agent will implement strategy Y, and my expected payoff is Z"

...are we allowed to use self-reference?

"I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads. The coin came up heads - can I have $1000?"

X = "if the other agent is trustworthy (implicitly: I can trust that the other agent analyzed past-me sufficiently to know that I always use strategy X) then I agree to the bet.

Z = whatever the expected payoff of the bet was before the flip.

In effect, I only agree to a counterfactual bet if:

I know that you are aware that I've pre-committed to taking bets with those who are aware that I've pre-committed to taking bets with those who are aware I've pre-committed to taking bets with those who are aware ...

comment by CCC · 2012-10-18T13:18:28.092Z · LW(p) · GW(p)

You might find some recent work on the iterated prisoner's dilemma interesting.

comment by VKS · 2012-04-12T17:48:37.975Z · LW(p) · GW(p)

In an undergraduate seminar on game theory I attended, it was mentioned (in an answer to a question posed to the presenter) that, when computing a payoff matrix, the headings in the rows and columns aren't individual actions but entire strategies; in other words, it's as if you decide what you will do in all circumstances at the beginning of the game. This is because, when evaluating strategies, nobody cares when you decide, so you might as well act as if you had them all planned out in advance. So in that spirit, I'm going to use the following principle:

An agent should choose the strategy that it predicts gives the greatest outcome, weighted by the probability of that outcome, etc.

to approach every one of these problems.

On Omega's coin flip: Omega has given you the function that will be applied to your strategy; you just apply it, and the result is bigger if you answer "yes". Realistically, there's no way that Omega has provided nearly enough information for you to trust him - but whatever, that's the premise.

On Parfit's hitchhiker: Again, Ekman has access to your strategy. Just pick one that does benefit him, since those are the only ones whose outcome doesn't involve you dying. If you don't have $100, find something else you could give him.

On the Democratic Pie: Well, your problem has no strong Nash equilibrium. No solution is going to be stable. I don't really know how this works when you have more than two players (undergraduate, remember), but I suggest looking into not using a pure strategy. If each voter votes randomly, but chooses the probability of his votes appropriately, things work out a little better. You can then compute which semi-random strategy gets you the highest expected size of your slice, etc. Find a book; I don't know how this works. (If I wanted to solve this problem on my own, I would try to do it with 8 coconuts first, rather than a continuum of cake.) (This also spells Doom for the AIs respecting whatever Constitution is given them. Not just Doom, Somewhat Unpredictable Doom.)

On the Prisoner's Dilemma's Infinite Regress: I don't know.

Replies from: VKS
comment by VKS · 2012-04-12T18:14:12.643Z · LW(p) · GW(p)

Further elaboration on the cake problem's discrete case:

Suppose there are two slices of cake, and three people who choose how they will be distributed by majority vote. Nobody votes to give both slices to themselves alone, since they can't get a majority that way. So everybody just votes to get one slice for themselves, and randomly decides who gets the other slice. There can be ties, but you're getting an expected 2/3 of a slice whenever a vote finally isn't a tie.

To get the continuous case:

It's tricky, but find a way to extend the previous reasoning to n slices and m players, and then take the limit as n goes to infinity. The voting sessions do get longer and longer before consensus is reached, but even when consensus is forever away, you should be able to calculate your expectation of each outcome...

comment by duckduckMOO · 2011-12-09T00:40:15.431Z · LW(p) · GW(p)

For the "the coin came up heads - can I have $1000?" case, does it reduce to this?

"I can predict that if (the other agent predicts) I choose strategy X: for any gamble I'd want to enter if the consequences were not already determined, I will pay when i lose, then the other agent will implement strategy Y: letting me play, and my expected payoff is Z:999,000",

"I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads. The coin came up heads - can I have $1000?" Obviously, the only reflectively consistent answer in this case is "Yes - here's the $1000", because if you're an agent who expects to encounter many problems like this in the future, you will self-modify to be the sort of agent who answers "Yes" to this sort of question - just like with Newcomb's Problem or Parfit's Hitchhiker."

I think, more generally, that if there's any possibility you'll encounter anything like this in the future, you should precommit to paying up. Precommitting to paying up acts as an entry cost for positive-utility gambles.

And it would definitely have to be utility rather than, say, money, because rich enemies could bankrupt you if you were willing to take 51:49 bets of all your stakes vs. some of theirs. This might be trivial, but for a second I wasn't sure if this would make you more open to exploitation than if you would take that potential bankruptcy risk anyway.

Though I'm not sure provisioning for this kind of thing would be a good idea. Who's going to do this kind of thing anyway, instead of just setting forth the terms of the bet? When are you going to know, as is stipulated here, that there was a decent chance of you getting anything and that they haven't just somehow tricked you? A simple solution might just be to refuse this kind of thing altogether, and then people who want to bet with you will have to ask in advance. I can't think of any scenario where someone would have a legitimate reason to bet like this rather than normally.

comment by Matt_Young · 2011-06-17T00:39:05.376Z · LW(p) · GW(p)

Suppose you have ten ideal game-theoretic selfish agents and a pie to be divided by majority vote....

...Every majority coalition and division of the pie, is dominated by another majority coalition in which each agent of the new majority gets more pie. There does not appear to be any such thing as a dominant majority vote.

I suggest offering the following deal at the outset:

"I offer each of you the opportunity to lobby for an open spot in a coalition with me, to split the pie equally six ways, formed with a mutual promise that we will not defect, and if any coalition members do defect, we agree to exclude them from future dealings and remain together as a voting bloc, offering the defectors' spots to the remaining agents not originally aligned with us, for a 1/6 + epsilon share, the cost of the excess portion divided among those of us remaining. I will award spots in this coalition to the five of you who are most successful at convincing me you will adhere to these terms."

comment by Matt_Young · 2011-06-17T00:13:15.111Z · LW(p) · GW(p)

Here's yet another problem whose proper formulation I'm still not sure of, and it runs as follows. First, consider the Prisoner's Dilemma. Informally, two timeless decision agents with common knowledge of the other's timeless decision agency, but no way to communicate or make binding commitments, will both Cooperate because they know that the other agent is in a similar epistemic state, running a similar decision algorithm, and will end up doing the same thing that they themselves do. In general, on the True Prisoner's Dilemma, facing an opponent who can accurately predict your own decisions, you want to cooperate only if the other agent will cooperate if and only if they predict that you will cooperate. And the other agent is reasoning similarly: They want to cooperate only if you will cooperate if and only if you accurately predict that they will cooperate.

But there's actually an infinite regress here which is being glossed over - you won't cooperate just because you predict that they will cooperate, you will only cooperate if you predict they will cooperate if and only if you cooperate. So the other agent needs to cooperate if they predict that you will cooperate if you predict that they will cooperate... (...only if they predict that you will cooperate, etcetera).

On the Prisoner's Dilemma in particular, this infinite regress can be cut short by expecting that the other agent is doing symmetrical reasoning on a symmetrical problem and will come to a symmetrical conclusion, so that you can expect their action to be the symmetrical analogue of your own - in which case (C, C) is preferable to (D, D). But what if you're facing a more general decision problem, with many agents having asymmetrical choices, and everyone wants to have their decisions depend on how they predict that other agents' decisions depend on their own predicted decisions? Is there a general way of resolving the regress?

Yes. You can condition on two prior probabilities: that an agent will successfully predict your actual action, and that an agent will respond in a particular way based on the action they predict you to take. For the solution in the case of the Truly Iterated Prisoner's Dilemma, see here.

(EDIT, 6/18/2011:

On further consideration, my assertion -- that the indicated solution to the Prisoner's Dilemma constitutes a general method for resolving infinite regress in the full class of problems specified -- is a naive oversimplification. The indicated solution to a specific dilemma is suggestive of an area of solution space to search for the general solution or solutions to specific similar problems, but considerable work remains to be done before a general solution to the problem class can be justifiably claimed. I'll analyze the full problem further and see what I come up with.)

comment by diegocaleiro · 2011-03-16T10:45:11.513Z · LW(p) · GW(p)

I'm curious as to what extent Timeless Decision Theory is comparable to this proposal by Arntzenius: http://uspfiloanalitica.googlegroups.com/web/No+regrets+%28Arntzenius%29.pdf?gda=0NZxMVIAAABcaixQLRmTdJ3-x5P8Pt_4Hkp7WOGi_UK-R218IYNjsD-841aBU4P0EA-DnPgAJsNWGgOFCWv8fj8kNZ7_xJRIVeLt2muIgCMmECKmxvZ2j4IeqPHHCwbz-gobneSjMyE

Replies from: Manfred
comment by Manfred · 2011-03-16T11:28:41.696Z · LW(p) · GW(p)

It's different because the problems it talks about aren't determined by what decision is made in the end, but by the state of mind of the person making the decision (in a particular and perhaps quite limited way).

You could probably show that a mixed-strategy-aware problem could make the proposed theory fail in a similar way to how causal decision theory fails (i.e. is reflectively inconsistent) on Newcomb's problem. But it might be easy to extend TDT in the same way to resolve that.

comment by JoshBurroughs · 2011-02-17T21:07:34.835Z · LW(p) · GW(p)

Agents A & B are two TDT agents playing some prisoner's dilemma scenario. A can reason:

u(c(A)) = P(c(B))u(C,C) + P(d(B))u(C,D)

u(d(A)) = P(c(B))u(D,C) + P(d(B))u(D,D)

( u(X) is utility of X, P() is probability, c() & d() are cooperate & defect predicates )

A will always pick the option with higher utility, so it reasons B will do the same:

u'(c(B)) > u'(d(B)) --> c(B)

(u'() is A's estimate of B's utility function)

But A can't perfectly predict B (even though it may be quite good at it), so A can represent this uncertainty as a random variable e:

u'(c(B)) + e > u'(d(B)) - e --> c(B)

In fact, we can give e a parameter, N, which is given by the depth of recursion, like a game of telephone:

u'(c(B)) + e(N) > u'(d(B)) - e(N) --> c(B)

Intuitively, it seems e(N) will tend to overwhelm u() for high enough N (since utilities don't increase as you recurse). At that recursion depth:

p(c(B)) = p(d(B))

so:

u(c(A)) = 0.5 u(C,C) + 0.5 u(C,D)

u(d(A)) = 0.5 u(D,C) + 0.5 u(D,D)

u(D,C) > u(C,C) > u(D,D) > u(C,D)

so u(d(A)) > u(c(A)), meaning defection at the recursive depth where uncertainty overwhelms other considerations.

Does this mean a TDT agent must revert to CDT if it is not smart enough (or does not believe its opponent is smart enough) to transform the recursion to a closed-form solution?
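
To make the recursion concrete, here is a minimal Python sketch of the argument above (the payoff numbers are hypothetical; the only assumption carried over is that at the cutoff depth the other agent is treated as a coin flip):

```python
# Hypothetical PD payoffs with the usual ordering u(D,C) > u(C,C) > u(D,D) > u(C,D).
U = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def p_cooperate(depth, max_depth):
    """Estimated probability that the other agent cooperates, given that
    it models us by the same reasoning one level deeper."""
    if depth >= max_depth:
        return 0.5  # e(N) has swamped the utilities: treat the other agent as a coin flip
    q = p_cooperate(depth + 1, max_depth)  # its estimate of our P(C)
    eu_c = q * U[('C', 'C')] + (1 - q) * U[('C', 'D')]
    eu_d = q * U[('D', 'C')] + (1 - q) * U[('D', 'D')]
    return 1.0 if eu_c > eu_d else 0.0

for cutoff in range(4):
    print(cutoff, p_cooperate(0, cutoff))  # 0.5 at cutoff 0, then 0.0 for every deeper cutoff
```

With a 50/50 assumption at the cutoff, defection is the best reply there, and that defection propagates back up through every level of the recursion - which is exactly the reversion the question worries about.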

Replies from: JoshBurroughs
comment by JoshBurroughs · 2011-02-17T21:15:42.067Z · LW(p) · GW(p)

A simpler way to say all this is "Pick a depth where you will stop recursing (due to growing uncertainty or computational limits) and at that depth assume your opponent acts randomly." Is my first attempt needlessly verbose?

comment by dankane · 2010-12-19T00:19:43.034Z · LW(p) · GW(p)

I think I have a general theory that gives the "correct" answer to Omega problem here and Newcomb's problem.

The theory depends on the assumption that Omega makes his prediction by evaluating the decision of an accurate simulation of you (or does something computationally equivalent, which should be the same). In this case there are two of you, real-you and simulation-you. Since you are identical to your simulation the two of you can reasonably be assumed to share an identity and thus have common goals (presumably that real-you gets the money because simulation-you will likely not exist once the simulation ends). Additionally since it is an accurate simulation you have no way to tell whether you are real-you or simulation-you. So in the problem here:

  • There's a 50% chance that you are really the simulation, in which case handing over the $1000 has a 50% chance of earning real-you $1000000, and a 50% chance of doing nothing.

  • There's a 50% chance that you are actually real-you, in which case giving Omega $1000 has a 100% chance of losing $1000.

So giving Omega the money nets real-you an expected $249000.
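
A quick sketch of that arithmetic (assuming one real you, one simulation, and that the $1000 handed over counts as a loss to the shared identity whichever instance pays it):

```python
# Expected value, to the shared identity, of a policy of paying.
p_sim = 0.5  # chance that this instance is the simulation
ev_pay = p_sim * (-1_000 + 0.5 * 1_000_000) + (1 - p_sim) * (-1_000)
print(ev_pay)  # 249000.0
```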

On the other hand, this might just say that you should put off deciding for as long as possible so that if you are the simulation you don't cease to exist when the simulation ends.

Anyway, this also handles Newcomb's problem similarly. Unfortunately it seems to me to be a little too silly to be a real decision theory.

Replies from: Manfred
comment by Manfred · 2010-12-19T00:37:53.041Z · LW(p) · GW(p)

Some interesting things to think about:

Why is it a 50/50 chance? Why not a 1% chance you're in the simulation, or 99%?

This approach requires that Omega work a certain way. What if Omega didn't work that way?

Why would simulation you care what happens to real you? Why not just care about your own future?

A way of resolving problems involving Omega would be most useful if it wasn't just a special-purpose tool, but was instead the application of logic to ordinary decision theory. What could you do to ordinary decision theory that would give you a satisfactory answer in the limit of the other person becoming a perfect predictor?

Replies from: dankane
comment by dankane · 2010-12-19T01:05:31.153Z · LW(p) · GW(p)

1) I agree that this depends on Omega operating a certain way, but as I argue here: http://lesswrong.com/lw/nc/newcombs_problem_and_regret_of_rationality/35zm?c=1, your answer should depend on how Omega operates.

2) 50/50 assuming that there is one real you and one simulation.

3) The simulation should care because your goal as a self-interested being to "do what is best for me" should really be "do what is best for people with my identity" (because how else do you define "me"?). Your simulation has the same identity (except for maybe the ten seconds or so of experience it has had since the simulation started, which doesn't feel like it should matter), so it should therefore also care about your outcome.

4) This is a modification of classical decision theory, modifying your objective to also care about the well-being of your simulations/ people that you are simulations of.

Replies from: Manfred
comment by Manfred · 2010-12-19T02:42:27.405Z · LW(p) · GW(p)

1) Interesting. But you didn't really argue why your answer should depend on Omega, you just illustrated what you would do for different Omegas. By the statement of the problem, the outcome doesn't depend on how Omega works, so changing your answers for different Omegas is exactly the same as having different answers for the same outcome. This contradicts ordinary decision theory.

I suspect that you are simply not doing Newcomb's problem in the other comments - you are trying to evaluate the chances of Omega being wrong, when you are directly given that Omega is always right. Even in any simulation of Omega's, simulated Omega already knows whether simulated you will 1-box or 2-box, or else it's no longer Newcomb's problem.

2) Why is this assumption better than some other assumption? And if it isn't, why should we assume something at all?

3) Well, you could define "me" to involve things like continuity or causal interactions to resolve the fact that cloning yourself doesn't give you subjective immortality.

4) But it's not broadly applicable, it's just a special purpose tool that depends not only on how Omega works, but also on how many of you it simulates under what conditions, something that you never know. It also has a sharp discontinuity as Omega becomes an imperfect predictor, rather than smoothly approaching a limit, or else it quickly becomes unusable as subjective questions about the moral value of things almost like you and maybe not conscious crop up.

Replies from: dankane
comment by dankane · 2010-12-19T04:34:02.020Z · LW(p) · GW(p)

1) I argue that my answer should depend on Omega because I illustrate different ways that Omega could operate that would reasonably change what the correct decision was. The problem statement can't specify that my answer shouldn't depend on how Omega works. I guess it can specify that I don't know anything about Omega other than that he's right, and I would have to make assumptions based on what seemed like the most plausible way for that to happen. Also, many statements of the problem don't specify that Omega is always right. Besides, even if that were the case, how could I possibly know for sure that Omega was always right (or that I was actually talking to Omega)?

2) Maybe it's not the best assumption. But if Omega is running simulations, you should have some expectation of the probability of you being a simulation, and making it proportional to the number of simulations vs. real you's seems reasonable.

3) Well, when simulated-you is initiated, it is psychologically contiguous with real-you. After the not much time that the simulation probably takes, it hasn't diverged that much.

4) Why should it have a sharp discontinuity when Omega becomes an imperfect predictor? This would only happen if you had a sharp discontinuity about how much you cared about copies of yourself as they became less than perfect. Why should one discontinuity be bad but not the other?

OK. Perhaps this theory isn't terribly general but it is a reasonably coherent theory that extends to some class of other examples and produces the "correct" answer in these ones.

Replies from: AlephNeil, Manfred
comment by AlephNeil · 2010-12-19T08:31:08.090Z · LW(p) · GW(p)

Maybe it's not the best assumption. But if Omega is running simulations, you should have some expectation of the probability of you being a simulation, and making it proportional to the number of simulations vs. real you's seems reasonable.

I don't think this is going to work.

Consider a variation of 'counterfactual mugging' where Omega asks for $101 if heads but only gives back $100 if (tails and) it predicts the player would have paid up. Suppose that, for whatever reason, Omega runs 2 simulations of you rather than just 1. Then by the logic above, you should believe you are a simulation with probability 2/3, and so because 2x100 > 1x101, you should pay up. This is 'intuitively wrong' because if you choose to play this game many times then Omega will just take more and more of your money.
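
A rough sketch of the bookkeeping in this variant (assuming, as above, two simulations per tails flip, and the $101/$100 stakes):

```python
# The "2x100 > 1x101" instance-weighted reasoning versus the long-run accounting.
flawed_instance_weighted = (2/3) * 100 + (1/3) * (-101)  # ~ +33: says "pay up"
per_real_game = 0.5 * (-101) + 0.5 * 100                 # -0.5: a steady loss per game played
print(flawed_instance_weighted, per_real_game)
```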

In counterfactual mugging (the original version, that is), to be 'reflectively consistent', you need to pay up regardless of whether Omega's simulation is in some obscure sense 'subjectively continuous' with you.

Let me characterize your approach to this problem as follows:

  1. You classify each possible state of the game as either being 'you' or 'not you'.
  2. You decide what to do on the basis of the assumption that you are a sample of one drawn from a uniform distribution over the set of states classified as being 'you'.

Note that this approach gets the answer wrong for the absent-minded driver problem unless you can somehow force yourself to believe that the copy of you at the second intersection is really a mindless robot whose probability of turning is (for whatever reason) guaranteed to be the same as yours.

Replies from: dankane, dankane
comment by dankane · 2010-12-19T17:16:35.939Z · LW(p) · GW(p)

Though perhaps you are right and I need to be similarly careful to avoid counting some outcomes more often than others, which might be a problem, for example, if Omega ran different numbers of simulations depending on the coin flip.

comment by dankane · 2010-12-19T16:48:15.247Z · LW(p) · GW(p)

So if Omega simulates several copies of me, it can't be that each of them, by itself, has the power to make Omega decide to give real-me the money. So I have to give Omega money twice in simulation to get real-me the money once.

As for the absent-minded driver problem, the issue here is that the probabilistic approach overcounts probability in some situations but not in others. It's like playing the following game:

Phase 1: I flip a coin. If it was heads, there is a Phase 1.5 in which you get to guess its value and then are given an amnesia pill. Phase 2: You get to guess whether the coin came up heads, and if you are right you get $1.

Using the bad analysis from the absent-minded driver problem, your strategy is to always guess heads with probability p. Suppose that there is a probability alpha that you are in Phase 1.5 when you guess, and 1-alpha that you are in Phase 2.

Well, your expected payoff is then alpha*p + (1-alpha)*(1/2).

This is clearly silly. It happens because, if the coin came up heads, your strategy gets counted twice. I guess to fix this in general, you need to pick a single point at which you do your averaging over possible yous (for example, only count Phase 2).

For the absent-minded driver problem, you could either choose at the first intersection you come to, in which case you have

1*(p^2 + 4p(1-p)) + 0*(p + 4(1-p))

Or the last intersection you come to in which case alpha = 1-p and you have

(1-p)*0 + p*(p + 4(1-p))

(the 0 because if X is your last intersection you get 0). Both give the correct answer.
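
A quick check (a sketch, using the usual absent-minded driver payoffs of 0 for exiting at X, 4 for exiting at Y, and 1 for continuing past both) that both weightings reduce to the standard planning payoff:

```python
# p = P(continue) at each intersection.
def planning(p): return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1
def first(p):    return 1 * (p * p + 4 * p * (1 - p)) + 0 * (p + 4 * (1 - p))
def last(p):     return (1 - p) * 0 + p * (p + 4 * (1 - p))

grid = [i / 1000 for i in range(1001)]
assert all(abs(planning(p) - first(p)) < 1e-12 and
           abs(planning(p) - last(p)) < 1e-12 for p in grid)
best = max(grid, key=planning)
print(best, planning(best))  # ~0.667, ~1.333: the planning-optimal policy
```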

Replies from: AlephNeil
comment by AlephNeil · 2010-12-19T18:13:45.844Z · LW(p) · GW(p)

It's like playing the following game:

Phase 1: I flip a coin. If it was heads, there is a Phase 1.5 in which you get to guess its value and then are given an amnesia pill. Phase 2: You get to guess whether the coin came up heads, and if you are right you get $1.

This is (a variant of) the Sleeping Beauty problem. I'm guessing you must be new here - this is an old 'chestnut' that we've done to death several times. :-)

(1-p)*0 + p*(p + 4(1-p))

(the 0 because if X is your last intersection you get 0). Both give the correct answer.

Good stuff. But now here's a really stupid idea for you:

Suppose you're going to play Counterfactual Mugging with an Omega who (for argument's sake) doesn't create a conscious simulation of you. But your friend Bill has a policy that if you ever have to play counterfactual mugging and the coin lands tails, then he will create a simulation of you as you were just prior to the game and give your copy an experience indistinguishable from the experience you would have had of Omega asking you for money (as though the coin had landed heads). Then, following your approach, surely you ought now to pay up (whereas you wouldn't have previously)? Despite the fact that your friend Bill is penniless, and his actions have no effect on Omega or your payoff in the real world?

Replies from: dankane
comment by dankane · 2010-12-19T18:30:14.179Z · LW(p) · GW(p)

I don't see why you think I should pay if Bill is involved. Knowing Bill's behavior, I think that there's a 50% chance that I am real, and paying earns me -$1000, and a 50% chance that I am a Bill-simulation, and paying earns me $0. Hence paying earns me an expected -$500.

Replies from: AlephNeil
comment by AlephNeil · 2010-12-19T19:01:43.691Z · LW(p) · GW(p)

If you know there is going to be a simulation then your subjective probability for the state of the real coin is that it's heads with probability 1/2. And if the coin is really tails then, assuming Omega is perfect, your action of 'giving money' (in the simulation) seems to be "determining" whether or not you receive money (in the real world).

(Perhaps you'll simply take this as all the more reason to rule out the possibility that there can be a perfect Omega that doesn't create a conscious simulation of you? Fair enough.)

Replies from: dankane
comment by dankane · 2010-12-19T20:09:10.128Z · LW(p) · GW(p)

I'm not sure I would buy this argument unless you could claim that my Bill-simulation's actions would cause Omega to give or not give me money. At the very least it should depend on how Omega makes his prediction.

Replies from: AlephNeil, AlephNeil
comment by AlephNeil · 2010-12-20T02:09:49.986Z · LW(p) · GW(p)

Perhaps a clearer variation goes as follows: Bill arranges so that if the coin is tails then (a) he will temporarily receive your winnings, if you get any, and (b) he will do a flawless imitation of Omega asking for money.

If you pay Bill then he returns both what you paid and your winnings (which you're guaranteed to have, by hypothesis). If you don't pay him then he has no winnings to give you.

comment by AlephNeil · 2010-12-20T02:05:02.393Z · LW(p) · GW(p)

Well look: If the real coin is tails and you pay up, then (assuming Omega is perfect, but otherwise irrespectively of how it makes its prediction) you know with certainty that you get the prize. If you don't pay up then you would know with certainty that you don't get the prize. The absence of a 'causal arrow' pointing from your decision to pay to Omega's decision to pay becomes irrelevant in light of this.

(One complication which I think is reasonable to consider here is 'what if physics is indeterministic and so knowing your prior state doesn't permit Omega (or Bill) to calculate with certainty what you will do?' Here I would generalize the game slightly so that if Omega calculates that your probability of paying up is p then you receive proportion p of the prize. Then everything else goes through unchanged - Omega and Bill will now calculate the same probability that you pay up.)

Replies from: dankane
comment by dankane · 2010-12-20T03:53:57.090Z · LW(p) · GW(p)

OK. I am uncomfortable with the idea of dealing with the situation where Omega is actually perfect.

I guess this boils down to me being not quite convinced by the arguments for one-boxing in Newcomb's problem without further specification of how Omega operates.

Replies from: AlephNeil
comment by AlephNeil · 2010-12-20T04:38:15.737Z · LW(p) · GW(p)

Do you know about the "Smoking Lesion" problem?

At first sight it appears to be isomorphic to Newcomb's problem. However, a couple of extra details have been thrown in:

  1. A person's decisions are a product of both conscious deliberation and predetermined unconscious factors beyond their control.
  2. "Omega" only has access to the latter.

Now, I agree that when you have an imperfect Omega, even though it may be very accurate, you can't rule out the possibility that it can only "see" the unfree part of your will, in which case you should "try as hard as you can to two-box (but perhaps not succeed)." However, if Omega has even "partial access" to the "free part" of your will then it will usually be best to one-box.

Or at least this is how I like to think about it.

Replies from: dankane
comment by dankane · 2010-12-20T06:27:38.046Z · LW(p) · GW(p)

I did not know about it, thanks for pointing it out. It's Simpson's paradox as a decision theory problem.

On the other hand (ignoring issues of Omega using magic or time travel, or you making precommitments), isn't Newcomb's problem always like this, in that there is no direct causal relationship between your decision and his prediction, just that they share some common causation?

comment by Manfred · 2010-12-19T04:53:06.497Z · LW(p) · GW(p)

1) Yes, perfection is terribly unrealistic, but I think it gets too complicated to be interesting if it's done any other way. It's like a limit in mathematics - in fact, it should be the limit of relating to any prediction process as that process approaches perfection, or else you have a nasty discontinuity in your decision process, because all perfect processes can just be defined as "it's perfect."

2) Okay.

3) Statistical correlation, but not causal, so my definition would still tell them apart. In short, if you could throw me into the sun and then simulate me to atom-scale perfection, I would not want you to. This is because continuity is important to my sense of self.

4) Because any solution to the problem of consciousness and the relationship between how much like you it is and how much you identify with it is going to be arbitrary. And so the picture in my head is that the function of how much you would be willing to pay becomes multivalued as Omega becomes imperfect. And my brain sees a multivalued function and returns "not actually a function. Do not use."

Replies from: dankane
comment by dankane · 2010-12-19T05:17:33.691Z · LW(p) · GW(p)

1) OK taking a limit is an idea I hadn't thought of. It might even defeat my argument that your answer depends on how Omega achieves this. On the other hand:

a) I am not sure what the rest of my beliefs would look like anymore if I saw enough evidence to convince me that Omega was right all the time with probability 1-1/3^^^3.

b) I doubt that the above is even possible, since given my argument you shouldn't be able to convince me that the probability is less than say 10^-10 that I am a simulation talking to something that is not actually Omega.

3) I am not sure why you think that the simulation is not causally a copy of you. Either that or I am not sure what your distinction between statistical and causal is.

3+4) I agree that one of the weaknesses of this theory is that it depends heavily, among other things, on a somewhat controversial theory of identity / what it means to win. Though I don't see why the amount that you identify with an imperfect copy of yourself should be arbitrary - or, at the very least, if that's the case, why it's a problem for the dependence of your actions on Omega's degree of perfection to be arbitrary but not a problem for your identification with imperfect copies of yourself to be arbitrary.

comment by Desrtopa · 2010-12-01T00:02:26.298Z · LW(p) · GW(p)

My first thought on the coalition scenario is that the solution might hinge on something as simple as the agents deciding to avoid a stable equilibrium that does not terminate in anyone ending up with pie.

Edit: this seems to already have been discussed at length. That'll teach me to reply to year old threads without an adequate perusal of the preexisting comments.

comment by PhilGoetz · 2009-08-06T01:00:54.828Z · LW(p) · GW(p)

"I just flipped a fair coin. I decided, before I flipped the coin, that if it came up heads, I would ask you for $1000. And if it came up tails, I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads. The coin came up heads - can I have $1000?"

Obviously, the only reflectively consistent answer in this case is "Yes - here's the $1000", because if you're an agent who expects to encounter many problems like this in the future, you will self-modify to be the sort of agent who answers "Yes" to this sort of question - just like with Newcomb's Problem or Parfit's Hitchhiker.

Why would I expect that, the next time I encounter an Omega, it will say, "I would give you $1,000,000 if and only if I predicted that you would give me $1000 if the coin had come up heads", instead of, "I would give you $1,000,000 if and only if I predicted that you would not give me $1000 if the coin had come up heads" ?

comment by Jotaf · 2009-07-21T03:47:58.324Z · LW(p) · GW(p)

I don't really wanna rock the boat here, but in the words of one of my professors, it "needs more math".

I predict it will go somewhat like this: you specify the problem in terms of A implies B, etc.; you find out there's infinite recursion; you prove that the solution doesn't exist. Reductio ad absurdum, anyone?

comment by ArthurB · 2009-07-21T03:10:39.497Z · LW(p) · GW(p)

Instead of assuming that others will behave as a function of our choice, we look at the rest of the universe (including other sentient beings, including Omega) as a system where our own code is part of the data.

Given a prior on physics, there is a well defined code that maximizes our expected utility.

That code wins. It one-boxes, it pays Omega when the coin comes up heads, etc.

I think this solves the infinite regress problem, albeit in a very impractical way.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-07-21T13:34:42.273Z · LW(p) · GW(p)

This doesn't sound obviously wrong, but is too vague even for an informal answer.

Replies from: ArthurB
comment by ArthurB · 2009-07-21T15:05:11.508Z · LW(p) · GW(p)

Well, if you want practicality, I think Omega problems can be disregarded; they're not realistic. It seems that the only feature needed for the real world is the ability to make trusted promises as we encounter the need to make them.

If we are not concerned with practicality but with the theoretical problem behind these paradoxes, the key is that other agents make predictions about your behavior, which is the same as saying they have a theory of mind, which is simply a belief distribution over your own code.

To win, you should take the actions that make their belief about your own code favorable to you, which can include lying, or modifying your own code and showing it to make your point.

It's not our choice that matters in these problems but our choosing algorithm.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-07-21T18:30:51.201Z · LW(p) · GW(p)

Again, if you can state the same with precision, it could be valuable, while on this level my reply is "So?".

Replies from: ArthurB
comment by ArthurB · 2009-07-21T18:45:39.336Z · LW(p) · GW(p)

I confess I do not grasp the problem well enough to see where the problem lies in my comment. I am trying to formalize the problem, and I think the formalism I describe is sensible.

Once again, I'll reword it, but I think you'll still find it too vague: to win, one must act rationally, and the set of possible actions includes modifying one's code.

The question was:

My timeless decision theory only functions in cases where the other agents' decisions can be viewed as functions of one argument, that argument being your own choice in that particular case - either by specification (as in Newcomb's Problem) or by symmetry (as in the Prisoner's Dilemma). If their decision is allowed to depend on how your decision depends on their decision - like saying, "I'll cooperate, not 'if the other agent cooperates', but only if the other agent cooperates if and only if I cooperate - if I predict the other agent to cooperate unconditionally, then I'll just defect" - then in general I do not know how to resolve the resulting infinite regress of conditionality, except in the special case of predictable symmetry

I do not know the specifics of Eliezer's timeless decision theory, but it seems to me that if one looks at the decision processes of others as based on their beliefs about your code, not on your decisions, there is no infinite regress problem.

You could say: "Ah, but there is your belief about an agent's code, then his belief about your belief about his code, then your belief about his belief about your belief about his code, and that looks like an infinite regression." However, there is really no regression, since "his belief about your belief about his code" is entirely contained in "your belief about his code".
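
A toy illustration of this in Python - treating each player as a program that receives the other player's source code - just to show that the "belief about code" formulation closes the regress in the mirror case (this is only a sketch, not a statement of Eliezer's theory):

```python
import inspect

def mirror_cooperator(opponent_source: str) -> str:
    # Cooperate exactly when the opponent runs this very program.
    return 'C' if opponent_source == inspect.getsource(mirror_cooperator) else 'D'

def defect_bot(opponent_source: str) -> str:
    return 'D'

me = inspect.getsource(mirror_cooperator)
foe = inspect.getsource(defect_bot)
print(mirror_cooperator(me), mirror_cooperator(me))  # C C in the mirror matchup
print(mirror_cooperator(foe), defect_bot(me))        # D D against a defector
```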

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-07-21T21:06:51.577Z · LW(p) · GW(p)

Thanks, this comment makes your point clearer. See cousin_it's post Re-formalizing PD.

comment by nawitus · 2009-07-20T10:58:38.953Z · LW(p) · GW(p)

If you're an AI, you do not have to (and shouldn't) pay the first $1000; you can just self-modify to pay $1000 in all the following coin flips (if we assume that the AI can easily rewrite/modify its own behaviour in this way). Human brains probably don't have this capability, so I guess paying $1000 even in the first game makes sense.

Replies from: JamesAndrix
comment by JamesAndrix · 2009-07-20T19:26:33.970Z · LW(p) · GW(p)

That assumes that you didn't expect to face problems like that in the future before Omega presented you with the problem, but do expect to face problems like that in the future after Omega presents you with the problem. It doesn't work at all if you only get one shot at it. (And you should already be a person who would pay, just in case you do.)

comment by timtyler · 2009-07-20T07:03:33.876Z · LW(p) · GW(p)

I had a look at the existing literature. It seems as though the idea of a "rational agent" who takes one box goes quite a way back:

"Rationality, Dispositions, and the Newcomb Paradox" (Philosophical Studies, volume 88, number 1, October 1997)

Abstract: "In this article I point out two important ambiguities in the paradox. [...] I draw an analogy to Parfit's hitchhiker example which explains why some people are tempted to claim that taking only one box is rational. I go on to claim that although the ideal strategy is to adopt a necessitating disposition to take only one box, it is never rational to choose only one box. [...] I conclude that the rational action for a player in the Newcomb Paradox is taking both boxes, but that rational agents will usually take only one box because they have rationally adopted the disposition to do so."

comment by mariorz · 2009-07-20T04:25:35.179Z · LW(p) · GW(p)

Why can't Omega's coin toss game be expressed formally simply by using expected values?

n = expected further encounters of Omega's coin toss game

EV(Yes) = -1000 + (0.5 * n * -1000) + (0.5 * n * 1000000)

EV(No) = 0
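
Evaluating that expression for a few values of n (a sketch; n is the expected number of future encounters, as defined above):

```python
def ev_yes(n):
    return -1000 + 0.5 * n * (-1000) + 0.5 * n * 1_000_000

for n in (0, 1, 2):
    print(n, ev_yes(n))  # -1000.0, 498500.0, 998000.0
```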

comment by Bo102010 · 2009-07-20T00:38:59.324Z · LW(p) · GW(p)

On dividing the pie, I ran across this in an introduction to game theory class. I think the instructor wanted us to figure out that there's a regress and see how we dealt with it. Different groups did different things, but two members of my group wanted to be nice and not cut anyone out, so our collective behavior was not particularly rational. "It's not about being nice! It's about getting the points!" I kept saying, but at the time the group was about 16 (and so was I), and had varying math backgrounds, and some were less interested in that aspect of the game.

I think at least one group realized there would always be a way to undermine the coalitions that assembled, and cut everyone in equally.

Replies from: dclayh
comment by dclayh · 2009-07-20T01:22:14.976Z · LW(p) · GW(p)

two members of my group wanted to be nice and not cut anyone out, so our collective behavior was not particularly rational.

One might guess that evolution granted us a strong fairness drive to avoid just these sorts of decision regresses.

Replies from: Eliezer_Yudkowsky, kpreid
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-07-20T01:55:30.466Z · LW(p) · GW(p)

Fail.

Replies from: dclayh
comment by dclayh · 2009-07-20T05:00:30.512Z · LW(p) · GW(p)

It's not group selection: if group A splits things evenly and moves on, while group B goes around and around with fractious coalitions until a tiger comes along and eats them, then being in group A confers an individual advantage.

Clearly evolution also gave us the ability to make elaborate justifications as to why we, particularly, deserve more than an equal share. But that hardly disallows the fairness heuristic as a fallback option when the discussion is taking longer than it deserves. (And some people just have the stamina to keep arguing until everyone else has given up in disgust. These usually become middle managers or Congressmen.)

Replies from: orthonormal
comment by orthonormal · 2009-07-20T05:47:04.083Z · LW(p) · GW(p)

What you just described is group selection, and thus highly unlikely.

It's to your individual benefit to be more (unconsciously) selfish and calculating in these situations, whether the other people in your group have a fairness drive or not.

Replies from: Vladimir_Nesov, timtyler, dclayh
comment by Vladimir_Nesov · 2009-07-20T11:40:03.370Z · LW(p) · GW(p)

It's to your individual benefit to be more (unconsciously) selfish and calculating in these situations, whether the other people in your group have a fairness drive or not.

Not if you are punished for selfishness. I'm not sure how reasonable the following analysis is (since I didn't study this kind of thing at all); it suggests that fairness is a stable strategy, and given some constraints a more feasible one than selfishness:

M. A. Nowak, et al. (2000). "Fairness versus reason in the ultimatum game." Science 289(5485): 1773-1775. (PDF)

Replies from: orthonormal
comment by orthonormal · 2009-07-20T17:28:45.975Z · LW(p) · GW(p)

See reply to Tim Tyler.

comment by timtyler · 2009-07-20T07:39:00.933Z · LW(p) · GW(p)

...and if your companions have circuitry for detecting and punishing selfish behaviour - what then? That's how the "fairness drive" is implemented - get mad and punish cheaters until it hurts. That way, cheaters learn that crime doesn't pay - and act fairly.

Replies from: orthonormal
comment by orthonormal · 2009-07-20T17:27:15.974Z · LW(p) · GW(p)

I agree. But you see how this individual selection pressure towards fairness is different from the group selection pressure that dclayh was actually asserting?

Replies from: timtyler
comment by timtyler · 2009-07-20T19:17:23.238Z · LW(p) · GW(p)

You and EY seem to be the people who are talking about group selection.

comment by dclayh · 2009-07-20T06:36:48.024Z · LW(p) · GW(p)

It's to your individual benefit to be more (unconsciously) selfish and calculating in these situations

Not when the cost (including opportunity cost) of doing the calculating outweighs the benefit it would give you.

Replies from: orthonormal
comment by orthonormal · 2009-07-20T17:32:15.587Z · LW(p) · GW(p)

You're introducing weaker and less plausible factors to rescue a mistaken assertion. It's not worth it.

As pointed out below in this thread, the fairness drive almost certainly comes from the individual pressure of cheaters being punished, not from any group pressure as you tried to say above.

comment by kpreid · 2009-07-20T02:11:01.976Z · LW(p) · GW(p)

Statement of the obvious: Spending excessive time deciding is neither rational nor evolutionarily favored.

comment by dankane · 2010-12-19T21:51:31.354Z · LW(p) · GW(p)

Here's my argument for why (ignoring the simulation arguments) you should actually refuse to give Omega money.

Here's what actually happened:

Omega flipped a fair coin. If it came up heads, the stated conversation happened. If it came up tails and Omega predicted that you would have given him $1000, he stole $1000000 from you.

If you have a policy of paying, you earn 10^6/4 - 10^3/4 - 10^6/2 = -$250250. If you have a policy of not paying, you get $0.
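
A quick check of that arithmetic (a sketch of the paying policy's expected value under the stated setup):

```python
# Outer coin heads -> the ordinary counterfactual mugging (which itself
# involves a 50/50 coin); outer coin tails -> a payer loses $1,000,000.
ev_paying_policy = 0.5 * (0.5 * 1_000_000 - 0.5 * 1_000) + 0.5 * (-1_000_000)
print(ev_paying_policy)  # -250250.0
```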

More realistically having a policy of paying Omega in such a situation could earn or lose you money if people interact with you based on a prediction of your policy, but there is no reason to suspect one over the other.

There's a similar problem with the Prisoner's Dilemma solution. If you formalize it as two copies of you being in a Prisoner's Dilemma and able to see each other's code, then modifying your code to cooperate in the mirror matchup helps you in the mirror matchup, but hurts you if you are playing against a program that cooperates unless you would cooperate in a mirror matchup. Unless you have a reason to suspect that running into one is more likely than running into the other, you can't tell which would work better.

Replies from: Desrtopa, dankane
comment by Desrtopa · 2010-12-20T05:13:22.787Z · LW(p) · GW(p)

Unless I'm misunderstanding you, this is a violation of one of the premises of the problem, that Omega is known to be honest about how he poses dilemmas.

Replies from: dankane, wedrifid
comment by dankane · 2010-12-20T05:48:31.109Z · LW(p) · GW(p)

Fine. If you think that Omega would have told me about the previous coin flip, consider this:

There are two different supernatural entities who can correctly predict my response to the counterfactual mugging. There's Omega and Z.

Two things could theoretically happen to me:

a) Omega could present me with the counterfactual mugging problem.

b) Z could decide to steal $1000000 from me if and only if I would have given Omega $1000 in the counterfactual mugging.

When I am trying to decide on a policy for dealing with counterfactual muggings, I should note that my policy will affect my outcome in both situation (a) and situation (b). The policy of giving Omega money will win me $499500 (expected) in situation (a), but it will lose me $1000000 in situation (b). Unless I have a reason to suspect that (a) is at least twice as likely as (b), I have no reason to prefer the policy of giving Omega money.
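
The implied break-even ratio, as a quick sketch:

```python
# For a policy of paying, compare the expected gain if (a) occurs with the loss if (b) occurs.
gain_in_a = 0.5 * 1_000_000 - 0.5 * 1_000  # 499500
loss_in_b = 1_000_000
print(loss_in_b / gain_in_a)  # ~2.002: (a) must be roughly twice as likely as (b) to break even
```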

Replies from: Desrtopa, Caspian
comment by Desrtopa · 2010-12-20T14:12:05.772Z · LW(p) · GW(p)

The basis of the dilemma is that you know that Omega, who is honest about the dilemmas he presents, exists. You have no evidence that Z exists. You can posit his existence, but it doesn't make the dilemma symmetrical.

Replies from: dankane
comment by dankane · 2010-12-20T16:26:26.499Z · LW(p) · GW(p)

But if instead Z exists, shows up on your doorstep and says (in his perfectly trustworthy way) "I will take your money if and only if you would have given money to Omega in the counterfactual mugging", then you have evidence that Z exists but no evidence that Omega does.

The point is that you need to make your policy before either entity shows up. Therefore unless you have evidence now that one is more likely than the other, not paying Omega is the better policy (unless you think of more hypothetical entities).

comment by Caspian · 2010-12-20T12:09:15.210Z · LW(p) · GW(p)

Agreed. Neither is likely to happen, but the chance of something analogous happening may be relevant when forming a general policy. Omega in Newcomb's problem is basically asking you to guard something for pay, without looking at it or stealing it. The unrealistic part is Omega being a perfect predictor and perfectly trustworthy, and you therefore knowing the exact situation.

Is there a more everyday analogue to Omega as the Counterfactual Mugger?

Replies from: shokwave
comment by shokwave · 2010-12-20T12:35:03.750Z · LW(p) · GW(p)

Is there a more everyday analogue to Omega as the Counterfactual Mugger?

People taking bets for you in your absence.

It's probably a good exercise to develop a real-world analogue to all philosophical puzzles such as this wherever you encounter them; the purpose of such thought experiments is not to create entirely new situations, but to strip away extraneous concerns and heuristics like "but I trust my friends" or "but nobody is that cold-hearted" or "but nobody would give away a million dollars for the hell of it, there must be a trick".

Replies from: dankane
comment by dankane · 2010-12-20T16:40:18.027Z · LW(p) · GW(p)

Good point. On the other hand, I think that Omega being a perfect predictor through some completely unspecified mechanism is one of the most confusing parts of this problem. Also, as I was saying, it is a complicating issue that you do not know anything about the statistical behavior of possible Omegas (though I guess that there are ways to fix that in the problem statement).

Replies from: shokwave, wedrifid
comment by shokwave · 2010-12-21T00:28:19.706Z · LW(p) · GW(p)

I think that Omega being a perfect predictor through some completely unspecified mechanism is one of the most confusing parts of this problem.

It may be a truly magical power, but any other method of stipulating better-than-random prediction has a hole in it that lets people ignore the actual decision in favor of finding a method to outsmart said prediction method. Parfit's Hitchhiker, as usually formalised on LessWrong, involves a more believable good-enough lie-detector - but prediction is much harder than lie-detection, we don't have solid methods of prediction that aren't gameable, and so forth, until it's easier to just postulate Omega to get people to engage with the decision instead of the formulation.

Replies from: dankane
comment by dankane · 2010-12-21T08:15:57.617Z · LW(p) · GW(p)

Now, if the method of prediction were totally irrelevant, I think I would agree with you. On the other hand, the method of prediction can be the difference between your choice directly putting the money in the box in Newcomb's problem and a smoking-lesion problem. If the method of prediction is relevant, then requiring an unrealistic perfect predictor is going to leave you with something pretty unintuitive. I guess that a perfect simulation or a perfect lie detector would be reasonable, though. On the other hand, outsmarting the prediction method may not be an option. Maybe they give you a psychology test, and only afterwards offer you a Newcomb problem. In any case, I feel like confusing bits of the problem statement are perhaps just being moved around.

comment by wedrifid · 2010-12-20T16:56:42.945Z · LW(p) · GW(p)

Also, as I was saying, it is a complicating issue that you do not know anything about the statistical behavior of possible Omegas (though I guess that there are ways to fix that in the problem statement).

There is one Omega, Omega, and Newcomb's problem gives his profile!

comment by wedrifid · 2010-12-20T05:27:27.696Z · LW(p) · GW(p)

And he makes a similar mistake in his consideration of the Prisoner's Dilemma. The prisoners are both attempting to maximise their (known) utility functions. You aren't playing against an actively malicious agent out to steal people's lunch. You do have reason to expect agents to be more likely to follow their own self-interest than not, even in cases where this isn't outright declared as part of the scenario.

Replies from: dankane
comment by dankane · 2010-12-20T05:52:00.238Z · LW(p) · GW(p)

Here I'm willing to grant a little more. I still claim that whether or not cooperating in the mirror match is a good strategy depends on knowing statistical information about the other players you are likely to face. On the other hand in this case, you may well have more reasonable grounds for your belief that you will see more mirror matches than matches against people who specifically try to punish those who cooperate in mirror matches.

comment by dankane · 2011-01-21T20:24:45.642Z · LW(p) · GW(p)

Having thought about it a little more, I think I have pinpointed my problem with building a decision theory in which real outcomes are allowed to depend on the outcomes of counterfactuals:

The output of your algorithm in a given situation will need to depend on your prior distribution and not just on your posterior distribution.

In CDT, your choice of actions depends only on the present state of the universe. Hence you can make your decision based solely on your posterior distribution on the present state of the universe.

If you need to deal with counterfactuals though, the output of your algorithm in a given situation should depend not only on the state of the universe in that situation, but on the probability that this situation appears in a relevant counterfactual and upon the results thereof. I cannot just consult my posterior and ask about the expected results of my actions. I also need to consult my prior and compute the probability that my payouts will depend on a counterfactual version of this situation.

comment by [deleted] · 2009-07-20T06:07:06.659Z · LW(p) · GW(p)

At the point where Omega asks me this question, I already know that the coin came up heads, so I already know I'm not going to get the million. It seems like I want to decide "as if" I don't know whether the coin came up heads or tails, and then implement that decision even if I know the coin came up heads. But I don't have a good formal way of talking about how my decision in one state of knowledge has to be determined by the decision I would make if I occupied a different epistemic state, conditioning using the probability previously possessed by events I have since learned the outcome of...

Well, it seems to me that you always want to do this. According to timeless-reflectively-consistent-yada-yada decision theory, the best decision to make is to follow the strategy that you would have chosen at the very beginning.

The precise constraint this problem places on you is that the context you make your decision in is that there is a 50% chance that your decision results in you getting $1,000,000 instead of nothing.

Treat your observations as putting you in the context in which you make your decision.

comment by CannibalSmith · 2009-07-20T10:37:22.568Z · LW(p) · GW(p)

I stopped reading at "Yes, you say". The correct solution is obviously obvious: you give him your credit card and promise to tell him the PIN once you're at the ATM.

You could also try to knock him off his bike.

Replies from: JGWeissman
comment by JGWeissman · 2009-07-20T23:03:39.733Z · LW(p) · GW(p)

It seems quite convenient that you can physically give him your credit card.

Replies from: CannibalSmith
comment by CannibalSmith · 2009-07-21T08:04:09.305Z · LW(p) · GW(p)

The least convenient possible world has the following problems:

  • It's possible, but how probable?
  • It's nothing like the real world.
  • You always lose in it.

Due to these traits, lessons learned in such a world are worthless in the real one and by invoking it you accomplish nothing.

comment by mendel · 2011-05-22T14:47:53.506Z · LW(p) · GW(p)

"Suppose you have ten ideal game-theoretic selfish agents and a pie to be divided by majority vote. "

Well then, the statistical expected (average) share any agent is going to get long-term is 1/10th of the pie. The simplest solution that ensures this is the equal division; anticipating this from the start cuts down on negotiation costs, and if a majority agrees to follow this strategy (i.e. agrees not to realize more than their "share"), it is also stable - anyone who ponders upsetting it risks being the "odd man out" who eats the loss of an asymmetric strategy.

In practice (i.e. in real life) there are other situations that are relatively stable; i.e., after a few rounds of "outsiders" bidding low to get in, there might be two powerful "insiders" who get large shares in liaison with four smaller insiders who agree to a very small share because it is better than nothing; the best the insiders can do then is to offer the four outsiders small shares also, so that each small-share individual will be faced with the choice of cooperating and receiving a small share, or not cooperating and receiving nothing. Whether the two insiders can pull this off will depend on how they frame the problem, and how they present themselves ("we are the stabilizers that ensure that "social justice" is done and nobody has to starve").

How you can get an AI to understand setups like this (and if it wants to move past the singularity, it probably will have to) seems to be quite a problem; to recognize that statistically, it can realize no more than 1/10th, and to push for the simplest solution that ensures this seems far easier (and yet some commentators seem to think that this solution of "cutting everyone in" is somehow "inferior" as a strategy - puny humans ;-).