The true prisoner's dilemma with skewed payoff matrix

post by Jonii · 2010-11-20T20:37:14.926Z · LW · GW · Legacy · 42 comments

Related to: "The True Prisoner's Dilemma"; "Let's split the cake, lengthwise, upwise and slantwise"; "If you don't know the name of the game, just tell me what I mean to you"

tl;dr: In the true PD, there are situations where you should co-operate while expecting the other player to defect, or vice versa, even against agents capable of superrationality. This is because the relative weight of the outcomes can differ greatly between the two parties. Agents that take this into account could outperform even superrational ones.

So, it happens that our benevolent Omega actually has an evil twin, who is just as trustworthy as his sibling but abducts people into far worse hypothetical scenarios. Here is one:

You wake up in a strange dimension, and this Evil-Omega is smiling at you and explains that you're about to play a game with an unknown paperclip maximizer from another dimension, one that you haven't interacted with before and won't ever interact with again. The alien is like a GLUT when it comes to consciousness: it runs a simple approximation of a rational decision algorithm, but has nothing that you could think of as a "personality" or "soul". Also, since it doesn't have a soul, you have absolutely no reason to feel bad about its losses. This is the true PD.

You are also told some specifics about the algorithm that the alien uses to reach its decision, and likewise told that the alien is told about as much about you. At this point I don't want to nail the opposing alien's algorithm down to one specific case. We're looking for a method that wins when summed over all these possibilities. In particular, we're looking at the group of AIs that are capable of superrationality, since against other agents the game is trivial.

The payoff matrix, with your move listed first and each outcome written as (your loss, the clipper's loss), is like this:

DD = (lose 3 billion lives and be tortured, lose 4 paperclips)
CC = (lose 2 billion lives and be made miserable, lose 2 paperclips)
CD = (lose 5 billion lives and be tortured a lot, nothing)
DC = (nothing, lose 8 paperclips)

So, what do you do? The opponent is capable of superrationality. In the post "The True Prisoner's Dilemma", it was (kinda, vaguely, implicitly) assumed for simplicity's sake that this information is enough to decide whether to defect or not, and the answer, based on this information, could be to co-operate. However, I argue that the information given is not enough.

Back to the hypothetical: the in-hypothetical you is still wondering about his/her decision, but we zoom out and observe that, unbeknownst to you, Omega has abducted your fellow LW reader and another paperclip maximizer from that same dimension, and is making them play PD. But this time their payoff matrix is like this:

DD = (lose $0.04, 2 random, small changes to the alien's utility function and 200 paperclips lost)
CC = (lose $0.02, 1 change and 100 paperclips lost)
CD = (lose $0.08, nothing)
DC = (nothing, 4 changes and 400 paperclips lost)

Now, if it's not "rational" to take the relative losses into account, we're bound to find ourselves in a situation where billions of humans die. You could even end up regretting your rationality. It should be obvious by now that you'd wish you could somehow negotiate across both of these PDs, so that you defect and your opponent co-operates while your friend co-operates and their opponent defects. You'd be totally willing to take the $0.08 hit for that, maybe paying it in its entirety on your friend's behalf. And as it happens, the paperclip maximizers would have exactly the same incentive to make this trade.
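
To make the comparison concrete, here is a minimal sketch in Python. The numbers are invented, treating each side's losses as if they sat on a single made-up utility scale that only loosely mirrors the quantities above (one unit per life or per paperclip, with torture and utility-function changes ignored); nothing hinges on the exact values, only on how lopsided they are.

```python
# Losses as (human side, clipper side), indexed by (human move, clipper move).
# All numbers are purely illustrative stand-ins for the payoffs above.
GAME_1 = {  # your game: billions of lives vs. a handful of paperclips
    ("D", "D"): (3e9, 4),
    ("C", "C"): (2e9, 2),
    ("C", "D"): (5e9, 0),
    ("D", "C"): (0, 8),
}
GAME_2 = {  # your friend's game: a few cents vs. hundreds of paperclips
    ("D", "D"): (0.04, 200),
    ("C", "C"): (0.02, 100),
    ("C", "D"): (0.08, 0),
    ("D", "C"): (0.00, 400),
}

def combined_losses(outcome_1, outcome_2):
    """Total losses for humanity and for clipperkind across both games."""
    h1, c1 = GAME_1[outcome_1]
    h2, c2 = GAME_2[outcome_2]
    return h1 + h2, c1 + c2

naive = combined_losses(("C", "C"), ("C", "C"))  # both games end in mutual co-operation
trade = combined_losses(("D", "C"), ("C", "D"))  # you defect, your friend co-operates

print(naive)  # (2000000000.02, 102): two billion lives plus two cents; 102 paperclips
print(trade)  # (0.08, 8): eight cents; eight paperclips
```

Each side prefers the trade in every component - humanity loses cents instead of lives, and clipperkind loses 8 paperclips instead of 102 plus a utility-function change - so the conclusion doesn't actually depend on the made-up common scale.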

But, of course, the players don't know about this entire situation, so they might not be able to act optimally in this specific scenario. However, if they take into account how much the other cares about those results, using some unknown method, they just might be able to systematically perform better (if we posed more problems of this sort, or if we selected the payoffs at random for the one-shot game) than "naive" PD players playing against each other. Naivety here means simply and blindly co-operating against equally rational opponents. How to achieve this is the open question.

-

Stuart Armstrong, for example, has an actual idea of how to co-operate when the payoffs are skewed, while I'm just pointing out that there's a problem to be solved, so this is not really news. Anyway, I still think this topic has not been explored as much as it should be.

Edit. Added the paragraph explaining what you are told about the alien's algorithm, and that we're focusing on AIs capable of superrationality, since against other sorts of agents the game is trivial.

Edit. Corrected some big errors here and there, like mixing up the hypothetical you and the hypothetical LW friend.

Edit. Transfer Discussion -> Real LW complete!

42 comments

Comments sorted by top scores.

comment by Scott Alexander (Yvain) · 2010-11-17T14:49:54.774Z · LW(p) · GW(p)

Humans seem to have a built-in solution to this dilemma, in that if I were presented with this situation and another human, where the payoff was something like minus ten, zero, or plus ten cents for me, versus insta-death, nothing, or ten billion dollars for the other human, I would voluntarily let the other person win and I would expect the other person to do the same to me if our situations were reversed. This means humans playing against other humans will all do exceptionally well in these sorts of dilemmas.

So this seems like an intelligent decision theoretic design choice, along the lines of "Precommit to maximizing the gains of the agent with the high gains now, in the hope of acausally influencing the other agent to do the same, thus making us both better off if we ever end up in a true prisoner's dilemma with skewed payoff matrix."

If I believe the alien to be sufficiently intelligent/well-programmed, and if I expect the alien to believe me to also be sufficiently intelligent/well-programmed, I would at least consider the alien graciously letting me win the first option in exchange for my letting the alien win the second. Even if only one of the two options is ever presented, and the second is the same sort of relevant hypothetical as a Counterfactual Mugging.

Replies from: Jonii
comment by Jonii · 2010-11-30T00:28:37.085Z · LW(p) · GW(p)

Yes, humans performing outstandingly well in this sort of problem was my inspiration for this. I am not sure how far it is possible to generalize this sort of winning. Humans themselves are kinda complex machines, so, if we start with a perfectly rational LW reader and a paperclip maximizer playing a one-shot PD with a randomized payoff matrix, what is the smallest set of handicaps we need to give them to reach this super-optimal solution? At first, I thought we could even remove the randomization altogether, but I think that makes the whole problem more ambiguous.

comment by Manfred · 2010-11-16T05:18:24.367Z · LW(p) · GW(p)

Of course it would be nice if you could negotiate in things like the Prisoner's dilemma - that's pretty much why you can't. If you could find a common-ish real life analogue of the situation you're thinking about, though, I'd probably give a far less snippy answer :P

As for regretting your choice when the stakes are huge - I think "just dealing with it" is an interesting skill that we humans have. But more seriously, if ALL mistakes are scaled up, it shouldn't make any difference to the best option, nor should it make you feel more guilty if you do your best and fail.

comment by Kingreaper · 2010-11-16T17:18:25.563Z · LW(p) · GW(p)

In order to define value, you need to have more than one possible trade.

So only by knowing about both dilemmas can you define value. Since, by the problem definition, you only know of one of them, defining relative value is impossible.

Replies from: Jonii
comment by Jonii · 2010-11-16T18:39:04.864Z · LW(p) · GW(p)

So only by knowing about both dilemmas can you define value. Since, by the problem definition, you only know of one of them, defining relative value is impossible.

You'd only need to know what sort of utility function the other player has for that. Sure, you wouldn't know that another deal like this is happening right there, so it might not be possible to reach a counterfactual agreement in this particular case. But as a general rule, it does seem possible to outperform agents that don't go for this sort of "I take a hit for you if you'd take a hit for me in the scenario where the stakes are reversed". Which leaves multiple questions open, I agree.

comment by Psychohistorian · 2010-11-20T21:33:30.437Z · LW(p) · GW(p)

TL;DR: If there were prisoner's dilemmas being run in parallel with vastly skewed payoffs in different directions, it would be beneficial if all parties could change their strategies to accommodate this.

Methinks we have gone well past the deep end of barely-relevant hypotheticals, and are currently swimming somewhere in the concrete under the bleachers. This is doubly true when it's assumed you are ignorant of these other entities, simply because you have absolutely no reason to suspect they exist, or to guess at their relative frequencies. Why have you even bothered privileging this hypothesis?

Replies from: wedrifid, Bongo
comment by wedrifid · 2010-11-30T00:34:24.205Z · LW(p) · GW(p)

Why have you even bothered privileging this hypothesis?

This is a misuse of the 'privileging this hypothesis' phrase. A barely relevant hypothetical is not a hypothesis. Such a hypothetical could be used rhetorically in order to advocate an implicit privileged hypothesis but that is not what the author has done here.

(This means only that you need a different name for your objection.)

comment by Bongo · 2010-11-30T00:15:08.890Z · LW(p) · GW(p)

Methinks we have gone well past the deep-end of barely-relevant hypotheticals, and are currently swimming somewhere in the concrete under the bleachers.

I disagree. I think all these problems have real-world analogues.

Omega bothering you? Replace him with Paul Ekman.

As for the problem in this post, forget parallel - skewed payoffs are enough. If players could coordinate in skewed prisoner's dilemmas to let the player who stands to lose/gain the most defect while the other player cooperates, they would expect huge gains. And skewed prisoner's dilemmas happen; they're not "barely-relevant hypotheticals".

Replies from: Psychohistorian
comment by Psychohistorian · 2010-11-30T19:05:37.298Z · LW(p) · GW(p)

It's not that they don't happen. The issue is that you need some ability to credibly precommit, and to bind other people in similar situations to credibly precommit, except you don't know that there are other people you need to work with.

In the vast majority of cases, we have an incredibly elegant solution to the prisoner's dilemma: contract law. Once you create sufficiently odd hypotheticals - such as skewed payoffs, single-shot, no knowledge of positive-sum exchanges, no ability to discuss or agree to positive-sum exchanges - the issue is irrelevant enough to be absurd.

If you were offered a bad-for-humans deal, would you defect, or would you simply assume that there are many other deals out there that are skewed the other way and that the paperclip maximizers who are receiving them are cooperative conditioned on your cooperation?

Replies from: Bongo
comment by Bongo · 2010-11-30T20:35:34.534Z · LW(p) · GW(p)

Haven't you noticed that people are working on decision theories that do not need to precommit in these situations to achieve the optimal outcome?

That's what's interesting. Decision theories are being explored that output the correct action without need for the crutches of precommitment and negotiation.

Also, I simply disagree that skewed payoffs, a single shot, no knowledge of positive-sum exchanges (?), and no ability to discuss make a problem "irrelevant enough to be absurd".

If you were offered a bad-for-humans deal, would you defect, or would you simply assume that there are many other deals out there that are skewed the other way and that the paperclip maximizers who are receiving them are cooperative conditioned on your cooperation?

If I had time to work out the decision theory, I might very well come to expect that the paperclipper would submit to cooperating while I defect in the bad-for-humans case, if I would similarly submit in the bad-for-paperclipper case.

Replies from: Psychohistorian
comment by Psychohistorian · 2010-12-01T22:37:56.672Z · LW(p) · GW(p)

Haven't you noticed that people are working on decision theories that do not need to precommit in these situations to achieve the optimal outcome?

If someone on the street approaches me and tells me he is counterfactually mugging me, I don't give him a dime. The odds that he is actually capable of counterfactually mugging me and is telling me the truth are virtually zero compared to the chance that he's trying to scam me out of my money. It is an essential element of every one of those weird-situation decision theories that you know you are in a weird world with certainty.

Your hypothetical removes this certainty. If you are unaware that other people face similarly skewed bargains, your decision theory cannot possibly adjust for their behaviour. If you are in a situation of full awareness of other such bargains existing, then the case seems relatively indistinguishable from the basic prisoner's dilemma with hyper-rationality.

(And "no knowledge of positive-sum exchange" means that you are ignorant of the fact that there are other PDs skewed in the opposite direction.)

Replies from: Bongo
comment by Bongo · 2010-12-02T00:30:30.609Z · LW(p) · GW(p)

About skewed PD:

Maybe I don't have to know that other skewed dilemmas are in fact happening. Maybe I just have to know that they could be happening. Or that they could have happened. Maybe it's enough to know a coin was flipped to determine in whose favor the dilemma is skewed, for example.

Here's another perspective. If I'm a UDT agent and my priors assign a roughly equal probability to ending up on either side of the skew in a skewed prisoner's dilemma against another UDT agent, the straightforward UDT answer is for the advantaged player to submit to the disadvantaged player, even if only one dilemma is "in fact" ever run.
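
To sketch that expected-value comparison (with made-up numbers on the scale of the post's matrices - this is only an illustration of the policy argument, not a serious UDT calculation):

```python
# My loss in a skewed PD, indexed by (my move, opponent's move), for the
# two sides of the skew. The values are invented, loosely echoing the post.
HIGH_STAKES = {("C", "C"): 2e9, ("C", "D"): 5e9, ("D", "C"): 0.0, ("D", "D"): 3e9}
LOW_STAKES = {("C", "C"): 0.02, ("C", "D"): 0.08, ("D", "C"): 0.0, ("D", "D"): 0.04}

def expected_loss(mine, theirs, p_high=0.5):
    """Expected loss when a fair coin decides who gets the huge stakes.

    A policy says what to play on each side of the skew, e.g.
    {"high": "D", "low": "C"}. If I land on the high-stakes side, my
    opponent is on the low-stakes side and plays their "low" move.
    """
    loss_if_high = HIGH_STAKES[(mine["high"], theirs["low"])]
    loss_if_low = LOW_STAKES[(mine["low"], theirs["high"])]
    return p_high * loss_if_high + (1 - p_high) * loss_if_low

always_cooperate = {"high": "C", "low": "C"}
submit_when_low = {"high": "D", "low": "C"}  # the advantaged (low-stakes) player submits

print(expected_loss(always_cooperate, always_cooperate))  # ~1e9: half a CC disaster
print(expected_loss(submit_when_low, submit_when_low))    # 0.04: a few cents at worst
```

Evaluated before the "coin flip", the skew-sensitive policy beats blind mutual cooperation by a huge margin, which is all the policy argument needs.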

Or if precommitment is your thing: it's in your interest to precommit to submit to a disadvantaged player in future skewed prisoner's dilemmas if the other player has similarly precommitted, because you don't yet know what kinds of skewed dilemmas you're going to encounter in the future.

About counterfactual mugging:

I'd probably pay Yvain and wouldn't think I'm in a weird world.

Actually, I'm writing up more real-world versions of many popular decision problems, and I have a particularly clever one for counterfactual mugging called "Counterfactual Insurance Co.". When I write it up properly I'll post it on LW...

Replies from: Psychohistorian
comment by Psychohistorian · 2010-12-02T03:36:51.829Z · LW(p) · GW(p)

Maybe I don't have to know that other skewed dilemmas are in fact happening. Maybe I just have to know that they could be happening. Or that they could have happened. Maybe it's enough to know a coin was flipped to determine in whose favor the dilemma is skewed, for example.

What evidence do you have to believe things are balanced? All you know is that one skewed situation exists. What evidence leads you to believe that other situations exist that are skewed relatively equally in the opposite direction? It's irrational to end up with the worst possible outcome of a PD because there might, in theory, be other PDs in which, if your opponent did what you did, you would benefit.

For what I think is a completely unexaggerated analogy: it is theoretically possible that every time I eat a banana, some entity horribly tortures an innocent person. It could happen. Absent any actual evidence that it does, my banana consumption will not change. You should not change your behaviour in a PD because it's theoretically possible that other PDs exist with oppositely skewed outcomes.

As for the counterfactual mugging, Yvain will never do it, unless he's an eccentric millionaire, because he'd lose a fortune. For any other individual, you would need substantial evidence before you would trust them.

As for precommitment, the lack of an ability to credibly precommit is one of the essential elements of a prisoner's dilemma. If the prisoners could make an enforceable contract not to snitch, it'd be easy to end up at the optimal outcome.

Replies from: Bongo
comment by Bongo · 2010-12-02T04:32:32.831Z · LW(p) · GW(p)

What evidence do you have to believe things are balanced?

What evidence do you have to believe that things are 1) unbalanced 2) in your favor?

You don't know what kinds of PD's you're going to encounter, so you prepare for all of them by setting up the appropriate precommitments, if your decision theory requires precommitments. If it doesn't, you'll just figure out and do the thing that you would have wanted to precommit to doing, "on the fly".

Credibility is indeed assumed in these problems. If you can't verify that the other player really has made the precommitment or really is a UDT kind of guy, you can't take advantage of this kind of coordination.

comment by wedrifid · 2010-11-20T21:55:50.288Z · LW(p) · GW(p)

but we zoom out and observe that, unbeknownst to you, Omega has abducted your fellow LW reader and another paperclip maximizer from that same dimension, and is making them play PD. But this time their payoff matrix is like this:

Omega-like entities are assumed not to be misleading us, and we are assumed to have absolute faith in that fact. This means that while we may not know the specifics of what Omega is doing to change the behaviour of our opponent, we will know that that kind of thing is part of the game. Since I certainly don't have enough information about the kinds of things that may change the clipper, nor the processing ability to calculate likely outcomes from such interference (including, for example, predicting LW readers), it is obviously necessary for me to defect.

You are also told some specifics about the algorithm that the alien uses to reach its decision, and likewise told that the alien is told about as much about you. At this point I don't want to nail the opposing alien's algorithm down to one specific case. We're looking for a method that wins when summed over all these possibilities. In particular, we're looking at the group of AIs that are capable of superrationality, since against other agents the game is trivial.

Had the decision not been determined elsewhere, this would be insufficient. I need to know what the clipper has been told about me and what I have been told about the clipper. "Capable of superrationality" is not especially meaningful until I am told exactly what that means in this instance.

comment by PlaidX · 2010-11-21T05:20:34.449Z · LW(p) · GW(p)

TL;DR: Entities chasing each other in spirals through their minds may eventually meet and shake hands, but logic alone does NOT give you the ability to do this. You need access to each other's source code.

It seems to me that what "superrationality" is grasping towards is the idea that if both players can predict each other's actions, that provides pragmatic grounds for cooperation. All the other crap (the skewed payoff matrix, Hofstadter's "sufficiently logical" terminology, even the connotations of the word "superrationality" itself) is a red herring.

This all hinges on the idea that your decision CAN affect their decision, through their mental emulation of you, and vice versa. If it's one-sided, we have Newcomb's problem, except it collapses to a normal prisoner's dilemma: although Omega knows whether you'll cooperate, you have no way of knowing whether Omega will cooperate, and thus he has no incentive to base his behavior on your decision, even though he knows it. He's better off always defecting.

This is a point that a lot of people here seem to get confused about. They think "but, if I could predict Omega's actions, he'd have an incentive to conditionally cooperate, and so I'D have an incentive to cooperate, and we'd cooperate, and that'd be a better outcome, ergo that must be more rational, and Omega is rational so he'll act in a way I can predict, and specifically he'll conditionally cooperate!!1"

But I think this is wrong. The fact that the world would be a better place if you could predict Omega's actions (and the fact that Omega knows this) doesn't give Omega the power to make you capable of predicting his actions, any more than it gives him the power to make your mom capable of predicting his actions, or to make a ladybug capable of predicting his actions, or another superintelligence capable of predicting his actions (although possibly that last one could to start with). He's in another room.

The fact that he knows what you're going to do means there's already been some information leakage, since even a superintelligence can't extrapolate, from the fact that your name is Jeff, what decision you'll make in a complicated game. He apparently knows quite a bit about you.

And if you knew ENOUGH about him, including his superhuman knowledge of yourself, and were smart enough to analyze the data (good luck), you'd be able to predict his actions too. But it seems disingenuous to even call that the prisoner's dilemma.

comment by Vladimir_Nesov · 2010-11-16T19:56:35.325Z · LW(p) · GW(p)

Also, since it doesn't have a soul, you have absolutely no reason to feel bad about its losses.

Huh?

Replies from: Jonii
comment by Jonii · 2010-11-16T20:36:29.109Z · LW(p) · GW(p)

Just an attempt to make it clear that we're dealing with something like an intelligent calculator here, with nothing in it that we'd find interesting or valuable in itself - setting this up as the true PD.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-11-21T01:07:18.826Z · LW(p) · GW(p)

Is that even well-defined? If I assert that I am a philosophical zombie in every sense of the term (lacking soul, qualia, and whatever other features you find relevant) does that mean you don't care about my losses?

Observers aren't ontologically fundamental entities, which is where you may be running into trouble.

Replies from: wedrifid
comment by wedrifid · 2010-11-21T01:35:29.361Z · LW(p) · GW(p)

I understood what he was trying to say.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-21T01:53:56.450Z · LW(p) · GW(p)

I understood what he was trying to say.

Everyone does; the problem is that the whole area of several steps around its literal meaning has serious problems. "But souls don't exist! But so what if someone doesn't have a soul tag, it's not morally relevant! But so what if the presence of souls influences empathy/eternal life/etc., this reason doesn't screen off other sources of moral value!" Only when you've gone all the way to "The other agent doesn't have moral value" does it start making sense, but then you should've just said so, instead of pretending an argument.

Replies from: Jonii, wedrifid
comment by Jonii · 2010-11-21T14:29:02.572Z · LW(p) · GW(p)

But I'd think that if I only said "It doesn't have moral value in itself", you'd still have to go back through similar steps to find the property cluster that we assign value to. I tried to convey both ideas by using the word soul and claiming lack of moral value.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-21T14:35:10.164Z · LW(p) · GW(p)

you'd still have to go back through similar steps to find the property cluster that we assign value to. I tried to convey both ideas by using the word soul and claiming lack of moral value.

What property cluster/why I'd need to find it/which both ideas?

Replies from: Jonii
comment by Jonii · 2010-11-21T15:19:44.260Z · LW(p) · GW(p)

Those properties that we think make happy humans better than totally artificial smiling humans mimicking happy humans. You'd need to find them in order to grasp what it means to have a being that lacked moral value, and "both ideas" refers to the distinct ways of explaining what sort of paperclip maximizer we're talking about.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-21T15:29:48.212Z · LW(p) · GW(p)

Those properties that we think make happy humans better than totally artificial smiling humans mimicking happy humans.

This I guessed.

You'd need to find them in order to grasp what it means to have a being that lacked moral value,

Why? "No moral value" has a clear decision-theoretic meaning, and referring to particular patterns that have moral value doesn't improve on that understanding. Also, the examples of things that have moral value are easy to imagine.

"both ideas" refers to the distinct ways of explaining what sort of paperclip maximizer we're talking about.

This I still don't understand. You'd need to name two ideas. My intuition at grasping the intended meaning fails me often. One relevant idea that I see is that the paperclip maximizer lacks moral value. What's the other, and how is it relevant?

comment by wedrifid · 2010-11-21T09:51:25.310Z · LW(p) · GW(p)

pretending an argument.

"Huh?"

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-21T13:58:19.031Z · LW(p) · GW(p)

What about it? Your perception of English says it's poorly-constructed, and I should rely less on my language intuition for such improvisation? Or is it unclear what I meant/why I believe so?

What is the purpose of saying "It doesn't have a soul", as opposed to "It doesn't have moral value"? The desired conclusion is the latter, but the deeply flawed former is spoken instead. I guess it's meant as an argument, appealing to existing intuitions and connotations that the word "soul" evokes. But because of its flaws, it's not actually a rational argument, so it only pretends to be one - a rhetorical device.

Replies from: wedrifid
comment by wedrifid · 2010-11-22T04:12:58.477Z · LW(p) · GW(p)

It just wasn't an argument at all or a rhetorical device of any kind. It was a redundant aside setting up a counterfactual problem. At worst it was a waste of a sentence and at best it made the counterfactual accessible to even those people without a suitably sophisticated reductionist philosophy.

(And, obviously, there was an implication that the initial 'huh?' verged on disingenuous.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-11-23T22:21:06.366Z · LW(p) · GW(p)

At worst it was a waste of a sentence and at best it made the counterfactual accessible to even those people without a suitably sophisticated reductionist philosophy.

Rhetorical device in exactly this sense: it communicates where just stating the intended meaning won't work ("people without a suitably sophisticated reductionist philosophy"). The problem is insignificant (but still present), and as a rhetorical device it could do some good.

comment by NihilCredo · 2010-11-15T23:46:49.480Z · LW(p) · GW(p)

I defect on both scenarios. I am confident enough of this choice that I feel less respect for someone who would cooperate on either (at least to the degree that I can assume they are sincere in saying so and not just signaling).

EDIT: After your changes to the article, I suspect the reason you don't think it's obvious is because you've been dodging around an essential element of the True PD - i.e. the "one-shot" part.

Replies from: Jonii, Jonii
comment by Jonii · 2010-11-16T00:22:45.761Z · LW(p) · GW(p)

I admit I'm taking "rational people playing a true, one-shot PD against beings as rational as they are will co-operate" for granted. I didn't think this was going to be an issue, and since I'm building on it as an axiom, things might look weird if you think that foundation is untrue. For this reason, I'm unsure whether this discussion should continue here. If you're alone with that opinion, I think it should take place elsewhere, but if many disagree with me on that basic level, I guess it should happen here.

Replies from: NihilCredo
comment by NihilCredo · 2010-11-16T00:24:33.519Z · LW(p) · GW(p)

I agree with this assessment of the situation.

Replies from: Manfred
comment by Manfred · 2010-11-16T05:12:12.617Z · LW(p) · GW(p)

The trouble is that cooperating is highly contingent on the other agent having heard of superrationality, or being smart enough to come up with the idea in five minutes, and it's highly contingent on the information available to both sides - if you don't think THEY think you know about superrationality (or are smart enough to think of it in five minutes), you shouldn't cooperate.

So, given most situations or most opponents I'd defect. Probably against the paperclip maximizer, too, since "Simple approximation of decision theory" doesn't sound too promisingly clever, particularly when evaluating beings like me.

comment by Jonii · 2010-11-16T10:30:20.763Z · LW(p) · GW(p)

The assumption about superrationality is now much more explicitly stated.

comment by shokwave · 2010-11-21T14:50:54.719Z · LW(p) · GW(p)

if they take into account how much the other cares about those results, using some unknown method, they just might be able to systematically perform better

Systematically perform better, but in this specific situation perform worse? That doesn't sound like a winning strategy on a one-shot dilemma. And the parenthetical patch doesn't seem to fix this problem: if the payoffs are randomly assigned in this one-shot case, and you get the short end of the stick - well, you know you aren't going to get another chance, because it's a one-shot case. Good luck trying to counterfactually mug the player here.

comment by Snowyowl · 2010-11-21T01:46:02.931Z · LW(p) · GW(p)

"You are also told some specifics about the algorithm that the alien uses to reach its decision, and likewise told that alien is told about as much about you."

If I know enough to see that my decision doesn't affect the alien's, I defect. If I don't know enough, I consider that the alien might know what my own algorithm is. Therefore I decide to cooperate if I think the alien will cooperate. I assume the alien knows this and that he knows that I know. Therefore I assume that the alien will cooperate because he thinks this will cause me to cooperate based on his knowledge of my thought processes (and CC is preferable to DD). Following the algorithm laid out above, I cooperate.

This is still just superrationality, though a little more advanced than usual. I have incomplete knowledge about my opponent's thought processes, so I assume the rest will be similar to mine; consequently, I choose the optimal symmetric strategy and hope he does the same.

comment by Bongo · 2010-11-17T19:01:06.751Z · LW(p) · GW(p)

If we can't handle this kind of reasoning, we lose billions of lives in the original True Prisoner's Dilemma, too. If I understood that post correctly, Eliezer was even hinting that we should just take the loss.

That's really unacceptable. I can't take a decision theory seriously unless I know it returns the winning answer in these problems.

Replies from: Jonii
comment by Jonii · 2010-11-17T19:54:34.876Z · LW(p) · GW(p)

It requires us to know, at the very least, what sort of utility function the other player has, and even then the result might at best be mutual defection - or, against superrational players, mutual co-operation.

Replies from: Snowyowl
comment by Snowyowl · 2010-11-21T01:49:58.225Z · LW(p) · GW(p)

Cooperation against superrational players is only optimal if you are superrational too, or if they know how you are going to play. If you know they are superrational but they don't know you aren't, you should defect.

Replies from: wedrifid
comment by wedrifid · 2010-11-26T14:20:44.683Z · LW(p) · GW(p)

Cooperation against superrational players is only optimal if you are superrational too, or if they know how you are going to play. If you know they are superrational but they don't know you aren't, you should defect.

I find this confusing. Not in the sense that I don't understand the gist of the meaning. Rather, it makes the concept of 'superrational' as used sound weird to me (which could perhaps be attributed to the word, not Snowyowl). In particular:

Cooperation against superrational players is only optimal if you are superrational too

What is this magical trait that I can have that can change what the optimal choice is for me to make, given a fixed, externally specified utility function?

Replies from: shokwave
comment by shokwave · 2010-11-26T14:30:12.350Z · LW(p) · GW(p)

What is this magical trait

Something along the lines of "when you cooperate, your opponent is forced to cooperate too".

The reason it is optimal is that it presents no chance of being defected against, and any situation where you are defected against is worse than every situation where the opponent cooperates.

Lacking this magical trait of superrationality, the chance of being defected against is drawn back in, which dramatically hurts the case for cooperating, making it less optimal.

Replies from: wedrifid
comment by wedrifid · 2010-11-26T14:42:43.463Z · LW(p) · GW(p)

Something along the lines of "when you cooperate, your opponent is forced to cooperate too".

That would make more sense, but it just isn't something that fits in that context.