The True Epistemic Prisoner's Dilemma
post by MBlume · 2009-04-19T08:57:02.580Z · LW · GW · Legacy · 72 comments
I spoke yesterday of the epistemic prisoner's dilemma, and JGWeissman wrote:
I am having some difficulty imagining that I am 99% sure of something, but I cannot either convince a person to outright agree with me or accept that he is uncertain and therefore should make the choice that would help more if it is right, but I could convince that same person to cooperate in the prisoner's dilemma. However, if I did find myself in that situation, I would cooperate.
To which I said:
Do you think you could convince a young-earth creationist to cooperate in the prisoner's dilemma?
And lo, JGWeissman saved me a lot of writing when he replied thus:
Good point. I probably could. I expect that the young-earth creationist has a huge bias that does not have to interfere with reasoning about the prisoner's dilemma.
So, suppose Omega finds a young-earth creationist and an atheist, and plays the following game with them. They will each be taken to a separate room, where the atheist will choose between each of them receiving $10000 if the earth is less than 1 million years old or each receiving $5000 if the earth is more than 1 million years old, and the young earth creationist will have a similar choice with the payoffs reversed. Now, with the prisoner's dilemma tied to the young earth creationist's bias, would I, in the role of the atheist, still be able to convince him to cooperate? I don't know. I am not sure how much the need to believe that the earth is around 5000 years old would interfere with recognizing that it is in his interest to choose the payoff for the earth being over a million years old. But still, if he seemed able to accept it, I would cooperate.
I make one small modification. You and your creationist friend are actually not that concerned about money, being distracted by the massive meteor about to strike the earth from an unknown direction. Fortunately, Omega is promising to protect limited portions of the globe, based on your decisions (I think you've all seen enough PDs that I can leave the numbers as an exercise).
It is this then which I call the true epistemic prisoner's dilemma. If I tell you a story about two doctors, even if I tell you to put yourself in the shoes of one, and not the other, it is easy for you to take yourself outside them, see the symmetry and say "the doctors should cooperate". I hope I have now broken some of that emotional symmetry.
As Omega led the creationist to the other room, you would (I know I certainly would) make a convulsive effort to convince him of the truth of evolution. Despite every pointless, futile argument you've ever had in an IRC room or a YouTube thread, you would struggle desperately, calling out every half-remembered fragment of Dawkins or Sagan you could muster, in the hope that just before the door shut, the creationist would hold it open and say "You're right, I was wrong. You defect, I'll cooperate -- let's save the world together."
But of course, you would fail. And the door would shut, and you would grit your teeth, and curse 2000 years of screamingly bad epistemic hygiene, and weep bitterly for the people who might die in a few hours because of your counterpart's ignorance. And then -- I hope -- you would cooperate.
72 comments
Comments sorted by top scores.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-19T13:52:20.385Z · LW(p) · GW(p)
As Omega led the creationist to the other room, you would (I know I certainly would) make a convulsive effort to convince him of the truth of evolution.
I could do that, but it seems simpler to make a convulsive effort to convince him that Omega, who clearly is no good Christian, almost certainly believes in the truth of evolution.
(Of course this is not relevant, but seemed worth pointing out. Cleverness is usually a dangerous thing, but in this case it seems worth dusting off.)
Replies from: JGWeissman, MBlume↑ comment by JGWeissman · 2009-04-19T16:18:27.507Z · LW(p) · GW(p)
For a less convenient world, suppose that the creationist perceives Omega as God, offering a miracle. Miracles can apparently include one person being saved from disaster that kills hundreds, so the fact that Omega doesn't just save everybody would not be compelling to the creationist.
Replies from: Alicorn↑ comment by MBlume · 2009-04-20T04:12:35.316Z · LW(p) · GW(p)
The assumption that the creationist actually buys "creationism is true iff omega believes it's true" is by far the weakest aspect of this scenario. As always, I just assume that Omega has some off-screen demonstration of his own trustworthiness that is Too Awesome To Show
(insert standard 'TV Tropes is horribly addictive' disclaimer here)
For the same reason, I've often wondered what a worldwide prediction market on theism would look like, if there was any possible way of providing payouts. Sadly, this is the closest I've seen.
comment by MrHen · 2009-04-19T12:58:01.401Z · LW(p) · GW(p)
It is this then which I call the true epistemic prisoner's dilemma. If I tell you a story about two doctors, even if I tell you to put yourself in the shoes of one, and not the other, it is easy for you to take yourself outside them, see the symmetry and say "the doctors should cooperate". I hope I have now broken some of that emotional symmetry.
As Omega led the creationist to the other room, you would (I know I certainly would) make a convulsive effort to convince him of the truth of evolution.
It seems like it would be wiser to forgo the arguments for evolution and spend your time talking about cooperating.
But of course, you would fail. And the door would shut, and you would grit your teeth, and curse 2000 years of screamingly bad epistemic hygiene, and weep bitterly for the people who might die in a few hours because of your counterpart's ignorance. And then -- I hope -- you would cooperate.
By the way, while we are adding direct emotional weight to this example, the real villain here is Omega. In all honesty, the Young Earth Creationist cannot be blamed for sending untold numbers to their death because of a bad belief. The bad belief has nothing to do with the asteroid and any moral link between the two should be placed on Omega.
Replies from: Cameron_Taylor↑ comment by Cameron_Taylor · 2009-04-19T14:37:59.491Z · LW(p) · GW(p)
By the way, while we are adding direct emotional weight to this example, the real villain here is Omega.
I do not concur.
In all honesty, the Young Earth Creationist cannot be blamed for sending untold numbers to their death because of a bad belief.
Yes he can be blamed, and I do. Humans have learned to harness false belief in the face of overwhelming evidence and wield it as a weapon far more effectively than teeth, claws and even clubs. While I once excused destructive behavior based on 'sincere belief that they were doing the right thing' I no longer do so.
The bad belief has nothing to do with the asteroid
No human can be blamed for the asteroid. They can be blamed for sending untold numbers to their death. The guy chose between 'save millions of people' or 'signal in-group status by advocating nonsense' and chose the latter. I really don't care whether he is a well-meaning innocent with a bad belief in the face of evidence or a Machiavellian agent on a hallucinogenic trip.
The bad belief has nothing to do with the asteroid and any moral link between the two should be placed on Omega.
I have no particular inclination to do that. I've been given no information about either Omega's incentives or his abilities in this situation. All I know is that he has arrived and offered to save millions of people in a somewhat bizarre manner. I'd prefer he saved everyone but better some be saved than the entire planet be obliterated.
Replies from: MrHen↑ comment by MrHen · 2009-04-19T17:59:54.055Z · LW(p) · GW(p)
Anything that has the ability to save untold billions, and will only do so if two particular individuals figure out how old the earth is, is evil. Or, at the very least, it does not have the best interests of humanity in mind.
To belabor the point: if Omega held his hands behind his back, asked you and me to guess whether the number of fingers he is holding up is odd or even, and saved lives if and only if we were correct, it would be the OP's example with the certainty dropped to 0. Would we be held to blame if we failed? Increasing our certainty does not increase our moral responsibility.
(Note) I think the formatting in your post may be off. The third quote looks like it may have too much included.
Replies from: randallsquared↑ comment by randallsquared · 2009-04-19T22:23:19.803Z · LW(p) · GW(p)
Anything that has the ability to save untold billions, and will only do so if two particular individuals figure out how old the earth is, is evil. Or, at the very least, it does not have the best interests of humanity in mind.
Since I'd say that evil is just having goals which are fundamentally incompatible with mine (or whoever is considering this), I don't think there's necessarily a difference between those two statements.
comment by Psychohistorian · 2009-04-19T22:16:55.377Z · LW(p) · GW(p)
And then -- I hope -- you would cooperate.
Why do you hope I'd let a billion people die (from a proposed quantification in another comment)?
This is actually rather different from a classic PD, in that C(C) is not the collectively desirable outcome.
Payoffs, written as You(Creationist):
D(D): 1 billion live
D(C): 3 billion live
C(D): 0 live
C(C): 2 billion live
Under the traditional PD, D(C) is best for you, but worst for him. Under this PD, D(C) is best for both of you. He wants you to defect and he wants to cooperate; he just doesn't know it. Valuing his utility does not save this as it does in the traditional PD. Assuming he's vaguely rational, he will end up happier if you choose to defect, regardless of his choice. Furthermore, he thinks you will be happier if he defects, so he has absolutely no reason to cooperate.
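To make the dominance argument concrete, here is a minimal sketch in Python (my own illustration, not part of Psychohistorian's comment) that derives the payoff table above from the per-choice pledges spelled out in AllanCrossman's comment further down, assuming the Earth is in fact old:

```python
# Minimal sketch (not from the original comment). Payoffs are in billions of
# lives saved; "D"/"C" are defect/cooperate for the atheist and the creationist.

def lives_saved(atheist_choice, creationist_choice, earth_is_old=True):
    saved = 0
    # Atheist: D saves 1 billion if the Earth is old, C saves 2 billion if it is young.
    if atheist_choice == "D" and earth_is_old:
        saved += 1
    if atheist_choice == "C" and not earth_is_old:
        saved += 2
    # Creationist: D saves 1 billion if the Earth is young, C saves 2 billion if it is old.
    if creationist_choice == "D" and not earth_is_old:
        saved += 1
    if creationist_choice == "C" and earth_is_old:
        saved += 2
    return saved

for you in "DC":
    for him in "DC":
        print(f"{you}({him}): {lives_saved(you, him)} billion")
# Prints D(D): 1, D(C): 3, C(D): 0, C(C): 2; whatever he picks, your D adds a billion.
```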
If only by cooperating can you guarantee his cooperation, you should do so. However the PD generally assumes such prior commitments are not possible. And unlike the traditional PD, C(C) does not lead to the best possible collective outcome. Thus, you should try your hardest to convince him to cooperate, then you should defect. He'll thank you for it when another billion people don't die.
The medical situation is more confusing because I don't think it's realistic. I sincerely doubt you would have two vaguely rational doctors who would both put 99% confidence on a diagnosis knowing that another doctor was at least 99% confident that that diagnosis was incorrect. Thus, you should both amend your estimates substantially downwards, and thus should probably cooperate. If you take the hypothetical at face value, it seems like you both should defect, even though again D(C) would be the optimal solution from your perspective.
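As a rough illustration of why both doctors should revise their estimates downward, here is a toy symmetric-evidence model (my own sketch under strong simplifying assumptions, not anything stated in the thread): treat your own 99% as prior odds and the other doctor's 99% for the opposite diagnosis as a likelihood ratio of equal strength.

```python
# Toy symmetric-evidence model (my own simplification): combine two opposing
# confidences by multiplying their odds ratios.

def combined_confidence(p_mine, p_other_for_opposite):
    odds = (p_mine / (1 - p_mine)) * ((1 - p_other_for_opposite) / p_other_for_opposite)
    return odds / (1 + odds)

print(round(combined_confidence(0.99, 0.99), 3))  # 0.5: equally strong confidences cancel
print(round(combined_confidence(0.99, 0.90), 3))  # 0.917: a weaker dissent shifts you less
```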
The real problem I'm having with some of these comments is that they assume my decision to defect or cooperate affects his decision, which does not seem to be a part of the hypothetical. Frankly I don't see how people can come to this conclusion in this context, given that it's a 1-shot game with a different collective payoff matrix than the traditional PD.
comment by AllanCrossman · 2009-04-19T10:53:21.028Z · LW(p) · GW(p)
I think you've all seen enough PDs that I can leave the numbers as an exercise
Actually, since this is an unusual setup, I think it's worth spelling out:
To the atheist, Omega gives two choices, and forces him to choose between D and C:
D. Omega saves 1 billion people if the Earth is old.
C. Omega saves 2 billion people if the Earth is young.
To the creationist, Omega gives two choices, and forces him to choose between D and C:
D. Omega saves an extra 1 billion people if the Earth is young.
C. Omega saves an extra 2 billion people if the Earth is old.
And then -- I hope -- you would cooperate.
No, I certainly wouldn't. I would however lie to the creationist and suggest that we both cooperate. I'd then defect, which, regardless of what he does, is still the best move. If I choose C then my action saves no lives at all, since the Earth isn't young.
My position on one-shot PDs remains that cooperation is only worthwhile in odd situations where the players' actions are linked somehow, such that my cooperating makes it more likely that he will cooperate; e.g. if we're artificial agents running the same algorithm.
Replies from: prase, Zvi↑ comment by prase · 2009-04-19T18:34:45.631Z · LW(p) · GW(p)
My position on one-shot PDs remains that cooperation is only worthwhile in odd situations where the players' actions are linked somehow, such that my cooperating makes it more likely that he will cooperate; e.g. if we're artificial agents running the same algorithm.
Agreed. In this situation, you can be very sure that the creationist runs a very different algorithm. Otherwise, he wouldn't be a creationist.
↑ comment by Zvi · 2009-04-19T11:43:29.613Z · LW(p) · GW(p)
Seems simple enough to me, too, as my answer yesterday implied. The probability the Earth is that young is close enough to 0 that it doesn't factor into my utility calculations, so Omega is asking me if I want to save a billion people. Do whatever you have to do to convince him, then save a billion people.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-04-19T12:30:49.772Z · LW(p) · GW(p)
With this attitude, you won't be able to convince him. He'll expect you to defect, no matter what you say. It's obvious to you what you'll do, and it's obvious for him. By refusing to save a billion people, and instead choosing the meaningless alternative option, you perform an instrumental action that results in your opponent saving 2 billion people. You control the other player indirectly.
Choosing the option other than saving 1 billion people doesn't have any terminal value, but it does have instrumental value, more of it than there is in directly saving 1 billion people.
This is not to say that you can place this kind of trust easily; for humans you may indeed require making a tangible precommitment. Humans are by default broken: in some situations you don't expect the right actions from them, the way you don't expect the right actions from rocks. An external precommitment is a crutch that compensates for the inborn ailments.
Replies from: Zvi, AllanCrossman, Nick_Tarleton↑ comment by Zvi · 2009-04-19T13:02:18.329Z · LW(p) · GW(p)
What makes us assume this? I get why in examples where you can see each other's source code this can be the case, and I do one-box on Newcomb where a similar situation is given, but I don't see how we can presume that there is this kind of instrumental value. All we know about this person is that he is a flat earther, and I don't see how this corresponds to such efficient lie detection in both directions for both of us.
Obviously if we had a tangible precommitment option that was sufficient when a billion lives were at stake, I would take it. And I agree that if the payoffs were 1 person vs. 2 billion people on both sides, this would be a risk I'd be willing to take. But I don't see how we can suppose that the correspondence between "he thinks I will choose C if he agrees to choose C, and in fact then chooses C" and "I actually intend to choose C if he agrees to choose C" is all that high. If the flat Earther in question is the person on whom they based Dr. Cal Lightman, I still don't choose C, because I'd feel that even if he believed me he'd probably choose D anyway. Do you think most humans are this good at lie detection (I know that I am not), and if so, do you have evidence for it?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-04-19T13:39:31.961Z · LW(p) · GW(p)
I get why in examples where you can see each other's source code this can be the case, and I do one-box on Newcomb where a similar situation is given, but I don't see how we can presume that there is this kind of instrumental value. All we know about this person is that he is a flat earther, and I don't see how this corresponds to such efficient lie detection in both directions for both of us.
What does the source code really impart? Certainty in the other process' workings. But why would you need certainty? Is being a co-operator really so extraordinary a claim that to support it you need overwhelming evidence that leaves no other possibilities?
The problem is that there are three salient possibilities for what the other player is:
- Defector, who really will defect, and will give you evidence of being a defector
- Co-operator, who will really cooperate (with another who he believes to be a co-operator), and will give you evidence of being a co-operator
- Deceiver, who will really defect, but will contrive evidence that he is a co-operator
Between co-operator and deceiver, all else equal, you should expect the evidence given by co-operator to be stronger than evidence given by deceiver. Deceiver has to support a complex edifice of his lies, separate from reality, while co-operator can rely on the whole of reality for support of his claims. As a result, each argument a co-operator makes should on average bring you closer to believing that he really is a co-operator, as opposed to being a deceiver. This process may be too slow to shift your expectation from the prior of very strongly disbelieving in existence of co-operators to posterior of believing that this one is really a co-operator, and this may be a problem. But this problem is only as dire as the rarity of co-operators and the deceptive eloquence of deceivers.
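A toy numerical illustration of this point (mine, with made-up numbers): arguments that are each only slightly more likely to come from a genuine co-operator than from a deceiver will, over repeated Bayesian updates, overwhelm even a skeptical prior, though slowly.

```python
# Toy Bayesian-update illustration (my own numbers, purely hypothetical).

def posterior_cooperator(prior, likelihood_ratio, n_arguments):
    """Posterior P(co-operator) after n_arguments independent updates."""
    odds = prior / (1 - prior)
    odds *= likelihood_ratio ** n_arguments
    return odds / (1 + odds)

prior = 0.01            # hypothetical: you strongly disbelieve in co-operators
likelihood_ratio = 1.3  # hypothetical: each argument favors "co-operator" 1.3 : 1
for n in (1, 5, 20):
    print(n, round(posterior_cooperator(prior, likelihood_ratio, n), 3))
# Roughly 0.013, 0.036, 0.658: the evidence wins eventually, but not quickly.
```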
Replies from: Zvi↑ comment by Zvi · 2009-04-19T15:52:45.585Z · LW(p) · GW(p)
We clearly disagree strongly on the probabilities here. I agree that, all things being equal, you have a better shot at convincing him than I do, but I think the advantage is small. We both do the same thing in the Defector case. In the co-operator case, he believes you with probability P+Q and me with probability P. Assuming you know whether he trusts you in this case (we count anything else as deceivers), you save (P+Q)·2 + (1-P-Q)·1 and I save P·3 + (1-P)·1, both times the percentage of co-operators R. So you have to be at least twice as successful as I am even if there are no deceivers on the other side. Meanwhile, there's some percentage A who are deceivers and some probability B that you'll believe a deceiver, or just A and 1 if you count anyone you don't believe as a simple Defector.
You think that R·(P+Q)·2 + R·(1-P-Q)·1 > R·P·3 + R·(1-P)·1 + A·B·1. I strongly disagree. But if you convinced me otherwise, I would change my opinion.
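For concreteness, here is a small sketch (my own, with arbitrary example probabilities) that evaluates the two sides of this inequality; P, Q, R, A, B are the symbols above, and the numeric values are purely illustrative.

```python
# Sketch of the expected-lives comparison above, in billions; the A * B term is
# placed on the defect side exactly as written in the comment.

def cooperate_side(P, Q, R):
    # Genuine co-operator: believed with probability P + Q.
    return R * ((P + Q) * 2 + (1 - P - Q) * 1)

def defect_side(P, R, A, B):
    # Persuade-then-defect: believed with probability P.
    return R * (P * 3 + (1 - P) * 1) + A * B * 1

P, Q, R, A, B = 0.2, 0.1, 0.5, 0.3, 0.4   # hypothetical values, not Zvi's
print(round(cooperate_side(P, Q, R), 2))  # 0.65
print(round(defect_side(P, R, A, B), 2))  # 0.82: with these numbers the inequality fails
```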
Replies from: saturn, Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-04-21T00:00:44.674Z · LW(p) · GW(p)
In the co-operator case, he believes you with probability P+Q and me with probability P.
That may be for one step, but my point is that the truth ultimately should win over lies. If you proceed to the next point of argument, you expect to distinguish Cooperator from Defector a little bit better, and as the argument continues, your ability to distinguish the possibilities should improve more and more.
The problem may be that it's not a fast enough process, but not that there is some fundamental limitation on how good the evidence may get. If you study the question thoroughly, you should be able to move long way away from uncertainty in the direction of truth.
↑ comment by AllanCrossman · 2009-04-19T12:48:21.720Z · LW(p) · GW(p)
By refusing to save a billion people, and instead choosing the meaningless alternative option, you perform an instrumental action that results in your opponent saving 2 billion people.
How does it do that, please? How does my action affect his?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-04-19T12:52:53.051Z · LW(p) · GW(p)
How does it do that, please?
Maybe it's not enough; maybe you need to do more than just doing the right thing. But if you actually plan to defect, you have no hope of convincing the other player that you won't. (See the revised last paragraph of the above comment.)
Replies from: AllanCrossman↑ comment by AllanCrossman · 2009-04-19T12:53:59.463Z · LW(p) · GW(p)
if you actually plan to defect, you have no hope of convincing the other player that you won't
Why? My opponent is not a mind-reader.
An external precommitment is a crutch that compensates for the inborn ailments.
Yes, if we can both pre-commit in a binding way, that's great. But what if we can't?
Replies from: Vladimir_Nesov, Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-04-19T13:09:22.986Z · LW(p) · GW(p)
Yes, if we can both pre-commit in a binding way, that's great. But what if we can't?
I feel that this is related to the intuitions on free will. When a stone is thrown your way, you can't change what you'll do: you'll either duck, or you won't. If you duck, it means that you are a stone-avoider, a system that has a property of avoiding stones, that processes data indicating the fact that a stone is flying your way, and transforms it into the actions of impact-avoiding.
The precommitment is only useful because [you+precommitment] is a system with a known characteristic of co-operator, that performs cooperation in return to the other co-operators. What you need in order to arrange mutual cooperation is to signal the other player that you are a co-operator, and to make sure that the other player is also a co-operator. Signaling the fact that you are a co-operator is easy if you attach a precommitment crutch to your natural decision-making algorithm.
Since co-operators win more than mutual defectors, being a co-operator is rational, and so it's often just said that if you and your opponent are rational, you'll cooperate.
There is a stigma of being just human, but I guess some kind of co-operator certification or a global meta-commitment of reflective consistency could be arranged to both signal that you are now a co-operator and enforce actually making co-operative decisions.
Replies from: cousin_it↑ comment by cousin_it · 2009-04-21T14:47:48.213Z · LW(p) · GW(p)
Instead of answering AllanCrossman's question, you have provided a stellar example of how scholasticism turns brains to mush. Read this.
Update 2: maybe, to demonstrate my point, I should quote some hilarious examples of faulty thinking from the article I linked to. Here we go:
19 Three is not an object at all, but an essence; not a thing, but a thought; not a particular, but a universal.
28 The number three is neither an idle Platonic universal, nor a blank Lockean substratum; it is a concrete and specific energy in things, and can be detected at work in such observable processes as combustion.
32 Since the properties of three are intelligible, and intelligibles can exist only in the intellect, the properties of three exist only in the intellect.
35 We get the concept of three only through the transcendental unity of our intuitions as being successive in time.
Ring any bells?
Replies from: orthonormal, Vladimir_Nesov↑ comment by orthonormal · 2009-04-21T16:28:09.597Z · LW(p) · GW(p)
If you think Vladimir is being opaque with his writing, and you disagree with his conclusion, that is not the same as asserting that he's writing nonsense. Charity (and the evidence of his usual clarity) demand that you ask for clarification before accusing him of such.
↑ comment by Vladimir_Nesov · 2009-04-21T16:24:27.457Z · LW(p) · GW(p)
Instead of answering AllanCrossman's question, you have provided a stellar example of how scholasticism turns brains to mush.
Actually, I thought that I made a relatively clear argument, and I'm surprised that it's not upvoted (the same goes for the follow-up here). Maybe someone could constructively comment on why that is. I expect that the argument is not easy to understand, and maybe I failed at seeing the inferential distance between my argument and intended audience, so that people who understood the argument already consider it too obvious to be of notice, and people who disagree with the conclusion didn't understand the argument... Anyway, any constructive feedback on meta level would be appreciated.
On the concept of avoiders, see Dennett's lecture here. Maybe someone can give a reference in textual form.
Replies from: cousin_it↑ comment by cousin_it · 2009-04-21T19:26:36.529Z · LW(p) · GW(p)
Uh...
AllanCrossman asked: what if we can't precommit?
You answered: it's good to be able to precommit, maybe we can still arrange it somehow.
Thus simplified, it doesn't look like an answer. But you didn't say it in simple words. You added philosophical fog that, when parsed and executed, completely cancels out, giving us no indication how to actually precommit.
Disagree?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-04-22T01:32:18.716Z · LW(p) · GW(p)
My reply can be summarized as explaining why "precommitting in a binding way" is not a clear-cut necessity for this problem. If you are a cooperator, there is no need to precommit.
Replies from: cousin_it↑ comment by cousin_it · 2009-04-22T08:52:31.777Z · LW(p) · GW(p)
In your terms, being a cooperator for this specific problem is synonymous with precommitting. You're just shunting words around. All right, how do I actually be a cooperator?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-04-22T13:39:22.568Z · LW(p) · GW(p)
No, it's not synonymous. If you precommit, you become a cooperator, but you can also be one without precommitting. If you are an AI that is written to be a cooperator, you'll be one. If you decide to act as a cooperator, you may be one. Being a cooperator is relatively easy. Being a cooperator and successfully signaling that you are one, without precommitment, is in practice much harder. And there is a related problem: if you are a cooperator, you have to recognize a signal that the other person is a cooperator also, which may be too hard if he hasn't precommitted.
Replies from: cousin_it↑ comment by cousin_it · 2009-04-22T13:54:13.775Z · LW(p) · GW(p)
but you can also be one without precommitting
What? The implication goes both ways. If you're a cooperator (in your terms), then you're precommitted to cooperating (in classical terms). Maybe you misunderstand the word "precommitment"? It doesn't necessarily imply that some natural power forces the other guy to believe you.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-04-22T15:00:45.024Z · LW(p) · GW(p)
If you define precommitment this way, then every property becomes a precommitment to having that property, and the concept of precommitment becomes tautological. For example, is it a precommitment to always prefer good over evil (defined however you like)?
Replies from: cousin_it↑ comment by cousin_it · 2009-04-22T15:41:09.226Z · LW(p) · GW(p)
Not every property. Every immutable property. They're very rare. Your example isn't a precommitment because it's not immutable.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-04-22T15:46:07.283Z · LW(p) · GW(p)
What's "mutable"? Changing in time? Cooperation may be a one-off encounter, with no multiple occasions to change over. You may be a cooperator for the duration of one encounter, and a rock elsewhere. Every fact is immutable, so I don't know what you imply here.
Replies from: cousin_it↑ comment by cousin_it · 2009-04-22T16:24:32.225Z · LW(p) · GW(p)
Yes, mutable means changing in time.
Precommitment is an interaction between two different times: the time when you're doing cheap talk with the opponent, and the time when you're actually deciding in the closed room. The time you burn your ships, and the time your troops go to battle. Signaling time and play time. If a property is immutable (preferably physically immutable) between those two times, that's precommitment. Sounds synonymous to your "being a cooperator" concept.
Replies from: Vladimir_Nesov, thomblake↑ comment by Vladimir_Nesov · 2009-04-22T16:50:18.853Z · LW(p) · GW(p)
In other words, my point is that if the signaling is about your future property, then at the moment when you have to perform the promised behavior there is no need for any kind of persistence; thus, according to your definition, precommitment is unnecessary. Likewise, the signaling doesn't need to consist in you presenting any kind of argument; it may already be known that you are (or will be) a cooperator.
For example, the agent in question may be selected from a register of cooperators, where 99% of them are known to be cooperators. And the cooperators themselves might well be humans who decided to follow this counterintuitive algorithm, and who benefit from doing so when interacting with other known cooperators, without any tangible precommitment system in place and no punishment for not being cooperators. This example may be implemented through a reputation system.
Replies from: cousin_it↑ comment by cousin_it · 2009-04-22T17:07:53.077Z · LW(p) · GW(p)
No such thing as future property. This isn't a factual disagreement on my part, just a quibble over terms; disregard it.
Your example isn't about signaling or precommitment, it's changing the game into multiple-shot, modifying the agent's utility function in an isolated play to take into account their reputation for future plays. Yes, it works. But doesn't help much in true one-shot (or last-play) situations.
On the other hand, the ideal platonic PD is also quite rare in reality - not as rare as Newcomb's, but still. You may remember us having an isomorphic argument about Newcomb's some time ago, with roles reversed - you defending the ideal platonic Newcomb's Problem, and me questioning its assumptions :-)
Me, I don't feel moral problems defecting in the pure one-shot PD. Some situations are just bad to be in, and the best way out is bad too. Especially situations where something terribly important to you is controlled by a cold uncaring alien entity, and the problem has been carefully constructed to prohibit you from manipulating it (Eliezer's "true PD").
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-04-22T17:16:47.659Z · LW(p) · GW(p)
No such thing as future property. This isn't a factual disagreement on my part, just a quibble over terms; disregard it.
In what sense do you mean no such thing? Clearly, there are future properties. My cat has a property of being dead in the future.
Your example isn't about signaling or precommitment, it's changing the game into multiple-shot, modifying the agent's utility function in an isolated play to take into account their reputation for future plays. Yes, it works. But doesn't help much in true one-shot (or last-play) situations.
Yes, it was just an example of how to set up cooperation without precommitment. It's clear that signaling being a one-off cooperator is a very hard problem, if you are only human and there are no Omegas flying around.
Replies from: cousin_it↑ comment by cousin_it · 2009-04-22T17:18:51.894Z · LW(p) · GW(p)
My cat has a property of being dead in the future.
Not with probability one, it doesn't.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-04-22T17:21:31.151Z · LW(p) · GW(p)
Not with probability one, it doesn't.
This doesn't place the future in a privileged position. Even though I'm certain I saw my cat 10 minutes ago, it wasn't alive a week ago with probability one, either.
Replies from: cousin_it↑ comment by thomblake · 2009-04-22T16:32:19.572Z · LW(p) · GW(p)
My answer to this would be that people have dispositions to behavior, and these dispositions color everything we do. If one might profit by showing courage, a coward will not do as well as a courageous man.
Of course, the relative success of such people at faking in appropriate situations is perhaps an empirical question.
ETA: this makes less sense as a direct response since you edited your comment. However, I think the difference is that "being a cooperator" regards a disposition that is part of the sort of person you are (though I think the above comment uses it more narrowly as a disposition that might only affect this one action), while a precommitment... well, I'm not sure actual people really do have those, if they're immutable.
↑ comment by Vladimir_Nesov · 2009-04-19T12:55:28.428Z · LW(p) · GW(p)
My opponent is not a mind-reader.
He is no fool either.
Replies from: AllanCrossman↑ comment by AllanCrossman · 2009-04-19T12:58:21.682Z · LW(p) · GW(p)
He is no fool either.
I don't understand.
You need to make it clear how my intention to defect or my intention to cooperate influences the other guy's actions, even if what I say to him is identical in both cases. Assume I'm a good liar.
↑ comment by Nick_Tarleton · 2009-04-20T01:20:45.001Z · LW(p) · GW(p)
With this attitude, you won't be able to convince him. He'll expect you to defect, no matter what you say.
Um... are you asserting that deception between humans is impossible?
comment by steven0461 · 2009-04-21T15:25:33.157Z · LW(p) · GW(p)
you would cooperate
As I understand it, to the extent that it makes sense to cooperate, the thing that cooperates is not you, but some sub-algorithm implemented in both you and your opponent. Is that right? If so, then maybe by phrasing it in this way we can avoid philosophers balking.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2009-04-21T16:57:22.221Z · LW(p) · GW(p)
As I understand it, to the extent that it makes sense to cooperate, the thing that cooperates is not you, but some sub-algorithm implemented in both you and your opponent.
It has to add up to normality, there should be a you somewhere. If each time you act on your better judgment over gut instinct it is "not you" that does the acting, why is it invited in your mind? Is the whole of deliberate reasoning not you?
In my book, when future-you fights a previously made informed commitment, then it is a case where future-you is not you anymore, where it stops caring about your counterfactuals. Not when the future-you remains reflectively consistent.
But possibly, this reflectively consistent creature can't be a person anymore, and is not what we'd like to be, with our cognitive ritual morally significant after all, a thing to protect in itself.
comment by rwallace · 2009-04-19T22:28:00.871Z · LW(p) · GW(p)
I will point out to the defectors that the scenario described is no more plausible than creationism (after all it involves a deity behaving even more capriciously than the creationist one). If we postulate that your fictional self is believing in the scenario, surely your fictional self should no longer be quite so certain of the falsehood of creationism?
Replies from: jimmy, Lightwave↑ comment by Lightwave · 2009-04-20T01:28:41.848Z · LW(p) · GW(p)
In this scenario you can actually replace Omega with a person (e.g. a mad scientist or something), who just happens to be the only one who has, say, a cure for the disease which is about to kill a couple of billion people.
Replies from: rwallace
comment by Lightwave · 2009-04-19T17:40:32.789Z · LW(p) · GW(p)
Given the stakes, it seems to me the most rational thing to do here is to try to convince the other person that you should both cooperate, and then defect.
The difference between this dilemma and Newcomb is that Newcomb's Omega predicts perfectly which box you'll take, whereas the Creationist cannot predict whether you'll defect or not.
The only way you can lose is if you screw up so badly at trying to convince him to cooperate (i.e. you're a terrible liar or bad at communicating in general and confuse him) that he's instead convinced he should defect now. So the biggest factor when deciding whether to cooperate or defect should be your ability to convince.
Replies from: Simulacra↑ comment by Simulacra · 2009-04-19T20:01:50.072Z · LW(p) · GW(p)
If you don't think you could convince him to cooperate then you still defect because he will, and if you cooperate 0 people are saved. Cooperating generates either 0 or 2 billion saved, defecting generates either 1 or 3 billion saved. Defect is clearly the better option.
If you were going to play 100 rounds for 10 or 20 million lives each, cooperate by all means. But in a single-round PD, defect is the winning choice (assuming the payout is all that matters to you; if your utility function cares about the other person's feelings towards you after the choice, cooperate can become the highest utility).
comment by ChrisHibbert · 2009-04-20T20:07:55.559Z · LW(p) · GW(p)
The Standard PD is set up so there are only two agents and only their choices and values matter. I tend to think of rationality in these dilemmas as being largely a matter of reputation, even when the situation is circumscribed and described as one-shot. Hofstadter's concept of super-rationality is part of how I think about this. If I have a reputation as someone who cooperates when that's the game-theoretically optimal thing to do, then it's more likely that whoever I've been partnered with will expect that from me, and cooperate if he understands why that strategy works.
Since it would buttress that reputation, I keep hoping that rationalists, generally, would come to embrace some interpretation of super-rationality, but I keep seeing self-professed rationalists whose choices seem short-sightedly instrumentalist to me.
But this seems to be a completely different situation. Rather than attempting to cooperate with someone who I should assume to be my partner, and who has my interests at heart, I'm asked to play a game with someone who doesn't reason the way I do, and who explicitly mistrusts my reasoning. In addition, the payoff isn't to me and the other player, the payoff is to a huge number of uninvolved other people. MBlume seems to want me to think of it in terms of something valuable in my preference ranking, but he's actually set it up so that it's not a prisoner's dilemma, it's a hostage situation in which I have a clearly superior choice, and an opportunity to try to convince someone whose reasoning is alien to my own.
I defect. I do my best to convince my friend that the stakes are too high to justify declaring his belief in god. So you can get me to defect, but only by setting up a situation in which my allies aren't sitting on the other side of the bargaining table.
comment by spriteless · 2009-04-20T04:30:22.949Z · LW(p) · GW(p)
The young Earth creationist is right, because the whole earth was created in a simulation by Omega that took about 5000 years to run.
You can't win with someone that much smarter than you. I don't see how this means anything but 'it's good to have infinite power, computational and otherwise.'
comment by Nick_Tarleton · 2009-04-20T01:02:07.801Z · LW(p) · GW(p)
the atheist will choose between each of them receiving $5000 if the earth is less than 1 million years old or each receiving $10000 if the earth is more than 1 million years old
Isn't this backwards? The dilemma occurs if payoff(unbelieved statement) > payoff(believed statement).
Replies from: orthonormal↑ comment by orthonormal · 2009-04-20T22:57:14.680Z · LW(p) · GW(p)
It's most definitely a typo, but we all know what the payoff matrix is supposed to be.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-04-20T23:04:50.489Z · LW(p) · GW(p)
I actually wasn't sure until I saw Allan Crossman's comment, though if that hadn't been there I probably would've been able to figure it out with a bit more effort.
Replies from: JGWeissman↑ comment by JGWeissman · 2009-04-21T01:06:49.859Z · LW(p) · GW(p)
Yes, it was a typo. I have fixed the original comment.
comment by RichardChappell · 2009-04-20T00:45:00.282Z · LW(p) · GW(p)
And then -- I hope -- you would cooperate.
This is to value your own "rationality" over that which is to be protected: the billion lives at stake. (We may add: such a "rationality" fetish isn't really rational at all.) Why give us even more to weep about?
Replies from: orthonormal, RichardChappell↑ comment by orthonormal · 2009-04-20T23:18:41.752Z · LW(p) · GW(p)
I can see how it looks to you as if MBlume's strategy prizes his ritual of cognition over that which he should protect— but be careful and charitable before you sling that accusation around here. This is a debate with a bit of a history on LW.
If you can't convince the creationist of evolution in the time available, but there is a way for both of you to bindingly precommit, it's uncontroversial that (C,C) is the lifesaving choice, because you save 2 billion rather than 1.
The question is whether there is a general way for quasi-rational agents to act as if they had precommitted to the Pareto equilibrium when dealing with an agent of the same sort. If they could do so and publicly (unfakeably) signal as much, then such agents would have an advantage in general PDs. A ritual of cognition such as this is an attempt to do just that.
EDIT: In case it's this ambiguity, MBlume's strategy isn't "cooperate in any scenario", but "visibly be the sort of person who can cooperate in a one-shot PD with someone else who also accepts this strategy, and try and convince the creationist to think the same way". If it looks like the creationist will try to defect, MBlume will defect as well.
Replies from: RichardChappell↑ comment by RichardChappell · 2009-04-21T03:59:56.343Z · LW(p) · GW(p)
In case it's this ambiguity, MBlume's strategy isn't "cooperate in any scenario"
Ah. It did look to me as though he was suggesting that. For, after describing how we would try to convince the creationist to cooperate (by trying to convince them of their epistemic error), he writes:
But of course, you would fail. And the door would shut, and you would grit your teeth, and curse 2000 years of screamingly bad epistemic hygiene, and weep bitterly for the people who might die in a few hours because of your counterpart's ignorance.
I read this as suggesting that we would fail to convince the creationist to cooperate. So we would weep for all the people that would die due to their defection. In that case, to suggest that we ought to co-operate nonetheless would seem futile in the extreme -- hence my comment about merely adding to the reasons to weep.
But I take it your proposal is that MBlume meant something else: not that we would fail to convince the creationist to co-operate, but rather that we would fail to convince them to let us defect. That would make more sense. (But it is not at all clear from what he wrote.)
Replies from: orthonormal↑ comment by orthonormal · 2009-04-21T16:20:44.696Z · LW(p) · GW(p)
I read this as suggesting that we would fail to convince the creationist to cooperate. So we would weep for all the people that would die due to their defection.
I read it as saying that if the creationist could have been convinced of evolution, then 3 billion rather than 2 billion could have been saved; after the door shuts, MBlume then follows the policy of "both cooperate if we still disagree" that he and the creationist both signaled they were genuinely capable of.
(But it is not at all clear from what he wrote.)
I have to agree— MBlume, you should have written this post so that someone reading it on its own doesn't get a false impression. It makes sense within the debate, and especially in context of your previous post, but is very ambiguous if it's the first thing one reads.
There's perhaps one more source of ambiguity: the distinction between
- the assertion that "cooperate without communication, given only mutual knowledge of complete rationality in decision theory" is part of the completely rational decision theory, and
- the discussion of "agree to mutually cooperate in such a fashion that you each unfakeably signal your sincerity" as a feasible PD strategy for quasi-rational human beings.
If all goes well, I'd like to post on this myself soon.
↑ comment by RichardChappell · 2009-04-20T23:09:24.068Z · LW(p) · GW(p)
(Negative points? Anyone care to explain?)
Replies from: MrHen↑ comment by MrHen · 2009-04-20T23:25:16.854Z · LW(p) · GW(p)
(Negative points? Anyone care to explain?)
I did not vote one way or the other, but if I had to vote I would vote down. Reasoning below.
This is to value your own "rationality" over that which is to be protected: the billion lives at stake.
"Rationality", as best as I can tell, is pointing toward the belief that cooperating is the rationalistic approach to the example. Instead of giving a reason that it is not rational you dismiss it out of hand. This is not terribly useful to the discussion.
If it is actually pointing to the player's beliefs about the age of the universe, then the statement also suffers from ambiguity.
(We may add: such a "rationality" fetish isn't really rational at all.)
This is somewhat interesting but not really presented in a manner that makes it discussible. It basically says the same thing as the sentence before it but adds loaded words.
"Why give us even more to weep about?" implies that you may have missed the entire point of the original article. The point was that it is rational to cooperate even though you are weeping. The explanation is given in the previous post. Your comment simply states that you disagree but do not address the author's reasonings and do not give reasonings of your own.
If I had to rewrite your post I would probably do something like this:
Choosing to cooperate because it could result in a larger outcome is not rationality since the other player is not likely to do the same. Doing it anyway because you are "supposed" to cooperate in a prisoner's dilemma just sends billions of people to their death.
You would also have to give a good excuse for the other player not cooperating. I think a few others around here have presented some.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-04-20T23:43:21.444Z · LW(p) · GW(p)
The point was that it is rational to cooperate even though you are weeping. The explanation is given in the previous post.
I am far from convinced that superrationality applies between you or me and humans in general, let alone humans with epistemic processes that permit them to be creationists. At least, it's obvious that my decision process is not correlated in any relevant sense with that of someone who hasn't heard/thought of or doesn't subscribe to superrationality.
Replies from: JGWeissman, gwern↑ comment by JGWeissman · 2009-04-21T01:36:19.024Z · LW(p) · GW(p)
Keep in mind, this creationist, despite his epistemic problems, has managed so far not to die from believing that "when they drink deadly poison, it will not hurt them at all". Maybe he has some rationality you can work with, maybe even enough that he thinks that saving an extra billion lives is worth cooperating with an atheist (so long as the atheist is likewise rational enough to save an extra billion lives by cooperating with a creationist).
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-04-21T02:37:39.920Z · LW(p) · GW(p)
Keep in mind, this creationist, despite his epistemic problems, has managed so far not to die from believing that "when they drink deadly poison, it will not hurt them at all".
Not killing yourself in such grossly obvious ways is pretty easy (very few people, even creationists, let their abstract beliefs control their actions that much), and doesn't seem at all relevant to me.
maybe even enough that he thinks that saving an extra billion lives is worth cooperating with an atheist
I'm sure he already thinks that, not being an alien monster and all – his problem is epistemic, not moral.
(so long as the atheist is likewise rational enough to save an extra billion lives by cooperating with a creationist)
So long as the creationist thinks that, you mean. Again, he's almost certainly not aware of superrationality, so I should persuade him to cooperate however I can, then defect. (Modulo the possibility that privately precommitting to cooperate could make me more persuasive, but on casual introspection I doubt I could actually do that.)
In the unlikely event the creationist is superrational, I expect we'd both start out trying to persuade each other, so we could notice the symmetry, mutually determine that we're superrational (since causal decision theorists could also start out persuading), and both cooperate (resulting in a worse outcome than if he hadn't been superrational).
Replies from: JGWeissman↑ comment by JGWeissman · 2009-04-21T03:35:49.656Z · LW(p) · GW(p)
Not killing yourself in such grossly obvious ways is pretty easy (very few people, even creationists, let their abstract beliefs control their actions that much), and doesn't seem at all relevant to me.
You seriously think that the fact that the creationist doesn't let his abstract belief control his actions is not relevant to the question of whether he will let his abstract belief control his actions? The point is, he has ways of overcoming the foolishness of his beliefs when faced with an important problem.
I'm sure he already thinks that, not being an alien monster and all
So, if you agree he would be willing to cooperate with an atheist, why would he not cooperate by exchanging his choice for the higher payoff in the event that the atheist is right for the atheist's choice for the higher payoff in the event the creationist is right? Recognizing a Pareto improvement is not hard even if one has never heard of Pareto.
In the unlikely event the creationist is superrational ...
It seems you are prepared to recognize this. Are you also prepared to recognize that he did not start out superrational, but is persuaded by your arguments?
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2009-04-21T04:10:55.350Z · LW(p) · GW(p)
You seriously think that the fact that the creationist doesn't let his abstract belief control his actions is not relevant to the question of whether he will let his abstract belief control his actions?
I think that the fact that he doesn't let his abstract belief cause him to drink poison, when everyone around him with the same abstract belief obviously doesn't drink poison, when common sense (poison is bad for you) opposes the abstract belief, and when the relevant abstract belief probably occupies very little space in his mind* is of little relevance to whether he will let an abstract belief that is highly salient and part of his identity make him act in a way that isn't nonconforming and doesn't conflict with common sense.
*If any; plenty of polls show Christians to be shockingly ignorant of the Bible, something many atheists seem to be unaware of.
So, if you agree he would be willing to cooperate with an atheist, why would he not cooperate by exchanging his choice for the higher payoff in the event that the atheist is right for the atheist's choice for the higher payoff in the event the creationist is right? Recognizing a Pareto improvement is not hard even if one has never heard of Pareto.
No doubt he would, which is why I would try to persuade him, but he is not capable of discerning what action I'll take (modulo imperfect deception on my part, but again I seriously doubt I could do better by internally committing), nor is his decision process correlated with mine.
It seems you are prepared to recognize this. Are you also prepared to recognize that he did not start out superrational, but is persuaded by your arguments?
I would rather persuade him to cooperate but not to be superrational (allowing the outcome to be D/C) than persuade him to be superrational (forcing C/C), and I doubt the latter would be easier.
(Caveat: I'm not entirely sure about the case where the creationist is not superrational, but knows me very well.)
Replies from: JGWeissman↑ comment by JGWeissman · 2009-04-21T05:02:25.673Z · LW(p) · GW(p)
The creationist does not have to contradict his belief about the age of the earth to cooperate. He only needs to recognize that the way to get the best result given his belief is to exchange cooperation for cooperation, using common sense (saving 2 billion people given that the earth is young is better than saving 1 billion people given that the earth is young). Yes, understanding the prisoner's dilemma is harder than understanding poison is bad, but it is still a case where common sense should overcome a small bias, if there is one at all. You might have some work to convince the creationist that his choice does not need to reflect his belief, just as your choice to cooperate would not indicate that you actually believe the earth is young.
I would rather persuade him to cooperate but not to be superrational (allowing the outcome to be D/C) than persuade him to be superrational (forcing C/C), and I doubt the latter would be easier.
Why is he going to cooperate unless you offer to cooperate in return? Unless you actually convinced him to reject young earth creationism, he would see that as saving 0 people instead of 1 billion. Or do you intend to trick him into believing that you would cooperate? I don't think I could do that; I would have to be honest to be convincing.
↑ comment by gwern · 2012-04-16T20:15:06.396Z · LW(p) · GW(p)
For those not familiar with superrationality, see http://www.gwern.net/docs/1985-hofstadter
comment by Nominull · 2009-04-19T22:28:16.476Z · LW(p) · GW(p)
My thinking is, if you are stupid (or ignorant, or irrational, or whatever) enough to be a creationist, you are probably also stupid enough not to know the high-order strategy for the prisoner's dilemma, and therefore cooperating with you is useless. You'll make your decision about whether or not to cooperate based on whatever stupid criteria you have, but they probably won't involve an accurate prediction of my decision algorithm, because you are stupid. I can't influence you by cooperating, so I defect and save some lives.