Epistemic vs. Instrumental Rationality: Case of the Leaky Agent
post by Wei Dai (Wei_Dai) · 2009-05-07T23:09:40.070Z · LW · GW · Legacy · 22 comments
Suppose you hire a real-estate agent to sell your house. You have to leave town, so you give him the authority to negotiate with buyers on your behalf. The agent is honest and hard working; he'll work as hard to get a good price for your house as if he were selling his own. But unfortunately, he's not very good at keeping secrets. He wants to know the minimum amount you're willing to sell the house for, so that he can handle the negotiations for you. But you know that if you answer him truthfully, he's liable to leak that information to buyers, giving them a bargaining advantage and driving down the expected closing price. What should you do? Presumably most of you in this situation would give the agent a figure that's higher than the actual minimum. (How much higher involves optimizing a tradeoff between the extra money you get if the house sells and the probability that you can't find a buyer at the higher, fictional minimum.)
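To see the tradeoff concretely, here is a minimal sketch in Python. The numbers are purely illustrative assumptions, not part of the example: a true floor of $200,000 and buyers whose best offers are uniformly distributed between $190,000 and $260,000. It estimates the expected proceeds above the floor for a few candidate stated minimums.

```python
import random

TRUE_MINIMUM = 200_000                     # assumed: the real lowest price you'd accept
OFFER_LOW, OFFER_HIGH = 190_000, 260_000   # assumed range of buyers' private valuations

def expected_proceeds(stated_minimum, trials=100_000):
    """Estimate expected proceeds above the true minimum, assuming the leaky
    agent reveals the stated minimum and buyers offer essentially that figure
    whenever their private valuation allows."""
    total = 0.0
    for _ in range(trials):
        buyer_valuation = random.uniform(OFFER_LOW, OFFER_HIGH)
        if buyer_valuation >= stated_minimum:
            total += stated_minimum - TRUE_MINIMUM
        # Otherwise no sale: proceeds above the true minimum are zero.
    return total / trials

for stated in (200_000, 220_000, 240_000):
    print(f"stated minimum {stated:,}: expected surplus ~ {expected_proceeds(stated):,.0f}")
```

Under these assumed numbers, stating your true minimum yields zero expected surplus (buyers just offer it), while overstating it trades a lower chance of sale against a larger surplus when the sale happens.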
Now here's the kicker: that agent is actually your future self. Would you tell yourself a lie, if you could believe it (perhaps with the help of future memory modification technologies), and if you could profit from it?
Edit: Some commenters have pointed out that this change in "minimum acceptable price" may not be exactly a lie. I should have made the example a bit clearer. Let's say that if you fail to sell the house by a certain date, it will be repossessed by the bank, so the minimum acceptable price is the amount left on your mortgage, since you're better off selling the house for any amount above that than not selling it at all. But if buyers know that, they can just offer you slightly above the minimum acceptable price. It will help you get a better bargain if you can make yourself believe that the amount left on your mortgage is higher than it really is. This should be unambiguously a lie.
22 comments
comment by MorganHouse · 2009-05-08T00:00:24.356Z · LW(p) · GW(p)
In this situation, I would say that whatever you currently believe the minimum to be is the actual minimum. If you manage to convince yourself that a higher minimum is required, you have simply updated the variable.
Replies from: Wei_Dai, pengvado, MrHen
↑ comment by Wei Dai (Wei_Dai) · 2009-05-08T04:51:34.454Z · LW(p) · GW(p)
This is a good point. I've clarified the example a bit to address this.
↑ comment by pengvado · 2009-05-08T04:38:16.445Z · LW(p) · GW(p)
Agreed. Furthermore, updating your minimum is the right thing to do in this thought experiment. Just reason: "If my minimum is x, then the chance that I'll get an acceptable offer is y and the expected profit of those accepted (given that other agents can guess x) is z. Select x to maximize y*z." No self-deception involved.
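For concreteness, a minimal sketch of this calculation, with assumed (illustrative) numbers for the offer distribution and the true floor:

```python
# Sketch of "select x to maximize y*z", assuming buyers' best offers are
# uniformly distributed on [low, high] and that a revealed minimum x is met
# with an offer of exactly x whenever a buyer values the house at x or more.
low, high = 190_000, 260_000   # assumed offer range
true_floor = 200_000           # assumed amount you'd rather take than not sell

def y(x):
    """Probability that some buyer's valuation reaches x."""
    return max(0.0, min(1.0, (high - x) / (high - low)))

def z(x):
    """Profit over the true floor, given that the sale happens at x."""
    return x - true_floor

candidates = range(200_000, 260_001, 1_000)
best = max(candidates, key=lambda x: y(x) * z(x))
print(f"best x = {best:,}, expected profit = {y(best) * z(best):,.0f}")
```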
comment by Vladimir_Nesov · 2009-05-08T18:59:17.079Z · LW(p) · GW(p)
Not directly related, but if you are expected to tell this specific systematic lie, the lie immediately becomes ineffective.
comment by jimmy · 2009-05-08T00:44:03.177Z · LW(p) · GW(p)
Well, in practice it seems that the answer is to fix the leak, especially since I don't know how to make myself believe something that I know to be false.
However, in the most inconvenient world, my self honesty has a finite price. I care about self honesty and I care about money. I don't feel like computing the price given that I'm not actually facing this condition.
These situations where "false beliefs win" come up often in hypotheticals because they force you to choose between two things (truth and success) which correlate well enough to be confused with each other. Once you recognize this and have a good enough idea of the balance that you aren't chasing the wrong thing, I don't think there's anything to gain.
Does anyone have an example where common strategies utilizing false beliefs predictably do better than common strategies utilizing true beliefs? Are there any real world examples where the best thing you can do is still to believe a lie?
Replies from: conchis
comment by stcredzero · 2009-05-08T16:50:12.885Z · LW(p) · GW(p)
Perhaps this explains why people so often have distorted self-perceptions. For example, in the dating game, an inflated self perception amounts to a higher 'minimum.' Perhaps most distorted self-perceptions have evolutionary origins that are based on this mechanism?
comment by mdcaton · 2009-05-08T18:19:08.366Z · LW(p) · GW(p)
Fixing the leak is the best solution. Much easier and more realistic with a team of one (yourself) than it is with more than one.
While we can all probably give examples from personal experience equivalent to the real estate agent leaking the minimum price, my personal favorite resulted in the current border of California. Ever wonder why the West Coast border between the U.S. and Mexico is between San Diego and Tijuana? At the end of the war that resulted in this border, there were American troops occupying Mexico City. Why draw the line there? Turns out that the U.S. negotiators had a list of must-gets and nice-to-haves. Alta California and Colorado country were must-gets. Baja was viewed as a desert waste and a nice-to-have. The list got leaked to the Mexican delegation by an American personally acquainted with one of the Mexican negotiators, so the Mexican team dug in their heels and wouldn't give up Baja. Point: keep negotiation teams small. Fewer leak-points. One (yourself) is best, because only then do negotiator interests exactly correspond with YOUR interests.
comment by blink · 2009-05-09T20:13:16.485Z · LW(p) · GW(p)
In nearly all situations, I suspect, lying to yourself only appears profitable because the situation has been misinterpreted. It would help to have more contextual examples where lying is plausibly profitable.
The realtor example is too ambiguous. At best, lying gives me a greater probability of capturing more of the surplus from trade, but at the risk of no sale at all (e.g. the best offer falls between my real and stated minimum). Moreover, the scenario construes sale as buyer vs. seller, but my best hope to get a high price is with competition among buyers rather than negotiating in a zero-sum game with one buyer. Here, a truthful realtor might help, because buyers would believe him when he says someone else has made a better offer.
comment by lavalamp · 2009-05-08T16:32:57.022Z · LW(p) · GW(p)
So the question here is something like, "Would you try to make yourself believe a lie, in the case that believing the lie would cause you to maximize your utility function in a way that believing the truth would not?"
Is this more or less a correct reading?
comment by JamesAndrix · 2009-05-08T15:57:15.899Z · LW(p) · GW(p)
If I were constructing an agent that was unavoidably leaky (all potential buyers are Omegas), then I would construct it such that it could not accept an offer below my 'minimum', even if it knows my 'true minimum'. If the buyers know this, then they have to deal with the agent's minimum. Where the agent is my future self, it would be possible to enter into a contract with a third party that makes selling below my 'minimum' prohibitive.
It seems like this kind of a strategy might have led to the way humans play the ultimatum game.
comment by wasistdas · 2009-05-08T14:47:16.379Z · LW(p) · GW(p)
I don’t think that a “lying case” analysis is correct. In both cases, lying or not, you are using all the information you have to correctly estimate the “minimum acceptable price”. In the lying case, if you are rational, you know that you have (probably) inflated the price, so you adjust it back down by your best estimate of the increment. Now, the more successfully you lie to yourself, the less precise an estimate you are left with. And, being rational, you are as likely to overestimate as to underestimate, so this strategy gives you no consistent benefit.
comment by conchis · 2009-05-08T10:54:13.493Z · LW(p) · GW(p)
In answer to the modified puzzle: If the expected utility of the lie is positive, then yes I would lie to myself, if I were capable of doing so effectively.
There are complications in figuring out the expected utility of the lie (lying to myself may alter my utility), but other than that it's just a standard form of commitment device. I'd probably search for more effective commitment technology first, but if there's no better commitment technology available, do it.
comment by Cameron_Taylor · 2009-05-08T03:16:02.450Z · LW(p) · GW(p)
Now here's the kicker: that agent is actually your future self. Would you tell yourself a lie, if you could believe it (perhaps with the help of future memory modification technologies), and if you could profit from it?
It's not a lie, but yes, the figure I told the agent(/self) would be higher than my actual minimum in those circumstances. In fact, I regularly put this very concept into practice in my own life. I give myself certain higher standards for things and mean them. Yet I know that sometimes my future self will end up taking liberties with them in the moment if it is really necessary given the situation. He never was much for following rules anyway.
comment by Cyan · 2009-05-08T00:46:12.325Z · LW(p) · GW(p)
Wei Dai points out on OB that there exist causal agents that could benefit from hiding predictable actions and reactions we might make from our very selves, on account of how leaky we are. I'd vote the OB comment up if I could.
Replies from: Cameron_Taylor
↑ comment by Cameron_Taylor · 2009-05-08T03:12:10.229Z · LW(p) · GW(p)
I'm sure Wei Dai would settle for you just voting up the author of this post. Perhaps he'll give us his own answer too, after we've been given a chance to reply.
Replies from: Wei_Dai, Cyan
↑ comment by Wei Dai (Wei_Dai) · 2009-05-09T08:12:48.810Z · LW(p) · GW(p)
My answer is yes, I'd probably do it, assuming the profit is large enough and the lie isn't on a topic whose truth I greatly care about. But if everyone did this, everyone would be worse off, since their bargaining advantages would cancel each other out while their false beliefs would continue to carry a cost. It's like a prisoner's dilemma game, except that it won't be obvious who is cooperating and who is defecting, so cooperation (i.e. not self-modifying to believe strategic falsehoods) probably can't be enforced by simple tit-for-tat.
We as a society will need to find a solution to this problem, if it ever becomes a serious one.
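For illustration, a toy payoff matrix with made-up numbers showing the structure: self-deception pays only against an honest counterpart, the advantages cancel when both sides do it, and the cost of the false belief is paid either way.

```python
# Toy payoffs (assumed numbers): each party either keeps accurate beliefs
# ("honest") or self-modifies to believe a strategic falsehood ("deceives").
BARGAIN_EDGE = 5   # gain from deceiving an honest counterpart (and their loss)
BELIEF_COST = 2    # cost of carrying a false belief, paid regardless of outcome

def payoff(me_deceives: bool, other_deceives: bool) -> int:
    edge = 0
    if me_deceives and not other_deceives:
        edge = BARGAIN_EDGE
    elif other_deceives and not me_deceives:
        edge = -BARGAIN_EDGE
    # If both deceive, the bargaining advantages cancel (edge stays 0),
    # but each side still pays the cost of its false belief.
    return edge - (BELIEF_COST if me_deceives else 0)

for me in (False, True):
    for other in (False, True):
        print(f"me deceives={me}, other deceives={other}: payoff={payoff(me, other)}")
```

With these numbers deceiving is the dominant strategy, yet mutual deception (-2 each) is worse for both than mutual honesty (0 each), which is the prisoner's-dilemma shape described above.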
↑ comment by Cyan · 2009-05-08T04:30:21.750Z · LW(p) · GW(p)
Wei Dai is the author of the present post. The timeline is that Wei Dai commented in response to Robin Hanson's OB post Prefer Peace, and then expanded on those comments here on LW. My link just points back to those original comments.
Replies from: randallsquared
↑ comment by randallsquared · 2009-05-08T13:01:59.011Z · LW(p) · GW(p)
Wei Dai is the author of the present post.
Hence Cameron's gentle sarcasm.
Replies from: Cyan
comment by phane · 2009-05-08T00:32:42.507Z · LW(p) · GW(p)
I often find that there's not any satisfactory way to calibrate my expectations for things like this anyway. I was once emailed by someone who wanted to buy a domain name from me. He refused to give an offer, asking me to provide a price. I found it impossible to gauge what it was actually worth to me, or what I thought it would be worth to him, so I said I wouldn't sell it unless he made an offer. I never heard from him again.
So, sure. My future self can be convinced of a new minimum, for all I care. I apparently hold my ideas about this very lightly anyway. I'm not even sure he'd (I'd) be "wrong," even if I currently think of it as a lie.