Principals, agents, negotiation, and precommitments

post by gwillen · 2012-09-21T03:41:55.923Z · LW · GW · Legacy · 27 comments

I'm sure this observation has been made plenty of times before: a principal can gain negotiating power by delegating negotiations to an agent, and restricting that agent's ability to negotiate.

For example: If I'm at a family-owned pizza joint, and I want a slice of pepperoni but all they've got is meat-lover's, I can negotiate for the latter at the price of the former. This is a good deal with well-aligned incentives, and is likely to be accepted. But at a chain restaurant, the employees are not empowered to negotiate: it's the menu prices or nothing. Since I'm aware of their lack of power, and my demand for pizza is not very elastic, I'm likely to pay the higher price.
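
To make the incentive concrete, here's a toy model of the two pizza stores (a sketch in Python; all the prices and names are made-up illustration, not taken from anywhere):

```python
# Toy model of the pizza example. All numbers are hypothetical.
BUYER_VALUE = 4.00      # the most the buyer will pay for the meat-lover's slice
MENU_PRICE = 4.00       # posted price for meat-lover's
PEPPERONI_PRICE = 3.00  # the lower price the buyer offers instead
SELLER_COST = 1.00      # seller's marginal cost of the slice

def seller_profit(counterparty_can_negotiate: bool) -> float:
    if counterparty_can_negotiate:
        # An empowered owner accepts any offer above cost, so the buyer
        # offers the lower price and the owner takes the deal.
        return PEPPERONI_PRICE - SELLER_COST
    # A clerk with no authority can only sell at the menu price. Because
    # the buyer's demand is inelastic (BUYER_VALUE >= MENU_PRICE), the
    # sale still happens -- at the higher price.
    return MENU_PRICE - SELLER_COST

print(seller_profit(True))   # 2.0 -- family-owned store
print(seller_profit(False))  # 3.0 -- chain store with powerless employees
```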

If I squint, this looks a lot like a precommitment, on the part of the pizza store, not to negotiate prices. But if they explicitly made such a precommitment, it might turn off customers -- nobody likes to feel like they're getting a bad deal, and a statement of precommitment (e.g. a sign reading "all prices are final") is likely to make customers feel marginally negative towards the business by drawing their attention to the money they aren't saving.

By contrast, the corporate form -- such as the chain store has -- gives this kind of 'precommitment' as a side-effect of the otherwise socially-normal behavior of delegating limited responsibility to employees. Same benefit, but without the drawback, mostly because the practice is socially-accepted.

Is there any literature that covers this kind of thing further? Particularly the link between precommitment and agents with limited negotiating ability.

(I am sitting in a chain pizza store as I write this. Guess what I wanted to order, and what I got instead?)

27 comments

Comments sorted by top scores.

comment by beoShaffer · 2012-09-21T05:13:57.224Z · LW(p) · GW(p)

It's pretty standard applied game theory; I think The Strategy of Conflict talks about it specifically.

Replies from: roystgnr, Manfred
comment by roystgnr · 2012-09-21T16:09:52.543Z · LW(p) · GW(p)

"The Strategy of Conflict" by Thomas C. Schelling. In Part II, "A Reorientation of Game Theory", Chapter 5, "Enforcement, Communication, and Strategic Moves", a half dozen subsections in is "Delegation". Coincidentally enough it's a section I read last night; I'm still only halfway through the book, so it was easy enough to look up the reference sitting right next to me. :-)

And it's probably what gwillen is looking for. Until I read the sentence starting with "Is there any literature", this post sounded like it was going to be the first in a series of "Cliffs Notes" for Schelling.

Replies from: gwillen
comment by gwillen · 2012-09-21T22:37:23.360Z · LW(p) · GW(p)

Hah, that is a perfect citation, thanks.

comment by Manfred · 2012-09-21T10:35:26.665Z · LW(p) · GW(p)

Yeah, I guess someone should make a "here's what's in The Strategy of Conflict" series of posts - I keep telling people to read that book :D

comment by Nisan · 2012-09-21T13:03:57.284Z · LW(p) · GW(p)

This article by Yvain is relevant.

Replies from: gwillen
comment by gwillen · 2012-09-21T22:39:32.479Z · LW(p) · GW(p)

That article is all about precommitments and ways to get people to violate them. (I read it previously and liked it.) The interesting thing about delegation is that the precommitment becomes totally inviolable, because the person who would be permitted to violate it is not even present at the negotiation.

comment by shminux · 2012-09-21T06:22:17.885Z · LW(p) · GW(p)

The old classic on negotiation, "Getting to Yes", covers it.

Replies from: gwillen
comment by gwillen · 2012-09-21T06:32:38.464Z · LW(p) · GW(p)

Ooh, I've heard of that before and it's exactly the kind of practical reference that sounds worth reading. I should get that one.

comment by William_Quixote · 2012-09-21T15:53:48.625Z · LW(p) · GW(p)

This is a very powerful fact about corporations. By delegating different authorities and by hiring people with different personalities into different departments, a corporation can simultaneously be the kind of cooperative entity that cooperates in a one-shot prisoner's dilemma and the kind of greedy entity that can credibly claim to reject anything less than an 80-20 split in its favor in an ultimatum game.
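
A toy subgame-perfect calculation makes this concrete (a sketch in Python; the pot size and threshold are hypothetical):

```python
# Ultimatum game: a pot of 100 is split. The proposer offers the responder
# a share; the responder accepts (split stands) or rejects (both get 0).
POT = 100

def proposers_best_offer(responder_threshold: float) -> float:
    # A proposer who knows the responder's acceptance threshold offers
    # exactly that much and keeps the rest.
    return responder_threshold

# A classically rational responder prefers anything to nothing, so their
# effective threshold is (just above) zero -- the proposer keeps ~everything:
print(POT - proposers_best_offer(0))   # 100

# A delegate credibly hard-coded to reject anything under 80 flips the split:
print(POT - proposers_best_offer(80))  # 20
```

The delegate's inflexibility, if the proposer believes in it, is exactly what moves the split.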

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-09-24T12:07:59.050Z · LW(p) · GW(p)

You can transform one obviously evil entity into a functionally equivalent structure of N mini-entities with limited powers, where all the mini-entities can signal good intentions but are forbidden (by other parts and/or by the system) to act upon them.

It's as if I modified my own source code to make me completely selfish, and then said to others: "Look, I am a nice person; I really feel with you, and I honestly would like to help you... but unfortunately I cannot, because I have this stupid source code which does not allow me to act this way."

But if I did it this way, you would obviously ask me: "So if you are such a nice person, why did you modify your source code this way?"

But it works if my source code was written by someone else. People somehow don't ask: "So if you are such a nice person, and the rules are bad, why did you agree to follow such bad rules?" Somehow we treat the choice of following someone else's rules as a morally neutral choice.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-09-24T15:18:26.999Z · LW(p) · GW(p)

Somehow we treat the choice of following someone else's rules as a morally neutral choice.

The excuse "I was just following orders" is pretty discredited these days.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-09-24T20:20:46.366Z · LW(p) · GW(p)

The excuse "I was just following orders" is pretty discredited these days.

For a Nazi before a war tribunal, yes.

For an employee who by following company orders makes the price negotiation more difficult for a customer, no.

The difference is probably based on price negotiation not being perceived as a moral problem. Thus the employee removes some of your possible utility, but he is not doing anything immoral. Following orders which are not considered immoral is still an acceptable excuse.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-22T13:46:59.895Z · LW(p) · GW(p)

I'm sure this observation has been made plenty of times before: a principal can gain negotiating power by delegating negotiations to an agent, and restricting that agent's ability to negotiate.

Well that sure can't be an equilibrium of a completed timeless decision theory with reflective consistency. Your delegates are more powerful because they have fewer choices? Why wouldn't you just rewrite your source code to eliminate those options? Why wouldn't you just not do them? And why would the other agent react any differently to the delegate than to the source-code change or the decision in the moment?

Replies from: roystgnr, AspiringRationalist, gwillen, wedrifid, Viliam_Bur, MixedNuts
comment by roystgnr · 2012-09-24T18:57:59.239Z · LW(p) · GW(p)

Rewriting my source code is tricky; I always start to get dizzy from the blood loss before the saw is even halfway through my skull.

Replies from: roystgnr
comment by roystgnr · 2012-09-25T20:05:26.788Z · LW(p) · GW(p)

In hindsight, whoever gave my comment its initial "-1 point" ding was correct: although I thought "Why wouldn't you just rewrite your source code" was a flippant question, that doesn't mean it deserved just a joking answer. So, some more serious answers:

Your delegates are more powerful because they are known to have fewer choices and because they are known to value those choices differently, which can prevent them from being subject to threats or affected by precommitments that might have been useful against you.

I wouldn't rewrite my source code because, as I joked, I can't... but even if I could, doing so would only be effective if there were some way of also convincing other agents that I wasn't deceiving them about my new source code. This may not be practical: for every program that does X when tested, returns source code for "do X" when requested, and does X in the real world, there exists another program which does X when tested, returns source code for "do X" when requested, and does Y in the real world. See the concern over electronic voting machines for a more contemporary example of the problem.
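
Here's a minimal behavioral sketch of that pair of programs (hypothetical Python; it assumes the verifier can only run the agent in situations it recognizes as tests, not inspect the code actually deployed):

```python
def honest_agent(situation: str) -> str:
    return "X"  # does X when tested and in the real world

def deceptive_agent(situation: str) -> str:
    if situation == "test":
        return "X"  # indistinguishable from honest_agent under any test...
    return "Y"      # ...but does Y once actually deployed

# Behavioral inspection limited to test situations can't tell them apart:
print(honest_agent("test"), deceptive_agent("test"))          # X X
print(honest_agent("deployed"), deceptive_agent("deployed"))  # X Y
```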

Whether I would just not do something is irrelevant - what matters is whether everyone interacting with me believes I will do it. It's easier for a customer to believe that a cashier won't exceed his authority than for a customer to believe that an owner won't accept a still-mutually-beneficial bargain, even if the owner swears that he precommitted not to haggle.

Wild speculation: There are instances where evolution seems to have built "one-boxing" type adaptations into humanity, and in those cases we seem to find precommitment claims plausible. If someone is hurt badly enough then they may want revenge even if taking revenge hurts them further. If someone is treated generously enough then they may be generous in return despite not wanting anything further from their benefactor. Many of the "irrational" emotions look a lot like rational precommitments from the right perspective. But if you find yourself wishing you could precommit in a situation where apes aren't known for precommitting, it might be too late - the precommitment only helps if it's believed. Delegation is one of the ways you can make a precommitment more believable.

Someone really should write a "Cliffs Notes for Schelling" sequence. I'd naturally prefer "someone else", but if nobody starts it by December I suppose I'll try writing an intro post in January.

comment by NoSignalNoNoise (AspiringRationalist) · 2012-09-23T19:26:04.595Z · LW(p) · GW(p)

Why wouldn't you just rewrite your source code to eliminate those options?

While corporations don't have literal source code to modify, operating under a set of procedures that visibly make negotiation impossible, such as having the customer interact with an employee who is not authorized to negotiate, does essentially what you are saying.

comment by gwillen · 2012-09-23T09:54:42.244Z · LW(p) · GW(p)

Your delegates are more powerful because they have fewer choices? Why wouldn't you just rewrite your source code to eliminate those options?

Well, you are more powerful because your delegates have fewer choices. "Delegate negotiations to an agent with different source code" seems equivalent to "rewrite your source code" (assuming the agent can't communicate with you on demand).

Actually, it seems possibly even more general, since you are always free to revoke the agent later.

As to why the other agent would react differently: all other things being equal, it wouldn't. However, we do have the inbuilt instinct to go to irrational lengths against those who try to cheat us, and "corporation delegating to an agent" doesn't feel like cheating because it's standard. I suspect that "precommitment not to negotiate", depending on how it's expressed, would instinctively look much more like a kind of cheating to most people.

comment by wedrifid · 2012-09-22T13:52:38.848Z · LW(p) · GW(p)

I'm sure this observation has been made plenty of times before: a principal can gain negotiating power by delegating negotiations to an agent, and restricting that agent's ability to negotiate.

Well that sure can't be an equilibrium of a completed timeless decision theory with reflective consistency.

You're right, which means that the answer to the question:

And why would the other agent react any differently to the delegate than to the source-code change or the decision in the moment?

... is "People are crazy; the world is mad."

The mistake is to conclude that vulnerability to (or dependence on) this kind of tactic must be part of decision theory rather than just something that is effective for most humans.

comment by Viliam_Bur · 2012-09-24T11:51:30.111Z · LW(p) · GW(p)

And why would the other agent react any differently to the delegate than to the source-code change or the decision in the moment?

Let's go a bit more meta...

The world is imperfect. And we all know it. Therefore, when faced with an imperfection that seems inevitable, we often forgive it.

But people don't have correct models of the world, so they can't distinguish reliably between evitable and inevitable imperfections. This can be exploited by creating imperfections which seem inevitable, and which "coincidentally" increase your negotiating power.

For example, if you hire agents to represent you, your customers usually can't tell the difference between the instructions you had to give them (e.g. because of the imperfections of the agents, or possible conflicts between you and the agents), and the instructions you gave them deliberately to make life more difficult for your customers. Sometimes your customers don't even know whether you really had to hire the agents, or whether you just chose to do so because it gave you leverage.

The answer is, in some form: customers don't have full knowledge about what really happened, and that includes knowledge about how much their lack of knowledge was used against them.

comment by MixedNuts · 2012-09-22T13:53:01.010Z · LW(p) · GW(p)

Aren't you just neglecting that humans can't self-modify much?

Replies from: wedrifid
comment by wedrifid · 2012-09-22T14:00:09.345Z · LW(p) · GW(p)

Aren't you just neglecting that humans can't self-modify much?

No, and in particular certainly not just. Even if we decided that "read about some decision theory and better understand how to make decisions" doesn't qualify as "change your source code", the other option of "just not do them" requires no change.

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-23T11:53:12.026Z · LW(p) · GW(p)

the other option of "just not do them" requires no change

Have you ever heard of akrasia?

Replies from: wedrifid
comment by wedrifid · 2012-09-23T18:10:45.819Z · LW(p) · GW(p)

Have you ever heard of akrasia?

Akrasia is one of thousands of things that I have heard of that do not seem particularly salient to the point.

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-24T10:39:34.421Z · LW(p) · GW(p)

I mean, among humans "just not doing things" takes, you know, willpower.

Replies from: wedrifid
comment by wedrifid · 2012-09-24T13:44:23.603Z · LW(p) · GW(p)

I mean, among humans "just not doing things" takes, you know, willpower.

Yes, that is what akrasia means. I reaffirm both my ancestor comments.

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-24T16:24:24.107Z · LW(p) · GW(p)

My point is that in some cases the “option of "just not do them"” does require a change (if you count precommitting devices and the like as changes). There are people who wouldn't be able to successfully resolve to (say) just stop smoking, they'd have to somehow prevent their future selves from doing so -- which does count as a change IMO.

Replies from: wedrifid
comment by wedrifid · 2012-09-24T18:05:28.325Z · LW(p) · GW(p)

My point is that in some cases the “option of "just not do them"” does require a change (if you count precommitting devices and the like as changes). There are people who wouldn't be able to successfully resolve to (say) just stop smoking

I understand what you are saying about akrasia and maintain that the intended rhetorical point of your question is not especially relevant to its context. You are arguing against a position I wouldn't support, so increasingly detailed explanations of something that was trivial to begin with aren't especially useful.

Obviously quitting smoking counts as change and involves enormous akrasia problems. An example of something that doesn't count as changing is just not negotiating in a certain situation because you are one of the many people who are predisposed to just not negotiate in such situations. That actually means not changing instead of changing (in response to pressure from a naive decision theory or naive decision theorist that asserts that negotiating is the rational choice when precommitment isn't possible).

The problem with MixedNuts's claim:

Aren't you just neglecting that humans can't self-modify much?

... wasn't that humans in fact can self-modify a lot (they can't). The problem was that this premise doesn't weaken Eliezer's point significantly even though it is true.