Incentive compatibility and the Revelation Principle

post by badger · 2014-05-03T13:38:48.134Z · LW · GW · Legacy · 7 comments

Contents

  Returning to the painting puzzle
  Second-best mechanisms and the Myerson-Satterthwaite impossibility
  Conclusion
7 comments

In which the Revelation Principle is introduced, showing all mechanisms can be reduced to incentive compatible mechanisms. With this insight, a solution (of sorts) is given to the public good problem from the last post. Limitations of the Revelation Principle are also discussed.

The formalism I introduced last time will now start paying off. We were left with the question of how to check whether a mechanism exists that satisfies some particular goal, naively requiring us to search over all possible procedures. Luckily though, the space of all possible mechanisms can be reduced to something manageable.

Observe the following: suppose through divine intervention we were granted a mechanism (M, g) that implements a social choice function f. In other words, the outcome when agents of types θ1, …, θn interact with the mechanism is exactly the outcome prescribed by the function f for those types. Since a person’s type encodes all relevant variation in their characteristics, preferences, or beliefs, we’d know what that person wants to do if we knew their type. In particular, when agent i is type θi, we expect her choice of message to be mi(θi). Once all the messages are sent, the function g translates the agents’ choices into an outcome so that g(m1(θ1), …, mn(θn)) = f(θ1, …, θn).

But wait! Since we expect a type θi agent to send message mi(θi), why don’t we just package that inside the mechanism? Each agent will tell us their type and the mechanism designer will play as proxy for the agent according to the original mechanism. We’ll call this the direct mechanism for f since all we need to know is that agents tell us their types and then are assigned outcome f(θ), no matter what we’ve blackboxed in the middle.

[Diagram: an arbitrary mechanism (M, g) blackboxed inside a direct mechanism, with agents reporting their types and the designer sending their equilibrium messages for them.]
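To make the blackboxing concrete, here is a minimal Python sketch of the construction. All names are illustrative rather than from the post: `outcome_fn` plays the role of g, and `strategies[i]` plays the role of agent i’s equilibrium strategy mi.

```python
# A minimal sketch of the Revelation Principle construction.
# `outcome_fn` stands in for g (messages -> outcome) and
# `strategies[i]` for m_i (agent i's equilibrium map: type -> message).

def make_direct_mechanism(outcome_fn, strategies):
    """Wrap a mechanism (M, g) into the direct mechanism for the social
    choice function it implements: agents report their types and the
    designer sends each agent's equilibrium message on their behalf."""
    def direct_mechanism(reported_types):
        messages = [m_i(theta_i)
                    for m_i, theta_i in zip(strategies, reported_types)]
        return outcome_fn(messages)  # equals f(theta_1, ..., theta_n)
    return direct_mechanism

# Example (echoing the first-price auction in the comments below): if
# bidding 90% of one's value were the equilibrium, the direct mechanism
# would award the item to the highest-value agent at 90% of their value.
first_price = lambda bids: (bids.index(max(bids)), max(bids))
direct = make_direct_mechanism(first_price, [lambda v: 0.9 * v] * 2)
print(direct([100, 80]))  # (0, 90.0): agent 0 wins and pays 90
```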

Why did we expect an agent of type θi to send message mi(θi)? Presumably because that message maximized her utility. In particular, it had to be at least as good as the message mi(θ′i) she’d send if her type were different, giving us:

ui(g(m1(θ1), …, mi(θi), …, mn(θn)), θi) ≥ ui(g(m1(θ1), …, mi(θ′i), …, mn(θn)), θi) for every alternative type θ′i and every profile of the other agents’ types.
Since the outcomes of the mechanism coincide with f, we can conclude

ui(f(θ1, …, θi, …, θn), θi) ≥ ui(f(θ1, …, θ′i, …, θn), θi) for every alternative type θ′i and every profile of the other agents’ types.
Even though we started off with some mechanism, this last statement doesn't say anything about the mechanism itself, only the social choice function it implements. Let's call any social choice function f that satisfies this constraint for every agent and for all possible types of others strategy-proof or dominant-strategy incentive compatible (DSIC). Note that this can add up to a lot of constraints.
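For finite type spaces, these constraints can be checked by brute force. Here is a sketch under some illustrative assumptions about representation (not from the post): `f` maps a type profile to an outcome, and `utilities[i](outcome, theta_i)` gives agent i’s utility.

```python
from itertools import product

def is_dsic(f, type_spaces, utilities):
    """Brute-force check of dominant-strategy incentive compatibility:
    for every agent, every profile of the others' types, and every pair
    (true type, misreport), truth-telling must be weakly better."""
    n = len(type_spaces)
    for i in range(n):
        others = [type_spaces[j] for j in range(n) if j != i]
        for theta_others in product(*others):
            for theta_i in type_spaces[i]:
                for report in type_spaces[i]:
                    honest = list(theta_others)
                    honest.insert(i, theta_i)
                    lie = list(theta_others)
                    lie.insert(i, report)
                    if (utilities[i](f(tuple(honest)), theta_i)
                            < utilities[i](f(tuple(lie)), theta_i)):
                        return False  # a profitable misreport exists
    return True
```

The four nested loops make the “lot of constraints” remark concrete: each agent contributes one inequality per (true type, misreport, others’ types) triple.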

These observations lead us to the following powerful tool:

The Revelation Principle (for dominant strategies): A social choice function f is implementable in dominant strategies if and only if it is dominant-strategy incentive compatible. Alternatively, every mechanism that implements f in dominant strategies is equivalent to a DSIC direct mechanism.

Rather than requiring an agent to always (weakly) prefer their type’s outcome, it might suffice that agents get greater utility on average for their true type:

E[ui(f(θ1, …, θi, …, θn), θi)] ≥ E[ui(f(θ1, …, θ′i, …, θn), θi)] for every alternative type θ′i, where the expectation is taken over the other agents’ types.
Social choice functions f that satisfy this constraint for each agent are Bayesian incentive compatible (BIC). Just as we had a Revelation Principle for dominant strategies, we have an analogous one for Bayes-Nash equilibrium:

The Revelation Principle (for Bayes-Nash equilibrium): A social choice function f is implementable in Bayes-Nash equilibrium if and only if it is Bayesian incentive compatible.
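The Bayesian version needs the same comparison to hold only in expectation. A sketch along the lines of the DSIC checker above, assuming independent priors over types (`priors[j][theta]` is my illustrative representation of the probability that agent j has type theta):

```python
from itertools import product

def is_bic(f, type_spaces, utilities, priors):
    """Check Bayesian incentive compatibility: truth-telling must
    maximize *expected* utility against the priors over the other
    agents' types, which are assumed independent here."""
    n = len(type_spaces)
    for i in range(n):
        others = [type_spaces[j] for j in range(n) if j != i]
        other_priors = [priors[j] for j in range(n) if j != i]

        def expected_utility(report, true_type):
            total = 0.0
            for theta_others in product(*others):
                prob = 1.0
                for prior, t in zip(other_priors, theta_others):
                    prob *= prior[t]  # independence across agents
                profile = list(theta_others)
                profile.insert(i, report)
                total += prob * utilities[i](f(tuple(profile)), true_type)
            return total

        for theta_i in type_spaces[i]:
            for report in type_spaces[i]:
                if expected_utility(theta_i, theta_i) < expected_utility(report, theta_i):
                    return False
    return True
```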

As a check on whether you’re following, consider these questions:

  1. Suppose you have a social choice function f that is (dominant-strategy or Bayesian) incentive compatible for the type spaces Θ1, …, Θn. Now, throw out some types for each agent. Is the restriction of f to these smaller type spaces still incentive compatible?
  2. Is it easier for a social choice function to be DSIC or BIC?

Answers in the footnotes.1

With the reduction provided by the various versions of the Revelation Principle, we can get away with analyzing only incentive compatible direct mechanisms rather than all possible ones. If a social choice function isn’t incentive compatible, then it can’t be implemented, no matter how fancy we get.

There are many reasons why we wouldn’t want to use a direct mechanism in practice: full types might be too complex to communicate easily, agents might be worried about privacy, or agents might not trust the mechanism operator to use the information as promised (changing the rules of the game after the fact, despite a claimed commitment to a particular outcome)2, 3. Still, direct mechanisms are very straightforward for people to use. Rather than investing time to figure out how to play the game, the participants only have to consider what their true preferences are.

In any case, direct mechanisms are very handy theoretically. From this perspective, all incentive problems boil down to encouraging agents to be honest about their private information. The only leeway we have is whether being honest should be a dominant strategy, a best response to the honesty of others, a minimizer of the agent’s maximum regret, etc.

Returning to the painting puzzle

In our housemate story, we don’t need to consider all possible mechanisms; we only need to consider procedures where they write down their valuation and both have an incentive to be honest about their true preferences. This is still a big space of possible mechanisms, but is much more manageable. Think for a moment about what direct mechanism you’d recommend to the housemates.

Here is one procedure that always makes the efficient decision, though it’s less than ideal: Collect valuations simultaneously. If the sum of the two reports θJack + θJill is at least $300, the room will be painted. Jack’s payment will be pJack = 300 − θJill if the room is painted and zero otherwise. Similarly, Jill’s payment will be pJill = 300 − θJack when the room is painted and zero otherwise.

Take a moment to convince yourself that honesty is (weakly) dominant for each person under this mechanism. For instance, suppose Jack’s value is $120. What would happen if he reports above this, say at $150? If Jill reports something less than $150, then the room isn’t painted and Jack’s utility is zero, the same as when he is honest. If Jill reports between $150 and $180, the room is painted and Jack gets a payoff of 120 − (300 − θJill) = θJill − 180 < 0, less than his payoff from honesty where the room isn’t painted. If Jill reports above $180, the room is painted whether Jack reports honestly at $120 or dishonestly at $150, so the payoff is the same. Similar reasoning goes through for any report Jack considers below $120.
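The case analysis is easy to replay in code. A sketch of the mechanism, checking Jack’s honest report of $120 against the $150 misreport over a grid of integer reports by Jill (the grid is my own discretization):

```python
COST = 300

def paint_mechanism(report_jack, report_jill):
    """Paint iff the reports cover the cost; each pays the cost minus
    the *other's* report, and nothing if the room isn't painted."""
    painted = report_jack + report_jill >= COST
    p_jack = COST - report_jill if painted else 0
    p_jill = COST - report_jack if painted else 0
    return painted, p_jack, p_jill

def jack_payoff(value, report_jack, report_jill):
    painted, p_jack, _ = paint_mechanism(report_jack, report_jill)
    return (value if painted else 0) - p_jack

# Whatever Jill reports, Jack (true value $120) does at least as well
# reporting honestly as reporting $150.
for report_jill in range(0, 301):
    assert jack_payoff(120, 120, report_jill) >= jack_payoff(120, 150, report_jill)
```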

So what’s the issue here? Look at the sum of the payments. When the room is painted, Jack and Jill have a $300 bill, but the mechanism only collects (300 − θJill) + (300 − θJack) = 600 − θJack − θJill ≤ 300, with a strict shortfall whenever the values sum to more than $300. While the mechanism is efficient, it assumes extra money is coming from somewhere! If the individual values could be anywhere between $0 and $300, then the deficit could be up to $300.

In an attempt to solve this budget issue, we could tweak the payments to the following: pJack = 450 − θJill and pJill = 450 − θJack when the room is painted, and both pay $150 when it isn’t. Honest reporting is still a dominant strategy and there is never a budget shortfall, but now the two pay out at least $300 in total whether or not the room is painted. It’s hard to imagine them agreeing to use a procedure that could leave them worse off than never having considered the renovation.
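To see the two budget problems side by side, here is a quick comparison of what each payment rule collects against the $300 bill under honest reporting (the sample valuations are arbitrary):

```python
def collected(theta_jack, theta_jill):
    """Total payments under the original and tweaked payment rules."""
    painted = theta_jack + theta_jill >= 300
    original = (300 - theta_jill) + (300 - theta_jack) if painted else 0
    tweaked = (450 - theta_jill) + (450 - theta_jack) if painted else 300
    return painted, original, tweaked

for values in [(120, 200), (250, 280), (100, 150)]:
    painted, original, tweaked = collected(*values)
    bill = 300 if painted else 0
    print(values, "bill:", bill, "original:", original, "tweaked:", tweaked)
# (120, 200) bill: 300 original: 280 tweaked: 580  <- original runs a deficit
# (250, 280) bill: 300 original: 70 tweaked: 370   <- the deficit grows with the values
# (100, 150) bill: 0 original: 0 tweaked: 300      <- tweaked charges $300 for nothing
```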

Is there a mechanism that makes the efficient decision but avoids both these pitfalls, never forcing someone to pay more than their valuation while still covering the bill? Alas, this turns out to be impossible.

Second-best mechanisms and the Myerson-Satterthwaite impossibility

Myerson and Satterthwaite (1983) prove that no mechanism paints the room exactly when it’s efficient to do so without either requiring an outside subsidy or making one agent worse off for having participated. The original paper was framed in terms of a single buyer and seller considering whether to trade an item, showing perfectly efficient trade is impossible in the presence of incomplete information when types are independently distributed and trade isn’t a foregone conclusion4.

Rather than looking for the “first-best” mechanism that always makes the efficient decision, we’re going to be forced to find a “second-best” mechanism that maximizes welfare while still satisfying our constraints. Surprisingly, the naive “vote and split the cost” mechanism is the best feasible procedure in dominant strategies. Restated in direct mechanism terms, if both submit a valuation greater than $150, the room is painted and they split the cost equally. Jeff Ely provides a very nice graphical proof of this fact here.
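In direct-mechanism form the second-best procedure is tiny. A sketch, with one valuation profile showing the efficiency it gives up:

```python
def vote_and_split(theta_jack, theta_jill, cost=300):
    """Second-best DSIC mechanism: paint iff both report at least half
    the cost, each paying half. Truthful reporting is weakly dominant:
    your report only matters through whether it clears $150, and you
    want the room at a price of $150 exactly when your value is at
    least $150."""
    painted = theta_jack >= cost / 2 and theta_jill >= cost / 2
    share = cost / 2 if painted else 0
    return painted, share, share

# The welfare cost of incentives: these values sum to $320 > $300, so
# painting is efficient, yet the room goes unpainted because Jill's
# value falls below her $150 share.
print(vote_and_split(200, 120))  # (False, 0, 0)
```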

Honesty as a dominant strategy is a compelling feature of a mechanism—agents don’t have to put any thought into what others might do. Requiring honesty to be dominant might be overly strict though. Not all games have dominant strategies. On the other hand, we expect all (well-behaved) games to have a Nash equilibrium. If we weaken dominant-strategy IC to Bayes-Nash IC, we can do slightly better, but full efficiency is still impossible. The best feasible direct mechanism—assuming values are uniform between $0 and $300—is to paint the room if and only if the values sum to more than $375, with each paying $150 and the person with the high valuation giving the other one-third of the difference in their valuations, i.e.

and similarly for Jill. This direct mechanism corresponds to a Bayes-Nash equilibrium of the following, more intuitive mechanism: each writes down a bid and the room is painted if the bids sum to more than $300, with excess total payments over $300 split equally between them (so if Jack bids $200 and Jill bids $150, Jack pays $175 and Jill pays $125).

Conclusion

By reducing the design problem to encouraging the housemates to be honest via the Revelation Principle, we can identify the best feasible mechanism and end up uncovering a general impossibility about bargaining along the way. In the next post, I’ll delve further into what is possible under dominant-strategy incentive compatibility.

 

Previously on: Mechanism Design: Constructing Algorithms for Strategic Agents

Next up: Strategyproof Mechanisms: Impossibilities


  1. Question 1: Yes, it is still incentive compatible: the constraints among the types you keep still hold, and the constraints involving discarded types are no longer relevant. Question 2: Bayesian incentive compatibility is the weaker condition, implied by dominant-strategy incentive compatibility: if the constraint holds for every possible realization of the others’ types, it must hold on average over any distribution of those types.

  2. Cryptography could help solve the privacy and commitment problems of direct mechanisms, though. See Naor et al. (1999), “Privacy preserving auctions and mechanism design”.

  3. I’ve also glossed over the issue of full vs. partial implementation. The revelation principle guarantees some equilibrium of the direct mechanism coincides with our social choice function. However, there might be other “bad” equilibria in the direct mechanism, while an indirect mechanism might have a unique good equilibrium.

  4. Luckily, the welfare loss disappears quickly as the size of the market grows. Once there are at least six people on each side of the market, the overall welfare loss due to incomplete information is less than 1% (Satterthwaite et al. 2014, “Optimality vs Practicality in Market Design”).

7 comments

Comments sorted by top scores.

comment by 9eB1 · 2014-05-03T18:51:52.016Z · LW(p) · GW(p)

I believe you have a typo in this section with the subscripts:

If the sum of the two reports θJack + θJill is at least $300, the room will be painted. Jack’s payment will be pJack = 300 − θJill if the room is painted and zero otherwise. Similarly, Jill’s payment will be pJill = 300 − θJill when the room is painted and zero otherwise.

If Jill reports between $150 and $180, the room is painted and Jack gets a payoff of 120 − (300 − θJill) = θJill − 180 < 0, less than his payoff from honesty where the room isn’t painted.

When the room is painted, Jack and Jill have a $300 bill, but the mechanism only collects 300 − θJack + 300 − θJill < 300.

If pJack = 300 − θJill and pJill = 300 - θJill, then the mechanism should be collecting 600 - 2*θJill?

So it turns out that the reason none of the solutions I tried in the last article worked is because it's impossible for solutions to exist that satisfy the implicit restrictions I was putting on solutions. That's interesting. It's too bad that the second-best solution is so relatively crappy. Realistically, it doesn't seem to be any better than just negotiating with someone in the more traditional manner, and accepting that you may not end up revealing your true preferences.

Replies from: badger
comment by badger · 2014-05-03T20:46:03.916Z · LW(p) · GW(p)

Typo fixed now. Jill's payment should be p_Jill = 300 - θ_Jack.

The second-best direct mechanisms do bite the bullet and assume agents would optimally manipulate themselves if the mechanism didn't do it for them. The "bid and split excess" mechanism I mention at the very end could be better if people are occasionally honest.

I'm now curious what's possible if agents have some known probability of ignoring incentives and being unconditionally helpful. It'd be fairly easy to calculate the potential welfare gain by adding a flag to the agent's type saying whether they are helpful or strategic and yet again applying the revelation principle. The trickier part would be finding a useful indirect mechanism to match that, since it'd be painfully obvious that you'd get a smaller payoff for saying you're helpful under the direct mechanism.

comment by bentarm · 2014-05-10T00:00:06.474Z · LW(p) · GW(p)

The Revelation Principle feels like one of those results that flip-flops between seeming trivially obvious and absurdly powerful... I'm currently in an "absurdly powerful" frame of mind.

I guess the principle is mostly useful for impossibility results? Given an arbitrary mechanism, will you usually be able to decompose it to find the associated incentive compatible mechanism?

Replies from: badger
comment by badger · 2014-11-23T14:10:01.987Z · LW(p) · GW(p)

I'm on board with "absurdly powerful". It underlies the bulk of mechanism design, to the point my advisor complains we've confused it with the entirety of mechanism design.

The principle gives us the entire set of possible outcomes for some solution concept like dominant-strategy equilibrium or Bayes-Nash equilibrium. It works for any search over the set of outcomes, whether that leads to an impossibility result or a constructive result like identifying the revenue-optimal auction.

Given an arbitrary mechanism, it's easy (in principle) to find the associated IC direct mechanism(s). The mechanism defines a game, so we solve the game and find the equilibrium outcomes for each type profile. Once we've found that, the IC direct mechanism just assigns the equilibrium outcome directly. For instance, if everyone's equilibrium strategy in a pay-your-bid/first-price auction was to bid 90% of their value, the direct mechanism assigns the item to the person with the highest value and charges them 90% of their value. Since a game can have multiple equilibria, we have one IC mechanism per outcome. The revelation principle can't answer questions like "Is there a mechanism where every equilibrium (as opposed to some equilibrium) gives a particular outcome?"

comment by Cyan · 2014-05-03T14:23:11.821Z · LW(p) · GW(p)

Still not quite grokking the Revelation Principle, although that's probably a function of inexperience with these kinds of problems. Is the idea that if I can assume or establish the sufficient properties of the players' utility functions then I can assume they announce their types to me, the mechanism designer, and proceed with design on that basis?

Replies from: badger
comment by badger · 2014-05-03T16:22:35.323Z · LW(p) · GW(p)

I added some explanation right after the diagram to clarify. The idea is that if I can design a game where players have dominant strategies, then I can also design a game where they have a dominant strategy to honestly reveal their types to me and proceed on that basis.

Replies from: Cyan
comment by Cyan · 2014-05-04T00:44:29.833Z · LW(p) · GW(p)

It's funny -- the parent doesn't state anything that you didn't already put in the OP, and yet I think I understand the point a little better. Thanks!