Game Theory As A Dark Art
post by Scott Alexander (Yvain) · 2012-07-24T03:27:07.391Z · LW · GW · Legacy · 107 comments
One of the most charming features of game theory is the almost limitless depths of evil to which it can sink.
Your garden-variety evils act against your values. Your better class of evil, like Voldemort and the folk-tale version of Satan, use your greed to trick you into acting against your own values, then grab away the promised reward at the last moment. But even demons and dark wizards can only do this once or twice before most victims wise up and decide that taking their advice is a bad idea. Game theory can force you to betray your deepest principles for no lasting benefit again and again, and still leave you convinced that your behavior was rational.
Some of the examples in this post probably wouldn't work in reality; they're more of a reductio ad absurdum of the so-called homo economicus who acts free from any feelings of altruism or trust. But others are lifted directly from real life where seemingly intelligent people genuinely fall for them. And even the ones that don't work with real people might be valuable in modeling institutions or governments.
Of the following examples, the first three are from The Art of Strategy; the second three are relatively classic problems taken from around the Internet. A few have been mentioned in the comments here already and are reposted for people who didn't catch them the first time.
The Evil Plutocrat
You are an evil plutocrat who wants to get your pet bill - let's say a law that makes evil plutocrats tax-exempt - through the US Congress. Your usual strategy would be to bribe the Congressmen involved, but that would be pretty costly - Congressmen no longer come cheap. Assume all Congressmen act in their own financial self-interest, but that absent any financial self-interest they will grudgingly default to honestly representing their constituents, who hate your bill (and you personally). Is there any way to ensure Congress passes your bill, without spending any money on bribes at all?
Yes. Simply tell all Congressmen that if your bill fails, you will donate some stupendous amount of money to whichever party gave the greatest percent of their votes in favor.
Suppose the Democrats try to coordinate among themselves. They say “If we all oppose the bill, then if even one Republican supports it, the Republicans will get lots of money they can spend on campaigning against us. If only one of us supports the bill, the Republicans may anticipate this strategy and have two of their members support it. The only way to ensure the Republicans don't gain a massive windfall and wipe the floor with us next election is for most of us to vote for the bill.”
Meanwhile, in their meeting, the Republicans think the same thing. The vote ends with most members of Congress supporting your bill, and you don't end up having to pay any money at all.
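(For the curious, here is a minimal sketch of that coordination trap in Python. It treats each party as a single coordinated actor - an assumption, and as the comments below point out, individual Congressmen complicate things - and the member counts, the size of the donation penalty, and the value of a constituent-pleasing 'no' vote are made-up illustrative numbers.)

```python
# A toy check of the Evil Plutocrat game, treating each party as one
# coordinated actor with 50 members. All numbers here are illustrative.
from itertools import product

MEMBERS = 50                 # members per party; parties assumed equal in size
THRESHOLD = 51               # yes votes needed for the bill to pass
DONATION_PENALTY = 1000.0    # pain of the opponent receiving the stupendous donation
CONSTITUENT_VALUE = 1.0      # utility per member who gets to vote "no"

def payoff(own_yes, other_yes):
    """Party-level payoff given how many members of each party vote yes."""
    u = CONSTITUENT_VALUE * (MEMBERS - own_yes)
    bill_passes = own_yes + other_yes >= THRESHOLD
    if not bill_passes and other_yes >= own_yes:
        # Bill failed and the other party matched or beat our percent in favor
        # (a tie is assumed to be just as frightening).
        u -= DONATION_PENALTY
    return u

def best_responses(other_yes):
    scores = [payoff(own, other_yes) for own in range(MEMBERS + 1)]
    best = max(scores)
    return {own for own, s in enumerate(scores) if s == best}

# Pure-strategy equilibria at the party level: each choice is a best response
# to the other party's choice.
equilibria = [(a, b) for a, b in product(range(MEMBERS + 1), repeat=2)
              if a in best_responses(b) and b in best_responses(a)]
print(equilibria)
```

With these particular numbers the only pure equilibria it finds are (25, 26) and (26, 25): the bill passes by the barest possible margin, with each party supplying roughly half the 'yes' votes, which is close to the equilibrium discussed in the comments below.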
The Hostile Takeover
You are a ruthless businessman who wants to take over a competitor. The competitor's stock costs $100 a share, and there are 1000 shares, distributed among a hundred investors who each own ten. That means the company ought to cost $100,000, but you don't have $100,000. You only have $98,000. Worse, another competitor with $101,000 has made an offer for greater than the value of the company: they will pay $101 per share if they end up getting all of the shares. Can you still manage to take over the company?
Yes. You can make what is called a two-tiered offer. Suppose all investors get a chance to sell their shares simultaneously. You will pay $105 for the first 500 shares - better than they could get from your competitor - but only pay $90 for the other 500. If you get 500 or fewer shares, all of them sell for $105; if you get more than 500, the $105 tier is distributed evenly among all investors who sold to you, and the rest of their shares sell at $90 (so unless every investor sells to you, some of the $90 tier goes unused). And you will do this whether or not you succeed in taking over the company - if only one person sells you her shares, that one person gets $105 apiece.
Suppose an investor believes you're not going to succeed in taking over the company. That means you're not going to get over 50% of shares. That means the offer to buy 500 shares for $105 will still be open. That means the investor can either sell her share to you (for $105) or to your competitor (for $101). Clearly, it's in this investor's self-interest to sell to you.
Suppose the investor believes you will succeed in taking over the company. That means your competitor will not take over the company, and its $101 offer will not apply. That means that the new value of the shares will be $90, the offer you've made for the second half of shares. So they will get $90 if they don't sell to you. How much will they get if they do sell to you? They can expect half of their ten shares to go for $105 and half to go for $90; they will get a total of $97.50 per share. $97.50 is better than $90, so their incentive is to sell to you.
Suppose the investor believes you are right on the cusp of taking over the company, and her decision will determine the outcome. In that case, you have at most 499 shares. When the investor gives you her 10 shares, you will end up with 509 - 500 of which are $105 shares and 9 of which are $90 shares. If these are distributed randomly, investors can expect to make on average $104.73 per share, compared to $101 if your competitor buys the company.
Since all investors are thinking along these lines, they all choose to buy shares from you instead of your competitor. You pay out an average of $97.50 per share, and take over the company for $97,500, leaving $500 to spend on the victory party.
The stockholders, meanwhile, are left wondering why they just all sold shares for $97.50 when there was someone else who was promising them $101.
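(If you want to check the arithmetic, here is a quick sketch using the numbers above; the blended-price function is just bookkeeping for the two tiers.)

```python
# A quick check of the two-tiered offer arithmetic. Prices and share counts
# are the ones from the example above.
TIER1_PRICE, TIER1_SIZE = 105, 500
TIER2_PRICE = 90

def blended_price(shares_tendered):
    """Average price per share when `shares_tendered` shares are sold to you."""
    tier1 = min(shares_tendered, TIER1_SIZE)
    tier2 = max(shares_tendered - TIER1_SIZE, 0)
    return (tier1 * TIER1_PRICE + tier2 * TIER2_PRICE) / shares_tendered

print(blended_price(400))    # takeover fails: every tendered share fetches $105.00
print(blended_price(1000))   # everyone tenders: $97.50 per share, $97,500 in total
print(blended_price(509))    # the pivotal-investor case: about $104.73 per share
```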
The Hostile Takeover, Part II
Your next target is a small family-owned corporation that has instituted what they consider to be invincible protection against hostile takeovers. All decisions are made by the Board of Directors, who serve for life. Although shareholders vote in the new members of the Board after one of them dies or retires, Board members can hang on for decades. And all decisions about the Board, impeachment of its members, and enforcement of its bylaws are made by the Board itself, with members voting from newest to most senior.
So you go about buying up 51% of the stock in the company, and sure enough, a Board member retires and is replaced by one of your lackeys. This lackey can propose procedural changes to the Board, but they have to be approved by majority vote. And at the moment the other four directors hate you with a vengeance, and anything you propose is likely to be defeated 4-1. You need those other four windbags out of there, and soon, but they're all young and healthy and unlikely to retire of their own accord.
The obvious next step is to start looking for a good assassin. But if you can't find one, is there any way you can propose mass forced retirement to the Board and get them to approve it by majority vote? Even better, is there any way you can get them to approve it unanimously, as a big “f#@& you” to whoever made up this stupid system?
Yes. Your lackey proposes as follows: “I move that we vote upon the following: that if this motion passes unanimously, all members of the Board resign immediately and are given a reasonable compensation; that if this motion passes 4-1, the Director who voted against it must retire without compensation, and the four directors who voted in favor may stay on the Board; and that if the motion passes 3-2, then the two 'no' voters get no compensation and the three 'yes' voters may remain on the board and will also get a spectacular prize - to wit, our company's 51% share in your company divided up evenly among them.”
Your lackey then votes “yes”. The second newest director uses backward reasoning as follows:
Suppose that the vote were tied 2-2. The most senior director would prefer to vote “yes”, because then she gets to stay on the Board and gets a bunch of free stocks.
But knowing that, the second most senior director (SMSD) will also vote 'yes'. After all, when the issue reaches the SMSD, there will be one of the following cases:
1. If there is only one yes vote (your lackey's), the SMSD stands to gain from voting yes, knowing that will produce a 2-2 tie and make the most senior director vote yes to get her spectacular compensation. This means the motion will pass 3-2, and the SMSD will also remain on the board and get spectacular compensation if she votes yes, compared to a best case scenario of remaining on the board if she votes no.
2. If there are two yes votes, the SMSD must vote yes - otherwise, it will go 2-2 to the most senior director, who will vote yes, the motion will pass 3-2, and the SMSD will be forced to retire without compensation.
3. And if there are three yes votes, then the motion has already passed, and in all cases where the second most senior director votes “no”, she is forced to retire without compensation. Therefore, the second most senior director will always vote “yes”.
Since your lackey, the most senior director, and the second most senior director will always vote "yes", we can see that the other two directors, knowing the motion will pass, must vote "yes" as well in order to get any compensation at all. Therefore, the motion passes unanimously and you take over the company at minimal cost.
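(The same backward induction can be run mechanically. The sketch below assumes the utility ordering the argument implies - a seat plus the prize beats merely keeping a seat, which beats resigning with reasonable compensation, which beats being forced out with nothing - and fixes the lackey's opening 'yes'; the rest is brute force over the remaining votes.)

```python
# The board vote as a sequential game, votes cast from newest to most senior.
# The utility ordering is an assumption implied by the argument above.
PRIZE_AND_SEAT, SEAT, COMPENSATION, NOTHING = 3, 2, 1, 0

def outcome(votes):
    """Map five votes (True = yes), newest director first, to each director's utility."""
    yes = sum(votes)
    if yes == 5:
        return [COMPENSATION] * 5          # unanimous: everyone resigns, reasonably compensated
    if yes == 4:
        return [SEAT if v else NOTHING for v in votes]
    if yes == 3:
        return [PRIZE_AND_SEAT if v else NOTHING for v in votes]
    return [SEAT] * 5                      # motion fails: status quo, everyone keeps a seat

def play(votes_so_far=(True,)):
    """Backward induction; the lackey votes first and always votes yes."""
    i = len(votes_so_far)
    if i == 5:
        return votes_so_far, outcome(votes_so_far)
    best = None
    for vote in (True, False):             # the director at position i considers both options
        result = play(votes_so_far + (vote,))
        if best is None or result[1][i] > best[1][i]:
            best = result
    return best

votes, utilities = play()
print(votes)       # (True, True, True, True, True): the motion passes unanimously
print(utilities)   # everyone resigns with reasonable compensation
```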
The Dollar Auction
You are an economics professor who forgot to go to the ATM before leaving for work, and who has only $20 in your pocket. You have a lunch meeting at a very expensive French restaurant, but you're stuck teaching classes until lunchtime and have no way to get money. Can you trick your students into giving you enough money for lunch in exchange for your $20, without lying to them in any way?
Yes. You can use what's called an all-pay auction, in which several people bid for an item, as in a traditional auction, but everyone pays their bid regardless of whether they win or lose (in a common variant, only the top two bidders pay their bids).
Suppose one student, Alice, bids $1. This seems reasonable - paying $1 to win $20 is a pretty good deal. A second student, Bob, bids $2. Still a good deal if you can get a twenty for a tenth that amount.
The bidding keeps going higher, spurred on by the knowledge that getting a $20 for a bid of less than $20 would be pretty cool. At some point, maybe Alice has bid $18 and Bob has bid $19.
Alice thinks: “What if I raise my bid to $20? Then certainly I would win, since Bob would not pay more than $20 to get $20, but I would only break even. However, breaking even is better than what I'm doing now, since if I stay where I am Bob wins the auction and I pay $18 without getting anything.” Therefore Alice bids $20.
Bob thinks “Well, it sounds pretty silly to bid $21 for a twenty dollar bill. But if I do that and win, I only lose a dollar, as opposed to bowing out now and losing my $19 bid.” So Bob bids $21.
Alice thinks “If I give up now, I'll lose a whole dollar. I know it seems stupid to keep going, but surely Bob has the same intuition and he'll give up soon. So I'll bid $22 and just lose two dollars...”
It's easy to see that the bidding could in theory go up with no limits but the players' funds, but in practice it rarely goes above $200.
...yes, $200. Economist Max Bazerman claims that of about 180 such auctions, seven have made him more than $100 (ie $50 from both players) and his highest take was $407 (ie over $200 from both players).
In any case, you're probably set for lunch. If you're not, take another $20 from your earnings and try again until you are - the auction gains even more money from people who have seen it before than it does from naive bidders (!). Bazerman, for his part, says he's made a total of $17,000 from the exercise.
At that point you're starting to wonder why no one has tried to build a corporation around this, and unsurprisingly, the online auction site Swoopo appears to be exactly that. More surprisingly, they seem to have gone bankrupt last year, suggesting that maybe H.L. Mencken was wrong and someone has gone broke underestimating people's intelligence.
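(To watch the trap close in slow motion, here is a toy simulation of the escalation argument. The $50 wallets, the $1 increments, and the purely myopic comparison - 'is one more bid better than eating my standing bid?' - are assumptions for illustration, not a claim about optimal play.)

```python
# Two bidders with $50 wallets, each reasoning myopically: losing my standing
# bid for nothing is worse than bidding $1 more and (maybe) winning the $20.
PRIZE = 20
WALLET = 50

bids = {"Alice": 1, "Bob": 0}   # Alice opens the bidding at $1
high, low = "Alice", "Bob"      # current high bidder, current losing bidder

while True:
    next_bid = bids[high] + 1
    loss_if_quit = bids[low]                  # the losing bidder's bid is sunk either way
    loss_if_raise_and_win = next_bid - PRIZE  # negative means a profit
    if next_bid > WALLET or loss_if_raise_and_win >= loss_if_quit:
        break
    bids[low] = next_bid                      # the losing bidder raises...
    high, low = low, high                     # ...and becomes the high bidder

print(f"{high} wins at ${bids[high]}; {low} walks away having paid ${bids[low]}")
print(f"The auctioneer collects ${bids['Alice'] + bids['Bob']} for a ${PRIZE} bill")
```

Under that myopic rule the comparison never tells anyone to stop, so the bidding only ends when someone hits the wallet limit - which is exactly the point.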
The Bloodthirsty Pirates
You are a pirate captain who has just stolen $17,000, denominated entirely in $20 bills, from a very smug-looking game theorist. By the Pirate Code, you as the captain may choose how the treasure gets distributed among your men. But your first mate, second mate, third mate, and fourth mate all want a share of the treasure, and demand on threat of mutiny the right to approve or reject any distribution you choose.You expect they'll reject anything too lopsided in your favor, which is too bad, because that was totally what you were planning on.
You remember one fact that might help you - your crew, being bloodthirsty pirates, all hate each other and actively want one another dead. Unfortunately, their greed seems to have overcome their bloodlust for the moment, and as long as there are advantages to coordinating with one another, you won't be able to turn them against their fellow sailors. Doubly unfortunately, they also actively want you dead.
You think quick. “Aye,” you tell your men with a scowl that could turn blood to ice, “ye can have yer votin' system, ye scurvy dogs” (you're that kind of pirate). “But here's the rules: I propose a distribution. Then you all vote on whether or not to take it. If a majority of you, or even half of you, vote 'yes', then that's how we distribute the treasure. But if you vote 'no', then I walk the plank to punish me for my presumption, and the first mate is the new captain. He proposes a new distribution, and again you vote on it, and if you accept then that's final, and if you reject it he walks the plank and the second mate becomes the new captain. And so on.”
Your four mates agree to this proposal. What distribution should you propose? Will it be enough to ensure your comfortable retirement in Jamaica full of rum and wenches?
Yes. Surprisingly, you can get away with proposing that you get $16,960, your first mate gets nothing, your second mate gets $20, your third mate gets nothing, and your fourth mate gets $20 - and you will still win 3 -2.
The fourth mate uses backward reasoning like so: Suppose there were only two pirates left, me and the third mate. The third mate wouldn't have to promise me anything, because if he proposed all $17,000 for himself and none for me, the vote would be 1-1, and according to the original rules a tie passes. So if it ever came down to just the two of us, I would get nothing.
But suppose there were three pirates left, me, the third mate, and the second mate. Then the second mate would be the new captain, and he could propose $16,980 for himself, $0 for the third mate, and $20 for me. If I vote no, then it reduces to the previous case in which I get nothing. Therefore, I should vote yes and get $20. Therefore, the final vote is 2-1 in favor.
But suppose there were four pirates left: me, the third mate, the second mate, and the first mate. Then the first mate would be the new captain, and he could propose $16,980 for himself, $20 for the third mate, $0 for the second mate, and $0 for me. The third mate knows that if he votes no, this reduces to the previous case, in which he gets nothing. Therefore, he should vote yes and get $20. Therefore, the final vote is 2-2, and ties pass.
(He might also propose $16,980 for himself, $0 for the second mate, $0 for the third mate, and $20 for me. But since he knows I am a bloodthirsty pirate who, all else being equal, wants him dead, I would vote no; I could get the same $20 from the second mate in the next round and make the first mate walk the plank in the bargain. Therefore, he would offer the $20 to the third mate.)
But in fact there are five pirates left: me, the third mate, the second mate, the first mate, and the captain. The captain has proposed $16,960 for himself, $20 for the second mate, and $20 for me. If I vote no, this reduces to the previous case, in which I get nothing. Therefore, I should vote yes and get $20.
(The captain gives the $20s to the second and fourth mates rather than to the third for a similar reason to the one given in the previous example - the third mate would get $20 from the first mate anyway, and all else being equal the pirates would prefer to watch the captain die.)
The second mate thinks along the same lines and realizes that if he votes no, this reduces to the case with the first mate, in which the second mate also gets nothing. Therefore, he too votes yes.
Since you, as the captain, obviously vote yes as well, the distribution passes 3-2. You end up with $16,960, and your crew, who were so certain of their ability to threaten you into sharing the treasure, each end up with either a single $20 or nothing.
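(The whole induction fits in a few lines of code. The sketch below makes the assumptions explicit: each pirate cares about money first and, with money equal, prefers to watch more-senior pirates walk the plank, so a bribe has to strictly beat a pirate's payoff in the next round; and a proposal passes when at least half of the surviving pirates, proposer included, vote for it.)

```python
# Backward induction for the pirate game, with bribes paid in $20 units.
TREASURE = 17000
UNIT = 20

def split(n_pirates):
    """Return the proposer's optimal allocation, most senior surviving pirate first."""
    if n_pirates == 1:
        return [TREASURE]
    continuation = split(n_pirates - 1)       # payoffs if the proposer walks the plank
    votes_needed = (n_pirates + 1) // 2       # half (rounded up) passes the proposal
    bribes_needed = votes_needed - 1          # the proposer votes for herself
    # Bribe the pirates who are cheapest to win over: each needs one $20 unit
    # more than she would get after the proposer is thrown overboard.
    by_cheapness = sorted(range(len(continuation)), key=lambda j: continuation[j])
    allocation = [0] * (n_pirates - 1)
    for j in by_cheapness[:bribes_needed]:
        allocation[j] = continuation[j] + UNIT
    return [TREASURE - sum(allocation)] + allocation

print(split(5))   # [16960, 0, 20, 0, 20]: captain, then first through fourth mates
```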
The Prisoners' Dilemma, Redux
This sequence previously mentioned the popularity of Prisoners' Dilemmas as gimmicks on TV game shows. In one program, Golden Balls, contestants do various tasks that add money to a central “pot”. By the end of the game, only two contestants are left, and are offered a Prisoners' Dilemma situation to split the pot between them. If both players choose to “Split”, the pot is divided 50-50. If one player “Splits” and the other player “Steals”, the stealer gets the entire pot. If both players choose to “Steal”, then no one gets anything. The two players are allowed to talk to each other before making a decision, but like all Prisoner's Dilemmas, the final choice is made simultaneously and in secret.
You are a contestant on this show. You are actually not all that evil - you would prefer to split the pot rather than to steal all of it for yourself - but you certainly don't want to trust the other guy to have the same preference. In fact, the other guy looks a bit greedy. You would prefer to be able to rely on the other guy's rational self-interest rather than on his altruism. Is there any tactic you can use before the choice, when you're allowed to communicate freely, in order to make it rational for him to cooperate?
Yes. In one episode of Golden Balls, a player named Nick successfully meta-games the game by transforming it from the Prisoner's Dilemma (where defection is rational) to the Ultimatum Game (where cooperation is rational).
Nick tells his opponent: “I am going to choose 'Steal' on this round.” (He then immediately pressed his button; although the show hid which button he pressed, he only needed to demonstrate that he had committed and his mind could no longer be changed) “If you also choose 'Steal', then for certain neither of us gets any money. If you choose 'Split', then I get all the money, but immediately after the game, I will give you half of it. You may not trust me on this, and that's understandable, but think it through. First, there's no less reason to think I'm trustworthy than if I had just told you I pressed 'Split' to begin with, the way everyone else on this show does. And second, now if there's any chance whatsoever that I'm trustworthy, then that's some chance of getting the money - as opposed to the zero chance you have of getting the money if you choose 'Steal'.”
Nick's evaluation is correct. His opponent can either press 'Steal', with a certainty of getting zero, or press 'Split', with a nonzero probability of getting his half of the pot depending on Nick's trustworthiness.
But this solution is not quite perfect, in that one can imagine Nick's opponent being very convinced that Nick will cheat him, and deciding he values punishing this defection more than the tiny chance that Nick will play fair. That's why I was so impressed to see cousin_it propose what I think is an even better solution on the Less Wrong thread on the matter:
This game has multiple Nash equilibria and cheap talk is allowed, so correlated equilibria are possible. Here's how you implement a correlated equilibrium if your opponent is smart enough:
"We have two minutes to talk, right? I'm going to ask you to flip a coin (visibly to both of us) at the last possible moment, the exact second where we must cease talking. If the coin comes up heads, I promise I'll cooperate, you can just go ahead and claim the whole prize. If the coin comes up tails, I promise I'll defect. Please cooperate in this case, because you have nothing to gain by defecting, and anyway the arrangement is fair, isn't it?"
This sort of clever thinking is, in my opinion, the best that game theory has to offer. It shows that game theory need not be only a tool of evil for classical figures of villainy like bloodthirsty pirate captains or corporate raiders or economists, but can also be used to create trust and ensure cooperation between parties with common interests.
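(As a sanity check, here is a small script that tests cousin_it's coin-flip arrangement against the Golden Balls payoffs described above, with the pot normalized to 1: neither player can gain by ignoring the coin, and each expects half the pot.)

```python
# Checking the coin-flip arrangement against the Golden Balls payoffs.
SPLIT, STEAL = "split", "steal"

def payoffs(a, b):
    """Payoffs (player 1, player 2) with the pot normalized to 1."""
    if a == SPLIT and b == SPLIT:
        return 0.5, 0.5
    if a == STEAL and b == SPLIT:
        return 1.0, 0.0
    if a == SPLIT and b == STEAL:
        return 0.0, 1.0
    return 0.0, 0.0                      # both steal: nobody gets anything

# The coin is a public signal: on heads player 1 splits and player 2 steals,
# on tails the roles reverse.
signals = {"heads": (SPLIT, STEAL), "tails": (STEAL, SPLIT)}

for coin, (tell_1, tell_2) in signals.items():
    follow = payoffs(tell_1, tell_2)
    deviate_1 = payoffs(STEAL if tell_1 == SPLIT else SPLIT, tell_2)[0]
    deviate_2 = payoffs(tell_1, STEAL if tell_2 == SPLIT else SPLIT)[1]
    print(coin, "follow:", follow, "unilateral deviations give:", deviate_1, deviate_2)

# Expected payoff from following the coin: half the pot for each player.
print([sum(payoffs(*signals[c])[i] for c in signals) / 2 for i in (0, 1)])
```

The player told to split is only weakly deterred from stealing - stealing against a stealer pays the same as splitting against one - and, as Bundle_Gerbe notes in the comments below, that equal payoff is exactly what separates Golden Balls from a true Prisoner's Dilemma.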
107 comments
Comments sorted by top scores.
comment by Wei Dai (Wei_Dai) · 2012-07-24T19:33:59.343Z · LW(p) · GW(p)
The three examples from The Art of Strategy don't seem to measure up to the book's reputation.
The Evil Plutocrat
Meanwhile, in their meeting, the Republicans think the same thing. The vote ends with all members of Congress supporting your bill, and you don't end up having to pay any money at all.
This isn't a Nash equilibrium. If I'm a congressman and I expect everyone else to vote for the bill, then I should vote against it since in that case doing so has no bad financial consequences and I get to represent my constituents.
The Hostile Takeover
Suppose the investor believes you will succeed in taking over the company. That means your competitor will not take over the company, and its $101 offer will not apply. That means that the new value of the shares will be $90, the offer you've made for the second half of shares.
It was mentioned that the stock is currently worth $100 on the open market, so why should I sell my shares to you for $90 instead of selling them on the open market? Is it assumed that there aren't legal protections for minority shareholders so whoever buys 50% of the shares can plunder the company and diminish its value for the other shareholders?
The Hostile Takeover, Part II
if the motion passes 3-2, then the two 'no' voters get no compensation and the three 'yes' voters may remain on the board and will also get a spectacular prize - to wit, our company's 51% share in your company divided up evenly among them.
Presumably the board has no power to split your shares among themselves, so what you're doing is making a promise that if the board votes that way, then you will split your shares among them. But if the board members reason backwards, they would think that if they did vote that way, you'd have no reason to actually fulfill your promise. So what the author seems to be doing is exempting the "evil villain" from backwards reasoning (or equivalently, giving him alone the power of making credible promises).
Replies from: Yvain↑ comment by Scott Alexander (Yvain) · 2012-07-24T19:46:48.240Z · LW(p) · GW(p)
You're right; these seem to be more parables on what happens if one side has strong ability to coordinate among itself and keep precommitments and the other side does not.
In Evil Plutocrat, assuming the party whips are very good at their jobs and can enforce their decisions, I think the Nash equilibrium (correct me if I'm wrong) is for the Democrats to make 51% of Democrats vote for the bill, and the Republicans to make 51% of Republicans vote for the bill. That makes the bill pass (and therefore neither party has their opponents get money) but still allows as many Congressmen as possible to represent their constituents. I've edited the story above to reflect this. On the other hand, if the evil plutocrat valued unanimity, I think he could get away with saying "I will pay the money to whichever party gives me less support, unless both parties give me 100% support in which case no one gets the money". This would be riskier (he might have to pay up if one Congressman defected) but in theory should be able to win him unanimity.
In Hostile Takeover, although you're right, doesn't that just pass the problem on to whoever you sell it to? At some point the shareholders either have to decide not to sell the company (thus passing up the deal to get $101 for their stock) or sell the company to one of the two bidders. If the shareholders can coordinate well enough to not take the two-tiered offer, they can coordinate well enough to just all take the $101 offer, which is superior to selling on the open market. This problem looks like what happens if the shareholders can't coordinate.
I agree that Hostile Takeover Part II only works if we assume that you have much stronger powers of precommitment than anyone on the board, and so should be understood as a parable about what happens when one party can precommit better than another. If it were important to rescue the story from this loophole, I guess we could imagine a situation where any motion passed with the consent of someone who owned shares automatically transferred those shares, and so your lackey's vote on Hostile Takeover allows those shares to be transferred automatically?
I rewrote / adapted some of these so they wouldn't be outright plagiarism, and I don't have the book with me but it's entirely possible the errors are mine and not theirs.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2012-07-24T23:43:22.876Z · LW(p) · GW(p)
It occurs to me that in Evil Plutocrat, you can get what you want in a simpler way, by just going to the majority party and saying "unless you make the bill pass, I will donate a large amount of money to the other party" which shows that what makes the evil plutocrat powerful is not his clever use of game theory, but again his unique ability to make credible precommitments.
In Hostile Takeover, although you're right, doesn't that just pass the problem on to whoever you sell it to? At some point the shareholders either have to decide not to sell the company (thus passing up the deal to get $101 for their stock) or sell the company to one of the two bidders.
You can keep the shares and its associated stream of future dividends, which presumably is worth $100 in present value. (If the 50% owner intentionally does something to reduce the value of future dividends, he would be violating minority shareholder rights, which is why I asked whether we're assuming that such rights don't exist.)
You're right; these seem to be more parables on what happens if one side has strong ability to coordinate among itself and keep precommitments and the other side does not.
My problem is that these examples seem designed (but perhaps not consciously) to oversell the power of game theoretic thinking, by obfuscating the fact that the side that appears to be winning through clever use of game theory is also given other strong and unrealistic advantages. Unless maybe the author intended them to be puzzles, where we're supposed to figure out what element hidden in the setup is responsible for the counterintuitive/unrealistic outcomes?
Replies from: PetjaY↑ comment by PetjaY · 2016-08-27T17:43:19.066Z · LW(p) · GW(p)
Also hidden in Hostile Takeover is that, on those assumptions (the other buyer only buys if he gets all the shares, and your shares are worth less than $90 if neither buys them), you could just buy 1 share for $102 and get the rest for $90 - no need for that complexity there either.
comment by MBlume · 2012-07-24T23:32:06.520Z · LW(p) · GW(p)
Note that I actually attempted to run a dollar auction at Benton and wound up selling my $20 for $1 to the first bidder. If memory serves, either the people in the room drew lots for the right to bid, or the bidder agreed to share profits with the others.
Replies from: TrE, Xachariah, Kaj_Sotala, Clippy↑ comment by TrE · 2012-07-25T18:18:48.760Z · LW(p) · GW(p)
I once bought a Euro for 99 cents as the first bidder; no one else had anything to gain by bidding more.
Replies from: Zvi↑ comment by Zvi · 2012-07-28T16:07:28.612Z · LW(p) · GW(p)
I'd bet that on average that play loses, because you need a 99% success rate, and people will do random crazy things more often than that given how many people have the opportunity...
Replies from: TrE↑ comment by TrE · 2012-07-28T16:21:17.059Z · LW(p) · GW(p)
Indeed. I thought about this as well when I wrote the comment (not so much when I actually played), but decided not to write that down. That considered, I was very surprised about the +6 karma for the grandparent and the fact that nobody had brought this up yet. Well done!
This kind of bet is the reason why especially evil game masters play the game with the slight variation that bids can only happen in small increments, similar to an English auction where the auctioneer puts forth a price and asks if any one person is willing to pay that price, and, if it's more than one person, raises the price.
↑ comment by Xachariah · 2012-07-25T06:03:29.242Z · LW(p) · GW(p)
Upvoted for trying to do it in real life.
It's nice to see coordination between homo-sapiens beating homo-economicus.
Replies from: wedrifid↑ comment by wedrifid · 2012-07-25T07:06:28.270Z · LW(p) · GW(p)
It's nice to see coordination between homo-sapiens beating homo-economicus.
Not exactly. If there was a homo-economicus there he would have kicked their ass.
Replies from: Xachariah↑ comment by Xachariah · 2012-07-25T07:20:27.139Z · LW(p) · GW(p)
And if there were two homo-economicii there, they would have each given MBlume infinity dollars until one of them ran out of cash.
There's a reason humans evolved not to fall into those traps.
Edit: Whoops, brain-farted that one. Homo economici would play a mixed strategy, with a NE of $0 expected gain. Cooperating humans play a strategy where they win $19 every time.
Replies from: endoself, Mestroyer, wedrifid↑ comment by endoself · 2012-07-25T20:32:47.554Z · LW(p) · GW(p)
What? No they wouldn't. Giving MBlume infinity dollars is not a Nash equilibrium.
Replies from: MBlume↑ comment by MBlume · 2012-07-26T02:20:30.915Z · LW(p) · GW(p)
It isn't? Darn =(
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-08-02T04:32:27.623Z · LW(p) · GW(p)
I don't see how any one player can do better in a world where MBlume gets infinity dollars.
Replies from: None↑ comment by Mestroyer · 2012-07-29T16:34:32.958Z · LW(p) · GW(p)
It's not CDT-rational to bid one dollar for 20 dollars if there is a high probability that others will be bidding as well, because you are unlikely to actually make that $19 profit. You are likely to actually get $0 for your $1. And if you know in advance that you would make the decision to pour more money in when you are being outbid, then the expected utility of bidding $1 is even lower, because you will be paying even more for nothing.
Replies from: FourFire↑ comment by Kaj_Sotala · 2012-07-25T13:54:26.009Z · LW(p) · GW(p)
Now I feel proud of having lived with the awesome folks at Benton.
...just spent a few minutes using Google Streetview to virtually walk from the nearby park back to Benton house. It's great to be home, but I do miss that place sometimes. :-)
comment by Zvi · 2012-07-25T14:21:23.597Z · LW(p) · GW(p)
I believe the real world results of these attempts, if they were attempted this brazenly, are as follows:
The Evil Plutocrat: Democrats and Republicans coordinate and agree to vote down your bill. Voting is simultaneous and public, and you can change it, so even these two groups can trust each other here.
The Hostile Takeover: Leave aside the question of whether your taking over is about to destroy the stock price, and assume it does, and assume the SEC doesn't come knocking at your door. The other guy sees what you're doing, and adjusts his offer slightly to counter yours, since he has more money than you do, and a similar threat can get him all the shares.
The Hostile Takeover II: Again assume what you're doing is legal. The board members vote themselves another set of giant raises, and laugh at you for buying stock that will never see a dividend, since you've obviously violated all the norms involved. That, or they sign a contract between themselves and take all your stock.
The Dollar Auction: It's proven to work as a tax on stupidity, especially if others aren't allowed to talk (which prevents MBlume's issue of coordination), with only rare backfires. Of course, the other cases are "look what happens when everyone else thinks things through" and this one is "look what happens when people don't think things through..."
The Bloodthirsty Pirates: The first mate turns to the others, says "I propose we kill this guy and split the money evenly." They all kill you. Then maybe they kill each other, and maybe they don't. Alternatively, they've already coordinated. Either way, you're clearly trying to cheat them, and pirates know what to do with cheaters.
The Prisoner's Dilemma Redux: That was gorgeous to watch that one time. It likely won't work again. The coin is another great trick that they'd likely outlaw after it was used once, since that one does work repeatedly.
In general, this is the "I get to set the rules of the game and introduce twists, and then everyone else has to use CDT without being able to go outside the game or coordinate in any way" school of exploitative game theory, with the Dollar Auction a case where you pick out the few people who think badly, often a quality strategy.
Replies from: andrea-mulazzani, trippdup↑ comment by Emiya (andrea-mulazzani) · 2020-10-08T14:57:29.851Z · LW(p) · GW(p)
The evil plutocrat seems to work fine in real life.
For corporations, it is more convenient to give money to party A so that it beats party B, if party B doesn't work in their interest.
This isn't an unrealistic precommitment. Even if party B doesn't try to directly attack corporate profits, voting no on proposals that would greatly benefit those profits still makes throwing money at party A the right move for corporations. There's basically no need to even make an open threat, since everyone involved figures it out on his own.
To avoid parties getting annoyed at being blackmailed and coordinating to take them down, corporations still give out money as a reward for active compliance, but the example is right in saying that they would have to throw out a lot more money if they could only pay for compliance and couldn't threaten to pay the other side as retribution for rebellion.
comment by wedrifid · 2012-07-25T02:54:00.588Z · LW(p) · GW(p)
In one episode of Golden Balls, a player named Nick successfully meta-games the game by transforming it from the Prisoner's Dilemma (where defection is rational) to the Ultimatum Game (where cooperation is rational)
One sentence, three false claims about game theory.
- The game did not begin as a Prisoner's Dilemma. The only thing that determines a Prisoner's Dilemma is the payoffs of the game and Golden Balls simply doesn't have the payoffs that the Prisoner's Dilemma has and so isn't one.
- If you don't use scare quotes around "rational" when you claim that irrational things are rational then you are wrong. Mind you, this isn't a Prisoner's Dilemma and the reason that defection is always "rational" (as claimed by CDT) in the Prisoner's Dilemma is that defecting gives better payoffs than cooperating when the other defects (or cooperates). This doesn't apply in golden balls. The reasoning is different and far less straightforward. That makes this error rather moot.
- Nick didn't convert the problem into an Ultimatum Game. He converted it from Golden Balls before he uttered a few arbitrary verbal symbols to Golden Balls after he uttered a few verbal symbols. The game theoretic meaning of those words is nothing; they are no signal whatsoever. We can see this in that he in fact is lying about his choice and chooses split. This wouldn't be an ultimatum game even if the decisions were not simultaneous. In this case they are simultaneous - Nick chooses his ball at the end, just like Ibrahim, and so even uses future tense when describing what he will do. Ibrahim could just as credibly say "I'm going to choose steal unless I believe you will choose split" and his words carry the same (negligible) weight.
A sequence which teaches those new to game theory to make exactly the kind of annoying mistakes that those new to game theory typically make is worse than no sequence at all.
Replies from: TrE↑ comment by TrE · 2012-07-25T19:02:48.705Z · LW(p) · GW(p)
As far as I know, this version is a form of prisoner's dilemma: Payoff(C,D) ≤ Payoff(D,D) < Payoff(C,C) < Payoff(D,C). Normally, Payoff(D,D) is > (strictly greater than) Payoff(C,D), not just (equal or greater than), but it's still reasonable to call this game a weak form of prisoner's dilemma, as they share most characteristics.
Nothing to say here, but I apparently have to put a "2." in if I want the "3." from below to be represented properly.
Technically you're right, though in this world of evolution and repeated social interaction, Nick did change the game by gambling not alone with money, but with his trustworthiness as a benevolent human being as well. Nick would look like a total douche to most people who get to know what he was doing, including his friends and family, if he chose steal and took the money all for himself. By making the air pressure oscillate in a certain way, Nick made it long-term unfavourable for him to steal the money completely, so the best he could do from there on was probably to either split or to steal and then split. From this perspective, he in fact did change the payoff function.
↑ comment by Bundle_Gerbe · 2012-07-25T20:25:30.961Z · LW(p) · GW(p)
The specific problem with calling the last game a "prisoner's dilemma" is that someone learning about game theory from this article may well remember from it, "there is a cool way to coordinate on the prisoner's dilemma using coin flips based on correlated equilibria" then be seriously confused at some later point.
Replies from: TrE↑ comment by TrE · 2012-07-25T20:47:15.196Z · LW(p) · GW(p)
Of course, by changing the payoff matrix, Nick also changed the game, so after him putting in some more of his stakes, it wasn't Golden Balls / PD anymore but a game which had the structure Nick favoured. What is to be learned from this article is how to design games to your own profit - whether you are watching from the outside or playing from the inside.
Replies from: TrE↑ comment by TrE · 2012-07-26T06:17:53.536Z · LW(p) · GW(p)
Apparently I didn't quite understand what you wanted to tell me - I'm sorry! Yes, as an introduction to game theory, this indeed is a problem. Example #6 is a bit out of place for that, as game theory here didn't work in practice in the sense that it didn't make accurate predictions.
↑ comment by wedrifid · 2012-07-25T22:26:53.051Z · LW(p) · GW(p)
Technically you're right, though in this world of evolution and repeated social interaction
In "this world of repeated social interaction" nothing is a True Prisoner's Dilemma. Or a true One Shot Ultimatum Game or one shot game of any kind. This post is about bloodthirsty pirates offering only some of their minions $20 out of $17,000, others $0 and he is still confident that they will not overthrow him despite actively wanting him dead. The "oh, but reputation effects" objection is absurdly out of place.
Replies from: TrE↑ comment by TrE · 2012-07-26T05:20:05.456Z · LW(p) · GW(p)
Then I have to raise the question why one should bother to discuss models that don't reflect reality well enough to make accurate predictions, in this case a real-world example.
The post consists of 6 examples. The first three are pure theory and wouldn't stand a chance in practice. It's however insightful to think about them to realize how powerful the designer of a game, in theory, can be.
The fourth example is a game that has been tried in practice, with apparently highly profitable results. Here, game-theory with a "rational actor model" delivers accurate predictions (that is, if every player is, in a Bayesian Game, confident enough that his opponent will eventually not bet more money, one SPE is to always invest more, correct me if I'm wrong). Thus, it's in this case fine to apply game theory to the real world, as it works in most cases, under certain assumptions.
The fifth example, as you noted, stems from the realm of fiction and is useful for the pondering of game theory, but not useful in practice.
The last example is, like the fourth example, something that has actually happened. However, in this case, after Nick has uttered a few words that seem meaningless from a game theoretic point of view, game theory (with the payoff matrix for Golden Balls and the "rational actor model") no longer makes accurate predictions. This means that perhaps, we should modify our model in order to get out a better prediction. One way to do so is to change the payoff matrix, another is to see the game as an instance of repeated PD. Also, one could choose to model the people with a rule-based or behavioural model - If my opponent has openly, in public, announced that he wants to split fairly if I do X, then I do X.
What's left is that, while game theory is useful in modelling the real world at times, at times it is not. And when it is not, in my opinion one should accept this fact and use a different model.
Another note about "rational actor model" vs. "rule-based model" and "behavioural model":
The rational actor model often applies in the real world if the stakes are high, the game is repeated several times, it's a group decision and/or the choices to be made are fairly easy. It says that people have a goal and optimize for it. The objective can be money, but people are also allowed to have a different payoff function. It is, of course, not a model of rational people in the sense that they always win - more like academia-rational.
Behavioural models attempt to model people based on how they behaved before. These models take into account biases that we may have.
Rule-based models are based on simple rules that agents follow. These are often easy to write down, but can be exploited.
comment by Bundle_Gerbe · 2012-07-24T17:54:46.099Z · LW(p) · GW(p)
Calling the last game a "Prisoner's Dilemma" is a little misleading in this context, as the critical difference from the standard Prisoner's Dilemma (the fact that the payoff for (C,D) is the same as for (D,D)) is exactly what makes cousin_it's (and Nick's) solution work. A small incentive to defect if you know your opponent is defecting defeats a strategy based on committing to defect.
Replies from: cousin_it
comment by Pentashagon · 2012-07-24T21:44:20.944Z · LW(p) · GW(p)
Aren't most of these evil strategies effectively combated by good strategies? For instance, the Republicans could promise 1/2 of a stupendous amount of money to the Democrats for all of them voting no and then have one Republican vote yes. The plutocrat pays 1 stupendous amount of money to the Republicans who split it with the Democrats. Shouldn't the business offering $101 a share just offer $105.01 for the first 500 shares and $96.99 for the other half, then redistribute them like the evil offer? Shouldn't a family business establish a managed trust instead of relying on such a weak board of directors? Or not sell 51% of their shares to anyone else? Shouldn't the first bidder for $20 bid $19.99 and slowly build up a wealth in pennies? Shouldn't bloodthirsty pirates invest the $17000 in faster ships and more cannons?
Replies from: Matvey_Ezhov, Xachariah↑ comment by Matvey_Ezhov · 2012-08-01T19:26:44.864Z · LW(p) · GW(p)
Even better, if party A wants to play the game, they could announce to party B that they will vote for the bill with 26% of the total (assuming 50%+ is what it takes to pass the vote) and split the money in case the bill doesn't pass. This way they can be sure that party B wouldn't outvote them without passing the bill and ruining the game.
↑ comment by Xachariah · 2012-07-25T06:30:48.371Z · LW(p) · GW(p)
All of them are defeated by the opposition coordinating and precommitting.
In democrats v republicans you can also do it without cross-aisle coordination. Whoever had a majority could just say unilaterally that 0 members will vote for the bill and ensure that it stays dead. The plutocrat would have no incentive to ever offer the deal in the first place, because it would always fail and he'd end up wasting a huge amount of money.
The family business could instead vote 4-to-1 to change the rules themselves. I mean, if the board is allowed to pass motions that divvy up 51% of other people's shares, they could just pass a motion to take that money back anyhow. Or, since they're a family they could just coordinate to vote 3-2 and split the money, gain back majority, and re-vote the two board members back.
As MBlume noted coordination obliterates the pay-all auction (or all auction systems, really). And as you note, a strong solo strategy wins even without coordination.
Finally, all of the pirates can get greater shares via precommitment. There's a reason humans evolved rejection in the ultimatum game, after all.
Replies from: MBlume↑ comment by MBlume · 2012-07-30T23:33:20.830Z · LW(p) · GW(p)
Trustworthy coordination obliterates the pay-all auction. Anyone in the room can still defect and bid $2, and the question is whether you let them get away with it, or whether you commit to chasing them in a destructive bidding war.
Replies from: Xachariah↑ comment by Xachariah · 2012-07-31T00:50:18.053Z · LW(p) · GW(p)
The group either coordinated or it didn't. Trustworthy only comes into play when you're looking at a human participant's expectations, not the actions they're taking.
Also, 'letting them get away with it' isn't necessarily a binary question. Unless you will literally never see them again, you punish them outside the bidding system, and they know you will punish them outside the bidding system, so they do not defect unless they don't trust your willingness or ability to punish them sufficiently.
Replies from: MBlume↑ comment by MBlume · 2012-07-31T01:15:15.032Z · LW(p) · GW(p)
Also, 'letting them get away with it' isn't necessarily a binary question. Unless you will literally never see them again, you punish them outside the bidding system, and they know you will punish them outside the bidding system, so they do not defect unless they don't trust your willingness or ability to punish them sufficiently.
And this was a group of housemates, so...
Replies from: khafra
comment by Grognor · 2012-07-24T22:12:35.765Z · LW(p) · GW(p)
I really wish you would have put a disclaimer on these posts the likes of:
One of the assumptions The Art of Strategy makes is that rational agents use causal decision theory. This is not actually true, but I'll be using their incorrect use of "rationality" in order to make you uncomfortable.
Anyway,
Nick successfully meta-games the game by transforming it from the Prisoner's Dilemma (where defection is rational) [...]
this is the problem with writing out your whole sequence before submitting even the first post. You make the later posts insufficiently responsive to feedback and make up poor excuses for not changing them.
Edit: Why yes, wedrifid, there was. Fixed.
Replies from: wedrifid↑ comment by wedrifid · 2012-07-24T23:30:52.699Z · LW(p) · GW(p)
this is the problem with writing out your whole sequence before submitting even the first post. You make the later posts insufficiently responsive to feedback and make up ) for not changing them.
Is there a malformed link in there where the ")" appears?
comment by gwern · 2012-07-24T19:02:41.643Z · LW(p) · GW(p)
At that point you're starting to wonder why no one has tried to build a corporation around this, and unsurprisingly, the online auction site Swoopo appears to be exactly that. More surprisingly, they seem to have gone bankrupt last year, suggesting that maybe H.L. Mencken was wrong and someone has gone broke underestimating people's intelligence.
Actually, what happens is that most users realize that they are being taken for a ride and quit; without enough of a flow of naive users, the business is no longer profitable. See the 3 papers in http://www.gwern.net/Sunk%20cost#fn35
comment by gjm · 2013-01-31T11:27:09.048Z · LW(p) · GW(p)
I have conducted a dollar auction (in a small group of people, many of them very clever). What happened was rather a surprise. I was selling a £10 note. One bidder offered £0.01. And then ... another offered £10.01, just enough to guarantee that continuing was a strictly-inferior option for the person who had already bid. Self-sacrificing sabotage!
I'd expected one of three outcomes: (1) bidding up to silly levels, for the usual naive reason; (2) no bids or very few bids, after seeing that the obvious naive thing would end up losing a pile of money; (3) bidding up to somewhere around £5 and stopping randomly, some variant of which is probably the optimal mixed strategy when the thing being bid for is worth essentially the same to everyone. (Not exactly the same even though payment is in the same currency, because of variations in risk aversion.)
comment by Matvey_Ezhov · 2012-08-01T19:18:18.983Z · LW(p) · GW(p)
In "The Evil Plutocrat" all parties would probably cooperate to vote the bill down, since otherwise they will be sending a message that they could be played in that (and probably other) fashion, which will deminish their future profits, as other lobbyists would try to play them instead of bribing.
Replies from: thrawnca↑ comment by thrawnca · 2016-08-30T05:19:06.667Z · LW(p) · GW(p)
It should also be possible to milk the scenario for publicity: "Our opponents sold out to the evil plutocrat and passed horrible legislation so he would bankroll them!"
I wish I were more confident that that strategy would actually work...
comment by Bundle_Gerbe · 2012-07-24T19:18:48.137Z · LW(p) · GW(p)
For the dollar auction, note that according to the wikipedia page for All-pay auction the auction does have a mixed-strategy Nash equilibrium in which the players' expected payoff is zero and the auctioneer's expected revenue is $20. So this breaks the pattern of the other examples, which show how Nash equilibrium game theory can be exploited to devious ends when facing rational maximizers.
The dollar auction is an interesting counterpoint to games like the Traveler's Dilemma in which the game when played by humans reaches outcomes much better than the Nash equilibrium. The Dollar Auction is an example where the outcomes are much worse when played by humans.
It seems humans in the Traveler's Dilemma and the Dollar Auction fail to reach the Nash equilibrium for an entirely different reason than in, say, the Prisoner's Dilemma or the Ultimatum Game (in which altruism/fairness are the main factors). In both cases, understanding the game requires iterative thinking, and there's a sort of "almost-equilibrium" different from the Nash equilibrium when this thinking isn't fully applied.
comment by beoShaffer · 2012-07-24T05:36:34.672Z · LW(p) · GW(p)
Nice scenarios, but it could use some minor editing.
Bob wins the auction and I pay $18 cents
I don't think the cents is supposed to be there.
comment by Xachariah · 2012-07-25T06:49:38.789Z · LW(p) · GW(p)
At that point you're starting to wonder why no one has tried to build a corporation around this, and unsurprisingly, the online auction site Swoopo appears to be exactly that. More surprisingly, they seem to have gone bankrupt last year, suggesting that maybe H.L. Mencken was wrong and someone has gone broke underestimating people's intelligence.
Just as you posted this, a commercial came on in the other room for Quibids. Which is like this, except that instead of being a pay-all auction, submitting a bid costs 60 cents while the 'price' goes up by 5 or 10 cents. Eg, a laptop that 'gets won' for $75.00 will have made $975 for Quibids ($900 for 1,500 bids, and $75 for the final price).
Basically, somebody saw Swoopo and figured they went bankrupt because they weren't underhanded enough.
Edit: Actually on further looking it appears Swoopo did the same thing with paying for bids. So, the only news is that Swoopo had a horcrux and Quibids is its reincarnated form.
comment by ViEtArmis · 2012-07-24T15:51:30.263Z · LW(p) · GW(p)
Your lackey proposes as follows: “I move that we vote upon the following: that if this motion passes unanimously, all members of the Board resign immediately and are given a reasonable compensation; that if this motion passes 4-1, the Director who voted against it must retire without compensation, and the four directors who voted in favor may stay on the Board; and that if the motion passes 3-2, then the two 'no' voters get no compensation and the three 'yes' voters may remain on the board and will also get a spectacular prize - to wit, our company's 51% share in your company divided up evenly among them.”
Considering the reasoning that ends in "everyone is kicked off the board," wouldn't they all talk about it for a few minutes and then reject the proposal 4-1 (or maybe 3-2)?
Replies from: Yvain, Xachariah, complexmeme↑ comment by Scott Alexander (Yvain) · 2012-07-24T19:13:33.399Z · LW(p) · GW(p)
This seems much like the Prisoners' Dilemma. Yes, you can avoid it easily if you can talk beforehand and trust everyone to go through with their precommitments. If you can't talk or you don't trust what they say, then it's much harder to avoid. After all, if the first two directors cooperated with the plan by voting no, then the second two directors would have a very high incentive to defect and vote yes.
In practice people are usually able to solve these for much the same reasons they can usually solve prisoners' dilemmas - things like altruism and reputational penalties.
Replies from: cypher197, ViEtArmis↑ comment by cypher197 · 2012-07-29T17:22:09.513Z · LW(p) · GW(p)
What occurred to me when I read it is "Why is this guy allowed to propose a motion which changes its actions based on how many people voted in favor of, or against, it?" While it's likely the company's bylaws don't specifically prohibit it, I'm not sure what a lawyer would make of it, and even if it worked, I don't think this sort of meta-motion would remain viable for long. I suspect the other members of the board would either sign a contract with each other (gaining their own certainty of precommitment) or refuse to acknowledge it on the grounds that it isn't serious.
Replies from: Artikan↑ comment by ViEtArmis · 2012-07-24T19:20:09.295Z · LW(p) · GW(p)
Even without a precommitment etc., there isn't a direct incentive to be the first or second "yes" vote, only the third. If you had two shills on the board, it's a much stronger scenario.
Replies from: Yvain↑ comment by Scott Alexander (Yvain) · 2012-07-24T19:33:51.140Z · LW(p) · GW(p)
But since there's such a strong incentive to be the third, if you are the second-most-senior-director and know that all the directors are strawmen-rational-actors, you can be pretty confident that if you vote yes, the most-senior-director will also vote yes.
Replies from: ViEtArmis↑ comment by ViEtArmis · 2012-07-24T20:10:08.787Z · LW(p) · GW(p)
Of course, it all gets into careful opponent analysis then, which makes the whole exercise quite fuzzy and into "well, Tom really hates the new guy, so he'll probably vote no because he's ornery" territory. All the directors are basing their decisions on the decisions of each other, since there is no reward for acting alone. Again, a second confederate in the beginning makes all the difference.
↑ comment by Xachariah · 2012-07-25T07:25:36.024Z · LW(p) · GW(p)
It seems to me the best option is to pass the proposal 3-2. Take 66% of the company's 51% share to regain board appointments, re-appoint the two who had to resign, and then kick off the lackey and get somebody else. They're buying back 34% of their shares for free.
↑ comment by complexmeme · 2012-07-24T18:56:48.293Z · LW(p) · GW(p)
Agreed. Pretty sure even if the other board members didn't see the exact nature of the trap, they'd still find it obvious that it is a trap, especially considering the source.
comment by MarkusRamikin · 2012-07-24T08:18:28.836Z · LW(p) · GW(p)
the auction gains even more money from people who have seen it before than it does from naive bidders
How on Earth?
Replies from: Ezekiel, mfb↑ comment by Ezekiel · 2012-07-24T15:31:32.450Z · LW(p) · GW(p)
Read as:
the auction gains even more money from people who have seen it before [and are nevertheless willing to play again] than it does from naive bidders
Replies from: MarkusRamikin
↑ comment by MarkusRamikin · 2012-07-24T16:18:19.261Z · LW(p) · GW(p)
Right, of course. Selection effect.
I think what confused me was that I took that to mean the total amount of money earned, not per-person.
Replies from: nshepperd↑ comment by nshepperd · 2012-07-25T15:22:51.731Z · LW(p) · GW(p)
If I'm reading the link (thanks VincentYu!) correctly, your first impression was right. In the graduate class, total amounts earned were [1.95, 1.90, 2.15, 2.50] in that order for consecutive auctions. In the undergraduate class, [2.30, 2.05, 4.25, 3.50].
The number of people playing (bidding) did decrease overall (graduate: [5, 3, 4, 2], undergraduate: [6, 4, 3, 2]), but a selection effect is insufficient to explain the increase in total earnings, since there's no reason these "selected" people could not have bid just as much in the first round.
ETA: Note that this is a two-highest-bidders-pay auction, not all-pay, so the increase in total earnings does reflect an average increase in individual bids as well.
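For a rough per-person view of those figures, here is a minimal sketch using only the numbers quoted above (earnings per bidder is a crude proxy, since only the two highest bidders actually pay):

```python
# Per-bidder auction earnings, computed from the totals and bidder counts quoted above.
graduate      = [(1.95, 5), (1.90, 3), (2.15, 4), (2.50, 2)]
undergraduate = [(2.30, 6), (2.05, 4), (4.25, 3), (3.50, 2)]

for name, rounds in (("graduate", graduate), ("undergraduate", undergraduate)):
    per_bidder = [round(total / bidders, 2) for total, bidders in rounds]
    print(name, per_bidder)
# graduate      [0.39, 0.63, 0.54, 1.25]
# undergraduate [0.38, 0.51, 1.42, 1.75]
```

On this crude measure the per-person figure rises across consecutive auctions in both classes, consistent with the reading above.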
↑ comment by mfb · 2012-07-24T12:55:05.655Z · LW(p) · GW(p)
Unfortunately, the link seems broken. I would really like to see this study.
Great examples of game theory.
Considering the evil plutocrat:
What happens if both parties vote "nearly" 50% yes? The bill would fail, and who gets the money depends on rounding issues. In addition, the best solution for both would be cooperation here: reject the bill and share the money in some way.
If a party expects the other party to vote 100% yes, its best option would be just a few "yes" votes and several "no" votes: the bill still passes, but the party gets a better reputation. Therefore there is no stable equilibrium.
Edit: Why does the site steal single line breaks?
Replies from: thomblake, fubarobfusco, VincentYu↑ comment by fubarobfusco · 2012-07-24T17:35:39.128Z · LW(p) · GW(p)
What happens if both parties vote "nearly" 50% yes? The bill would fail, and who gets the money depends on rounding issues. In addition, the best solution for both would be cooperation here: reject the bill and share the money in some way.
Indeed, the problem seems to assume that political parties are not the sorts of things that can learn to cooperate with each other against a common foe.
↑ comment by VincentYu · 2012-07-24T17:10:42.460Z · LW(p) · GW(p)
Unfortunately, the link seems broken. I would really like to see this study.
Edit: Why does the site steal single line breaks?
I can't answer why, but you can prevent that by putting two extra spaces at the end of the line before a single line break (more details).
comment by DanielLC · 2012-07-24T04:07:18.243Z · LW(p) · GW(p)
I don't understand the dollar auction.
Alice and Bob both have the strategy "bid until you run out of money". Alice has $50, and the auction is currently at $20. If Alice and Bob continue their strategies, Alice has a 50% chance of losing the auction and losing $50, and a 50% chance of winning the auction and losing a smaller amount of money. Her expected loss is more than $25. If she changes her strategy to "never bid", she'll instead lose $20, which clearly isn't as bad. As such, this strategy is unstable, and not a Nash equilibrium.
The only Nash equilibria I can find in it are: Alice always bids and Bob never bids; Bob always bids and Alice never bids; and a mixed strategy in which each may or may not bid in any given round.
Replies from: Vaniver, drethelin↑ comment by drethelin · 2012-07-24T06:37:00.434Z · LW(p) · GW(p)
You don't know when the other person will give up and it's better to be the last person to give up.
Replies from: handoflixue, DanielLC↑ comment by handoflixue · 2012-07-24T20:17:48.339Z · LW(p) · GW(p)
No, it's best never to get into it in the first place - once you bid even $1, you can come out worse than you started (losing the bid), but if you never bid, you just break even.
↑ comment by DanielLC · 2012-07-24T20:46:28.814Z · LW(p) · GW(p)
If never giving up really is the best strategy, then you can assume they'll use it. If they're using it, it's better to give up now. If you assume they use a mixed strategy in which they have a chance of giving up, it really doesn't matter what you do, as is often the case with Nash equilibria, but you have to use the same mixed strategy or they'd change theirs.
comment by handoflixue · 2012-07-24T20:23:52.403Z · LW(p) · GW(p)
These seem to rely on the "protagonist" offering a package deal, and no one else countering with a similar technique. The lesson here thus seems to be that those who dictate the rules of the game are generally the ones who win it. I can't say I'm finding that terribly surprising...
(That said, I did very much enjoy each of the examples from a literary point of view :))
comment by [deleted] · 2012-07-24T22:01:22.027Z · LW(p) · GW(p)
Since you, as the captain, obviously vote yes as well, the distribution passes 3-2. You end up with $16,980, and your crew, who were so certain of their ability to threaten you into sharing the treasure, each end up with either a single $20 or nothing.
Shouldn't that be $16,960?
comment by Spurlock · 2012-07-24T15:42:01.566Z · LW(p) · GW(p)
Is it rational (even straw-man rational) to enter the dollar auction after one person has already entered it? It should be obvious that you'll both happily keep bidding at least up to $20, that you have at best a 50% chance of getting the $20, and that even if you do get it, you will almost certainly make only a negligible amount of money, assuming the bidding even stays under $20. So after one person has already bid, it seems like the action "enter this auction" has a clearly negative expected utility.
Replies from: Yvain, Tasky↑ comment by Scott Alexander (Yvain) · 2012-07-24T19:29:54.999Z · LW(p) · GW(p)
If you can go through that chain of reasoning, so can the other person - therefore, it doesn't seem entirely ridiculous to me to bid $2 to the other person's $1 in the hope that they won't want to enter a bidding war and you'll win $18.
Let's say there's a probability X that the other person will surrender and let you have the $20 for $2 rather than enter the bidding war, and let's also say you don't intend to ever make a bid after your first bid of $2. Then expected value is (X)(20) - (1-X)(2) = 20X - (2 - 2X) = 22X - 2. If X is greater than 1/11, or about 9%, then it's profitable to enter the auction. So unless you're more than 91% sure that the other person will start a bidding war instead of sacrificing their $1 and letting you have the money, it's positive expected value to enter the auction.
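For anyone who wants to check the arithmetic, here is a minimal sketch using the comment's own payoff convention (a win counted as the full $20, a loss as the forfeited $2 bid); the names are only illustrative:

```python
def expected_value(x: float) -> float:
    """x = probability that the first bidder surrenders rather than rebidding."""
    return x * 20 - (1 - x) * 2  # = 22x - 2, the formula in the comment above

break_even = 2 / 22  # x at which the expected value is exactly zero
print(round(break_even, 4))                  # 0.0909, the "about 9%" figure
print(round(expected_value(break_even), 6))  # 0.0 (up to floating-point error)
print(expected_value(0.2) > 0)               # True: positive EV given a 20% surrender chance
```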
Replies from: VincentYu, Cyan, Psychosmurf, Spurlock↑ comment by VincentYu · 2012-07-24T20:44:05.762Z · LW(p) · GW(p)
Then expected utility is (X)(20) - (1-X)(2) = 20X - (2 - 2X) = 22X - 2.
it's positive expected utility to enter the auction.
Nitpick: Expected value, not utility.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2012-07-28T17:43:48.415Z · LW(p) · GW(p)
It is standard to call an expected value an "expected utility" when the values in question are utilities.
Replies from: Vaniver↑ comment by Vaniver · 2012-07-28T17:46:50.489Z · LW(p) · GW(p)
Correct but irrelevant, as Yvain was discussing dollars.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2012-07-28T23:25:44.808Z · LW(p) · GW(p)
You're right; I was identifying the values with utilities for the purposes of the scenario, which I only now see was precisely what VincentYu was criticizing.
↑ comment by Cyan · 2012-07-29T22:30:58.721Z · LW(p) · GW(p)
What happens if the first bidder bids $19 (or $19.99, or in general, the amount being auctioned minus the smallest permissible increment)? Any potential second bidder can't make any money. (...Without colluding with the auctioneer -- is that allowed?)
↑ comment by Psychosmurf · 2014-03-11T06:10:28.622Z · LW(p) · GW(p)
But the other person could anticipate this reasoning and then simply bid $3 knowing that his opponent has committed himself to not bidding beyond $2.
↑ comment by Spurlock · 2012-07-24T20:33:04.724Z · LW(p) · GW(p)
AFAICT, this is an unfortunately strong argument... Thanks.
I see two solutions to the paradox:
1) Note that auctions are usually played by more than 2 bidders. Even if the first bidder would let you have the pot for $2, the odds that you'll be allowed to have it by everyone decrease sharply as the number of participants increases. So in a real auction (say at least 5 participants), 9% probably is overconfident.
2) If we have a small number of bidders, one would have to find statistics about the distribution of winners on these auctions (10% won by first bid, 12% won on second bid, and so on...). Of course, this strategy only works if your opponents don't know (and won't catch on) that you never bid more than once. But it should work at least for a one-shot auction where you don't publish your strategy in advance.
Out of curiosity, since you argue that joining these auctions as player #2 could very well have positive EU, would you endorse the statement "it is rational to join dollar auctions as the second bidder"? If not, why not?
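Regarding point 1 above, here is a minimal sketch of how fast the required odds grow with more bidders. It assumes, purely for illustration, that you only profit when every one of n other bidders independently declines to outbid you, each with probability p, and it reuses the 1/11 break-even figure from upthread:

```python
# Per-opponent "back-off" probability p needed so that p**n, the chance that
# nobody at all outbids you, stays above the single-opponent break-even threshold.

def required_backoff_probability(n_opponents: int, threshold: float = 1 / 11) -> float:
    return threshold ** (1 / n_opponents)

for n in (1, 2, 4, 9):
    print(n, round(required_backoff_probability(n), 3))
# 1 0.091   2 0.302   4 0.549   9 0.766
```

With four opponents, each would already need a better-than-even chance of backing off, which supports the point that 9% is overconfident once a realistic number of bidders is involved.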
Replies from: Bundle_Gerbe↑ comment by Bundle_Gerbe · 2012-07-24T21:59:21.035Z · LW(p) · GW(p)
Against typical human opponents it is not rational to join dollar auctions either as the second player or as the first, because of the known typical behavior of humans in this game.
The equilibrium strategy, however, is a mixed strategy in which you pick the maximum bid you are willing to make at random from a certain distribution that puts different weights on different maximum bids. If you use the right formula, your opponents won't have any better choice than mirroring you, and you will all have an expected payout of zero.
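The comment doesn't give the distribution, and the sketch below doesn't claim to know it either. It only illustrates the best-response check one would run on a candidate mixed strategy, under a simplified all-pay model (each player commits to a maximum bid, both forfeit their own bid, and the higher bid takes the $20 - not the exact escalating-bid rules), with the opponent's maximum bid drawn uniformly from [0, 20] as an illustrative candidate:

```python
import random

PRIZE = 20.0
TRIALS = 200_000

def payoff_of_pure_bid(b: float) -> float:
    """Monte Carlo estimate of the payoff of committing to maximum bid b
    against an opponent who draws a maximum bid uniformly from [0, PRIZE]."""
    total = 0.0
    for _ in range(TRIALS):
        opp = random.uniform(0, PRIZE)    # opponent's candidate mixed strategy
        win = PRIZE if b > opp else 0.0   # higher committed bid takes the prize
        total += win - b                  # all-pay: you forfeit your own bid regardless
    return total / TRIALS

for b in (0.0, 5.0, 10.0, 15.0, 20.0):
    print(f"max bid ${b:>4.1f}: expected payoff ≈ {payoff_of_pure_bid(b):+.3f}")
```

Under this simplification, every pure maximum bid comes out near zero against the uniform draw, which is the "no deviation beats mirroring, and everyone expects zero" property described above; the real dollar auction, where the winner pays roughly one increment more than the loser, would need a more careful model.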
↑ comment by Tasky · 2012-07-28T17:39:46.321Z · LW(p) · GW(p)
If another bidder has bid $1, you can enter the auction with $2 and promise the other bidder $2 if you win the auction.
Replies from: evand
comment by Emiya (andrea-mulazzani) · 2020-10-07T14:19:13.901Z · LW(p) · GW(p)
A professor did something similar to the twenty-dollar auction in one of my classes (environmental psychology). I'm highly confused about my memory of it, because the outcome still seems unrealistically stupid to me, but however hard I try I can't find anything out of place in my recollection.
If I remember correctly the offer was 500€, from her own pocket (I'm sure the money was around this sum because I remember going "WHAT?!"; it was a single banknote, and I would have been a lot less impressed by 100€ or 50€). Students had to write on a piece of paper how much money they would accept, and that money would be divided equally among all the lowest offers.
She flat out explained before the vote that if we all wrote 500€ we'd each gain roughly 16.55€, and didn't try to ramp up suspicion about other students not cooperating.
I wrote 500€, more as a way to vote and make a statement in favor of smart cooperation over dumb competition, since I understood right away the point she wanted to make.
I was feeling pretty sure that someone would walk away with something close to 20-30€, but I was too averse to the intuitive stupidity of bidding low enough to have a real chance at some profit, at the cost of throwing 480€ or more of collective profit out of the window. (I was sure the game would go the professor's way not because I had a good model of how these things usually turn out, but because I was sure she was a smart professor who didn't want to lose 500€.)
I thought that to be safe to win I'd have to offer around 10-13€, since people wouldn't bid much lower than the individual profit they'd gain from cooperating (not a good model of how people think), and that sum wasn't worth choosing dumb competition over smart cooperation.
To this day I still can't model accurately what went through the head of the student who wrote something around 20 cents.
I sincerely hope it was something more like "whatever" than decision theory. I think two other students went 2€ or lower.
If I remember correctly, the professor claimed her annual loss from the game (one game every year) averaged 7€; I remember being struck by how badly we did even in comparison to other classes.
That was one hell of a way to hammer in a point about how hard it is to cooperate to protect collective utilities (common resources, in that case).
comment by Crazy philosopher (commissar Yarrick) · 2024-08-21T13:23:05.731Z · LW(p) · GW(p)
That's why I was so impressed to see cousin_it propose what I think is an even better solution on the Less Wrong thread on the matter:
Or you can write a cheque to your opponent for half of the winning amount, in exchange for which he agrees to cooperate while you defect. It then won't make sense for him to defect.
comment by DavidTC · 2012-08-06T22:47:11.534Z · LW(p) · GW(p)
The Hostile Takeover, Part II seems to fall apart with the realization that boards do not vote in order like that, but all at once, and may change their vote during the voting. You can postulate such a Board, but it's fairly unlikely such a thing would exist.
And it also falls apart on the fact that parliamentary law would not allow such a maneuver even if voting worked like that. It is simply not permissible to pass resolutions that hurt individuals who vote against them.
What would actually happen:
1) The lackey says: I move that we vote upon the following: that if this motion passes unanimously, all members of the Board resign immediately and are given reasonable compensation; that if this motion passes 4-1, the Director who voted against it must retire without compensation, and the four directors who voted in favor may stay on the Board; and that if the motion passes 3-2, then the two 'no' voters get no compensation and the three 'yes' voters may remain on the Board and will also get a spectacular prize - to wit, our company's 51% share in your company, divided up evenly among them.
2) The rest of the Board look at each other, and one of them says: I move we postpone that indefinitely because, frankly, that's complete nonsense, and possibly not even legal.
3) Someone seconds that.
4) The Board votes that the original motion should, indeed, be postponed indefinitely.
(A funnier variant involves amending the motion so that the lackey is removed instead, and everyone but him is given reasonable compensation whenever they resign.)
Boards of Directors, under pretty much every legal system, operate under Robert's Rules of Order by default, and under Robert's Rules of Order there are certain things even a majority can't do, like punish members for voting specific ways.
Exceptions are indeed allowed if the bylaws say so, but this hypothetical company apparently has bylaws that say 'A majority of the board of directors can literally do anything they want, even things expressly required under Robert's and state law to be in the bylaws.' That's not likely.
And then, after allowing anything in the bylaws, the Board itself previously decided to override the reasonable default rules of order to create an absurd order-based voting system, to require the Board to cast votes on crazy things the majority of the Board does not want to vote on, and to allow members to be punished or rewarded for their votes. (And apparently it has no way to amend motions.)
And at that point the major question is: what would stop the Board from simply denying new appointees any voting rights, and further declaring that all authority of the Board is now permanently held by a group composed of the four existing board members and someone else they've selected? In the hypothetical world you've set up, the Board can just appoint and operate a shadow Board, well in advance of anything the lackey can do. This is completely crazy, of course, but apparently the bylaws allow the Board to do anything.
Robert's Rules of Order are not Nomic, and weren't written by stupid people.
Replies from: MugaSofer↑ comment by MugaSofer · 2013-01-09T11:46:45.790Z · LW(p) · GW(p)
Don't fight the counterfactual. This clearly takes place in an alternate universe with different norms. The board didn't use RRoO, they used a system of their own devising that involves voting in order, on the basis that it would help avoid hostile takeovers. We don't know what the usual rules are, and frankly they don't matter.
Replies from: Artikan
comment by A1987dM (army1987) · 2012-07-25T15:22:29.626Z · LW(p) · GW(p)
I briefly thought "WTF?" at the end of the second sentence of "The Evil Plutocrat", because in my country (at least according to the stereotype) most members of parliament are evil plutocrats (or friends of evil plutocrats) themselves, so they would be eager to pass such a bill with no need to be bribed. (I won't go into whether I think the stereotype is accurate or not, because in such a hypothetical it's the stereotype, rather than a realistic situation, that springs to mind anyway.) I had to read the third sentence twice before getting the point.
Replies from: DaFranker↑ comment by DaFranker · 2012-07-25T15:38:36.372Z · LW(p) · GW(p)
I had the exact same reflex-thought. It helps to visualize these scenarios or hypothetical problems as virtual scenes in a computer game, where each player gets points for acting according to whatever criteria fit the problem's stated agent-goals.
Since the player is just controlling a virtual avatar and merely wants the points, the behaviors postulated often suddenly start making sense, since the real player doesn't really care about other concerns that would normally be very important in "real life".
This is how I initially approached the Prisoner's Dilemma. You're sitting at a computer, and you're playing the Prisoner's Dilemma to get points to virtually purchase a shinier sword in a videogame. You can't play more than one round per hour. You do kinda really want that sword, though. "There", I thought, "this makes a lot more sense now."
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-07-26T10:14:08.495Z · LW(p) · GW(p)
Not sure it would work for me. My sense of empathy is way too strong: I mean, if I look at this animation and follow one of the balls, I kind-of instinctively feel sorry for it after particularly hard or close-together collisions!
More seriously, the one time I actually entered the mind-set of caring about nothing but winning a zero-sum game (chess) against my opponent (whom I had never met before and didn't expect ever to meet again), I ended up using all kinds of dark arts to make him play worse (e.g. verbally humiliating him after each blunder of his -- I had the impression he was about to break down and cry); I did win the game, but I felt awful -- more or less the way EY described here.
Replies from: DaFranker↑ comment by DaFranker · 2012-07-26T13:53:43.186Z · LW(p) · GW(p)
[...]I ended up using all kinds of dark arts to make him play worse (e.g. verbally humiliating him after each blunder of his -- I had the impression he was about to break down and cry)
This is also one of the things people actually do in those videogames, unfortunately. If someone really wants that sword, they won't shy away from sending threatening /whispers and using all kinds of verbal abuse/trickery. In situations like the prisoner's dilemma, I suspect a favorite of many a younger gamer (think young teen) would be "You're too chicken to Cooperate! Go on, be the sissy and Defect!".
comment by Pentashagon · 2012-07-24T20:33:24.826Z · LW(p) · GW(p)
Assuming that the Golden Balls are indistinguishable and the game host doesn't interfere, there is actually a way to pre-commit to splitting. Hand your own "split" ball to your opponent and state "I want you to choose this ball instead of either of your own. If you do so, I promise to choose whichever of your own balls you hand me." This is quite similar to Nick's ploy of promising to steal, except that by giving away the "split" ball it becomes impossible to un-commit to stealing unless your opponent gives you his or her own "split" ball. Short of sleight of hand or a last-minute switch (which could be avoided by setting both "steal" balls out of reach after the exchange and before the final decision), your opponent has no rational choice but to cooperate and split.
Replies from: wedrifid↑ comment by wedrifid · 2012-07-25T03:03:27.107Z · LW(p) · GW(p)
Hand your own "split" ball to your opponent and state "I want you to choose this ball instead of either of your own.
It seems unlikely that the host would allow you to choose the other person's ball. You are just trying to deny the decision problem, and cheating doesn't work when someone else with power is choosing the rules and overseeing the process.
Replies from: Pentashagon↑ comment by Pentashagon · 2012-07-25T16:45:06.578Z · LW(p) · GW(p)
There can be additional strategies if the rules require using physical artifacts to signal a decision. Another method would be for a player to visually reveal the choices inside each of the balls to the other player and offer to set the "steal" ball out of reach before decision time if the other player did the same, but to steal if the other player didn't cooperate (perhaps setting the "split" ball out of reach as evidence). If the balls can't be moved or revealed to the other player before the final decision, then you are correct. However, from watching a couple of videos of Golden Balls players, it appears that the host allows players to pick up and handle their own balls before the final decision.
comment by fubarobfusco · 2012-07-24T07:36:35.279Z · LW(p) · GW(p)
It's very worthwhile for an agent to develop an ability to recognize these sorts of games and learn to either ① avoid them entirely, or ② in the case of progressive ones like the dollar auction, bail out early and cut losses.
comment by shaih · 2013-02-18T02:53:12.918Z · LW(p) · GW(p)
It seems that the prisoner's dilemma mentioned here differs from the typical prisoner's dilemma (from my perspective, at least) in that the reward for both defecting is equal to, rather than greater than, the reward for the one who cooperates in the defect/cooperate case. The result is that whenever one player (P1) is known to defect, the other (P2) no longer stands to gain anything. Unless the game is repeated (in which case punishments make sense), P2 has no game-theoretic incentive to pick one move over the other, apart from prior deals such as the ultimatum. The only difference between P2's two moves is how much money P1 walks away with: if P2 cooperates, P1 gets money; if P2 defects, P1 gets nothing. From here it would seem that even though P1 did something to P2's disadvantage, P2 gains nothing from harming P1 by defecting, so it seems to me a moral argument could easily be made that P2 must cooperate. This doesn't work for the traditional prisoner's dilemma, because once P1 defects, P2 stands to gain more from defecting than from cooperating.
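To make the difference concrete, here is a rough payoff comparison (row player's payoff listed first; the £100 jackpot and the classic T > R > P > S numbers are only illustrative):

```python
golden_balls = {
    ("split", "split"): (50, 50),
    ("split", "steal"): (0, 100),
    ("steal", "split"): (100, 0),
    ("steal", "steal"): (0, 0),     # P equals S: against a known stealer you get 0 either way
}

classic_pd = {
    ("coop",   "coop"):   (3, 3),   # R
    ("coop",   "defect"): (0, 5),   # S, T
    ("defect", "coop"):   (5, 0),   # T, S
    ("defect", "defect"): (1, 1),   # P strictly greater than S
}

# Against a known defector, the Golden Balls player is indifferent (0 vs 0),
# while the classic PD player still strictly prefers to defect (1 > 0).
```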
comment by MugaSofer · 2013-01-09T11:43:18.860Z · LW(p) · GW(p)
At that point you're starting to wonder why no one has tried to build a corporation around this, and unsurprisingly, the online auction site Swoopo appears to be exactly that. More surprisingly, they seem to have gone bankrupt last year, suggesting that maybe H.L. Mencken was wrong and someone has gone broke underestimating people's intelligence.
As soon as I started reading this example, I was reminded of MadBid.Com, the evilness of which I spent some time convincing several friends and relations of. They have not, as far as I can tell, gone out of business; but surely game theory does not mean you should actually participate in these auctions? After all, if you don't have any better odds of winning than anyone else ... I'm genuinely unsure what game theory suggests, but I don't think it's "pay $200 and lose anyway."
comment by Dr_Manhattan · 2012-07-24T16:03:39.861Z · LW(p) · GW(p)
Similar to (but perhaps simpler than) the pirates' dilemma, the voting paradox (http://en.wikipedia.org/wiki/Voting_paradox) could be put to "good" use by controlling the voting order.
For some reason I actually recall reading someone clever applying this on a corporate level, but cannot remember any more. Hopefully it will come back to me.