Acausal Trade and the Ultimatum Game

post by Yair Halberstadt (yair-halberstadt) · 2021-09-05T05:36:28.171Z · LW · GW · 8 comments

In the ultimatum game you have two participants. Let's call them Adam and Becky. You offer Adam 100 dollars and tell him he has to split it with Becky. Adam makes an offer to Becky, and Becky can either accept or reject the offer. If she rejects the offer, they both get nothing; otherwise they split the cash in accordance with Adam's offer.

For example, Adam might offer Becky $20. If Becky rejects that, they both end up with 0. If Becky accepts it, Becky gets $20, and Adam gets $80.

What is the best strategy for both Becky and Adam?

Under causal decision theory the answer is simple. Whatever amount Adam offers Becky, Becky should accept, as the alternative is getting nothing. Therefore Adam should offer Becky a single cent, which Becky will accept, leaving Adam with $99.99, and Becky with 1 cent.
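
To make that concrete, here's a minimal sketch of the causal-decision-theory calculation, assuming the $100 is split in whole cents (the code is just an illustration, not part of the game's standard specification):

  TOTAL_CENTS = 10_000  # $100 in cents

  def becky_accepts(offer_cents: int) -> bool:
      # Under causal decision theory, any positive amount beats the $0
      # Becky gets by rejecting.
      return offer_cents > 0

  def adams_best_offer() -> int:
      # Adam keeps TOTAL_CENTS - offer whenever Becky accepts, so he just
      # scans for the offer that maximizes his own payoff.
      best_offer, best_payoff = 0, 0
      for offer in range(TOTAL_CENTS + 1):
          payoff = TOTAL_CENTS - offer if becky_accepts(offer) else 0
          if payoff > best_payoff:
              best_offer, best_payoff = offer, payoff
      return best_offer

  print(adams_best_offer())  # 1: a single cent, leaving Adam with $99.99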

Surprise surprise: when you offer real humans this game, they don't accept the single cent, and usually reject any offer below about 30% of the total. In most cases Adam will offer Becky a 50/50 split, which she will accept.

Wikipedia has a large section dedicated to explaining why humans here don't act the way causal decision theory would predict. Perhaps humans are just irrational, or maybe they're being rational once you take other factors into account, like maintaining their image as a fair person.

For some reason nobody points out that what the humans are doing is absolutely 100% the correct thing to do! Our theoretical Homo Economicus who accepts 1 cent is clearly beaten by real-life Becky, who refuses to settle for less than 50/50 and so gets $50 every time! If your strategy loses, that's a problem with your decision theory, not the world [LW · GW]!

That's the main point of this post - I was shocked to see that nobody pointed that out in the article, so I felt the need to shout it from the rooftops here.

The rest of this post will be dedicated to explaining how a rational agent might arrive at a 50/50 split.

Under causal decision theory, even though Becky should accept 1 cent when it's offered, if she can precommit up front to only accepting $99.99, she should. For example, she could promise to pay a friend $10,000 if she ever accepts an offer of less than $99.99. Adam, knowing that Becky will be forced to reject any offer under $99.99, will be forced to make such an offer, so that at least he gains 1 cent.
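
A rough sketch of why such a side bet works, assuming the $10,000 is forfeited whenever Becky accepts less than $99.99 (the friend and the exact amounts are just the example from above; amounts are in cents to keep the arithmetic exact):

  PENALTY_CENTS = 1_000_000   # $10,000 paid to the friend if Becky breaks her precommitment
  THRESHOLD_CENTS = 9_999     # $99.99, the minimum she has committed to accepting

  def becky_payoff_cents(offer_cents: int, accept: bool) -> int:
      if not accept:
          return 0
      # Accepting anything below the threshold triggers the payment to the friend.
      return offer_cents - (PENALTY_CENTS if offer_cents < THRESHOLD_CENTS else 0)

  print(becky_payoff_cents(1, accept=True))      # -999999: accepting 1 cent now costs her nearly $10,000
  print(becky_payoff_cents(1, accept=False))     # 0: rejecting is now the better causal choice
  print(becky_payoff_cents(9_999, accept=True))  # 9999: accepting $99.99 is still fine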

Of course Adam could counter by precommitting, before Becky does, to never offer more than 1 cent (keeping at least $99.99 for himself), so it becomes a race to see who can precommit first.

However, the game is usually played in situations where making an unambiguous, binding precommitment that causal decision theory would actually honor isn't really possible. For example, under causal decision theory, a precommitment to throw all your money away if you accepted 1 cent wouldn't actually work: once you had accepted the 1 cent, you would have no incentive to actually throw your money away. You need some kind of external entity to force you to carry out your precommitment.

Of course that's because causal decision theory is plain wrong. There are various better decision theories, for example updateless decision theory [? · GW]. One of the main outcomes of such decision theories is that you should act in every situation as if you had made all the best precommitments in all possible situations from the day you were born, and you should always carry out those precommitments.

So up front, right now what precommitments should you make for both Adam's situation and Becky's?

At first it might seem tempting to precommit to offering 1 cent as Adam, or to rejecting any offer less than $99.99 as Becky. The problem is that your opponent is likely to have made pretty much the same precommitments as you, so you'll both just come away with nothing.

If, however, you precommit to offering $50 as Adam and to rejecting any offer less than $50 as Becky, then if your opponent made the same precommitments, you would make a decent offer, get it accepted, and thus maximize your payoff in both roles.
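
Here's a minimal sketch of those two precommitment profiles playing against copies of themselves. Each strategy is written as (the offer you make as Adam, the minimum you accept as Becky), in dollars; the numbers are the ones from the text:

  TOTAL = 100.0

  def play(adam_strategy, becky_strategy):
      """Return (Adam's payoff, Becky's payoff) for one round."""
      offer, _ = adam_strategy
      _, becky_minimum = becky_strategy
      if offer >= becky_minimum:
          return TOTAL - offer, offer
      return 0.0, 0.0  # offer rejected: both walk away with nothing

  greedy = (0.01, 99.99)  # offer 1 cent as Adam, reject below $99.99 as Becky
  fair = (50.0, 50.0)     # offer $50 as Adam, reject below $50 as Becky

  print(play(greedy, greedy))  # (0.0, 0.0)
  print(play(fair, fair))      # (50.0, 50.0): the acausal deal goes through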

And there we have it - acausal trade. Without Adam or Becky needing to communicate with each other, they were able to strike a fair deal. Both of them have agreed not to make outrageous precommitments, and in return for Adam making a fair offer now, Becky has precommitted to making a fair offer if she were in his place.

So I suppose the real question the Wikipedia article should ask is, "how come humans aren't rational enough to always offer a 50/50 split?"

8 comments


comment by Measure · 2021-09-05T12:06:29.290Z · LW(p) · GW(p)

Against a real partner — not a copy of yourself — it's better to leave some wiggle room rather than simply rejecting offers less than 50% in case your partner has a slightly different notion of fairness from yours. For example, you could reject increasingly lower offers with increasing probability such that their expected utility is maximized at 50%.

Replies from: JBlack, yair-halberstadt
comment by JBlack · 2021-09-06T11:35:18.333Z · LW(p) · GW(p)

If the first player knows the second player's distribution, then their optimum strategy is always a single point, the one where 

  (1-offer) * P(offer accepted)

is maximized. You can do this by setting P(50%) = 1 and P(x) < 1 / (2(1-x)) for all x < 50%. Choosing a distribution only just under these limits maximizes player 2's payoff against irrational player 1s, while providing an incentive for smarter player 1s to always choose 50%.
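
A quick numerical check of that schedule (a sketch assuming offers are expressed as fractions of the pot, with eps a small slack so the inequality stays strict below 50%):

  eps = 1e-3

  def p_accept(x: float) -> float:
      # Acceptance probability for an offer of x (fraction of the pot).
      if x >= 0.5:
          return 1.0
      return min(1.0, 1.0 / (2.0 * (1.0 - x)) - eps)

  offers = [i / 1000 for i in range(1001)]
  best = max(offers, key=lambda x: (1.0 - x) * p_accept(x))
  print(best)  # 0.5: player 1's expected payoff peaks at the fair split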

In general, it never makes sense for the acceptance probability to decrease for larger amounts offered, and so the acceptance probability is a cumulative distribution function for a threshold value. Hence any viable strategy is equivalent to drawing a threshold value from some distribution. So in principle, both players are precommitting to a single number in each round, drawn from some distribution.

Nonetheless, the game does not become symmetric. Even when both players enter the game with a precommitted split drawn from a distribution, the first player has the disadvantage that they cannot win more than the amount they commit to, while the second player will receive a larger payout for any proposal above their committed level. So for any distribution other than "always 50%", the first player should propose unfair splits slightly more often than the second player rejects them.

However, in settings where the players are known to choose precommitted splits from a distribution, one player or the other can always do better by moving their cumulative distribution closer to "always 50%". This is the only stable equilibrium. (Edit: I messed up the assumptions in the maths, and this is completely wrong)

As seen above, a population of player 2s with a known precommitment strategy can induce player 1 to offer 50% all the time. But this still isn't a stable equilibrium! Player 2s can likewise choose a rejection function that incentivizes any offer short of 100%. This can be seen as a slightly skewed version of the Prisoner's Dilemma, where either side choosing a distribution that incentivizes a greater-than-50% payoff to themselves is defecting, and one that incentivizes 50% is cooperating.

comment by Yair Halberstadt (yair-halberstadt) · 2021-09-05T12:08:08.635Z · LW(p) · GW(p)

That seems reasonable, yes.

comment by Vladimir_Nesov · 2021-09-05T09:08:46.153Z · LW(p) · GW(p)

If Adam is a "5%-Bot" not controlled by something further (always makes the offer of 5% without any thought), Becky would want to accept even if deciding updatelessly, so updatelessness is not sufficient. If it's a functional decision [LW · GW] by a single algorithm controlling both players, what is this algorithm's utility function? If the utility function likes 80/20 in Becky's favor, this is what gets decided. The algorithm itself doesn't need updatelessness and may also be causal (in which case Becky always accepts and Adam chooses an offer that, when accepted, maximizes joint utility).

Most importantly for the basic problem statement, the issue of choosing the joint utility function doesn't [LW · GW] go away [LW · GW] even if you allow players to separately reason about each other [LW · GW].

Replies from: yair-halberstadt
comment by Yair Halberstadt (yair-halberstadt) · 2021-09-05T12:12:01.268Z · LW(p) · GW(p)

I would say:

a) The chances are the other player will have roughly similar commitments to you.

b) 50/50 is a Schelling point that works well for an acausal trade.

So when choosing my precommitments I know that anything too high is likely to be rejected, anything too low is a waste, and 50/50 is likely to be about what most people will choose - as evidenced by the fact that I'm happy to choose it.

comment by Bezzi · 2021-09-10T08:17:52.481Z · LW(p) · GW(p)

For some reason nobody points out that what the humans are doing is absolutely 100% the correct thing to do! Our theoretical Homo Economicus who accepts 1 cent is clearly beaten by real life Becky who refuses to settle for less than 50/50, and so gets $50 every time!

I am still not convinced that intentionally staying out of a subgame-perfect equilibrium is a good choice. What do you mean by "every time"? The standard version of the Ultimatum Game is assumed to be played only once, and you will never encounter the same opponent again. There is no concept of "this player is well-known for rejecting anything less than 50/50".

Suppose you play the Ultimatum Game not in person, but via Twitter accounts with silly nicknames. Every account randomly changes its name after each game, so you cannot distinguish one player from another. And all players play from their own rooms, with absolutely no one else wandering around. You are not going to be judged greedy or dumb or whatever by anyone. Do you still reject 1 cent in this scenario?

Replies from: Vladimir_Nesov, yair-halberstadt
comment by Vladimir_Nesov · 2021-09-10T21:21:20.284Z · LW(p) · GW(p)

This is an acausal tragedy of the commons. Should you overfish? Should you punish those who do, at a cost to yourself?

comment by Yair Halberstadt (yair-halberstadt) · 2021-09-10T14:07:05.309Z · LW(p) · GW(p)

If you lived in a world where people accepted that, you would be offered 1 cent every time. The point of acausal trade and timeless decision theory is that other agents will act differently depending on how you would act in a counterfactual.