Abadarian Trades

post by David Udell · 2022-06-30T16:41:12.232Z · LW · GW · 22 comments

Spoilers for planecrash (Book 2).

"I propose an equal split of the gains from this trade, and will reject lesser splits with a probability corresponding to how disproportionately they reserve the gains for you, such that you can't actually do better by pretending to underrate me, but we'll still work something out with high probability if we honestly disagree, paid in Wishes and spellsilver above and beyond the ordinary payment of permanent Arcane Sight. …and permanent Tongues."

…first of all, mortals aren't supposed to know about any of that, however garbled and incomplete it sounds, and second, if they stumble over a piece of it, you're supposed to shut them down hard and refuse to bargain for their soul and ideally let them get executed by Cheliax.

"That is not the way of Hell," he rumbles.  "Asmodeus is not Abadar, little mortal, no matter what company you have been keeping of late.  You can try to hold what secrets you like, and Hell will keep its own, and whoever is closer to Asmodeus in wit and ways is the one to win the compact.  Name to me the price you seek for yourself."

--planecrash

If I offer you tutoring for $40, you have the option of either turning me down and keeping $40 or gaining tutoring and losing $40. Whichever option is more desirable to you is the one you'll take. Similarly, I have the option of offering tutoring for money or not offering tutoring with my time. I'll do whichever sounds net best to me. Because people only ever willingly switch alternatives when the new alternative is a net improvement, someone taking me up on my tutoring offer means that we both prefer what we bought ($40, tutoring) to what we sold (tutoring, $40). This is one of the best reasons to be enthusiastic about free markets: transactions are mutually beneficial to both traders.

Because they constitute improvements by both traders' lights, both traders want to make every transaction they can. Once no more trades can clear, no more mutual improvements can be made. However much happier each trader is now, by their own lights, is how beneficial free markets were to them.

If tutoring is worth $100 to you in total, then I can offer tutoring for up to $99 and you'd still buy from me. If I'd rather have unpaid leisure time than work for $24, I'll only ever offer tutoring for $25 or more. So there is a whole range of prices, $25 through $99, at which you'd buy from me and we'd both be left better off. Towards one end of that range, I am much happier and you're only a bit happier because of free markets. Towards the other, you're much happier and I'm just a bit happier. Which of those offers is made and accepted determines how much better off each of us ends up.
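To make that range concrete, here is a minimal sketch in Python; the constants and function name just mirror the numbers above and are not from the original post:

```python
BUYER_VALUE = 100        # tutoring is worth $100 to you in total
SELLER_RESERVATION = 25  # the least I'll take rather than keep my leisure

def surpluses(price):
    """Gains from trade at a given price: (your surplus, my surplus)."""
    assert SELLER_RESERVATION <= price < BUYER_VALUE, "outside the mutually agreeable range"
    return BUYER_VALUE - price, price - SELLER_RESERVATION

print(surpluses(30))  # (70, 5): you capture most of the gains
print(surpluses(95))  # (5, 70): I capture most of the gains
```

Every price in the range leaves a combined $75 of gains on the table; the only question is how it gets divided.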

Because every offer in this range is mutually agreeable, and only one actual offer has to be signed, how should the two of us choose a trade? One option, to head off getting into commitment [LW · GW] races [LW · GW] with each other over splits, is to precommit to dividing the value pie according to your notion of fairness. You each accept proffered fair splits of the value pie with probability 1. You each accept unfair splits with diminishing probability as those offers seem more unfair, such that it is always lower EV for the other party to offer a more unfair division. This precommitment also has the advantage of being robust to small differences in notions of fairness, and of degrading gracefully in the face of very different notions of fairness.[1]
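Here is a minimal sketch of one such acceptance rule, assuming the value pie can be summarized by a single number that both parties agree on; the function name and the small epsilon margin are illustrative choices rather than anything specified above:

```python
def accept_probability(my_share, total, my_fair_share, epsilon=0.01):
    """Precommitted probability of accepting a proposed split of `total`.

    Splits at or above my notion of a fair share are always accepted.
    For unfair splits, the acceptance probability is chosen so that the
    proposer's expected take, p * (total - my_share), stays strictly below
    the (total - my_fair_share) they would get by offering the fair split,
    and shrinks further the more lopsided their proposal is.
    """
    if my_share >= my_fair_share:
        return 1.0
    their_share = total - my_share
    their_fair_share = total - my_fair_share
    return max(0.0, their_fair_share / their_share - epsilon)

# A 12-unit pie where I think 6:6 is fair: offered only 5, I accept with
# probability just under 6/7, so the proposer expects just under 6 units,
# strictly worse for them than simply offering the fair split.
print(accept_probability(my_share=5, total=12, my_fair_share=6))
```

With this rule the proposer's expected take is strictly decreasing in how much they grab beyond the fair split, so offering the fair split is their best move.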

  1. ^

    If you ultimately endorse a Schelling notion of fairness -- say that Shapley values are the only obvious formalization of what's fair, meaning that scattered agents could all converge on endorsing the Shapley formalization -- you'll be less likely to have to pay even that disagreement-about-fairness tax.

22 comments

Comments sorted by top scores.

comment by Dagon · 2022-06-30T19:11:35.685Z · LW(p) · GW(p)

One option, to head off getting into commitment [LW · GW] races [LW · GW] with each other over splits, is to precommit to dividing the value pie according to your notion of fairness.

Umm, that doesn't head off a commitment race, it just tries to win it. If the counterparty has pre-precommitted to some split you don't like, all your precommitment does is prevent a trade that would benefit you both.

All of these ideas have an unstated assumption of repeated games, where you can, over time, adjust the offers you get.  If your counterparties don't cooperate (they're different each time, or have no memory, or are just better at modeling their trade partners than you are at modeling them, or otherwise are more powerful than you), you simply have a worse life than if you'd accepted the split.

Replies from: JBlack
comment by JBlack · 2022-07-01T07:39:45.631Z · LW(p) · GW(p)

That's a very CDT model.

Replies from: Dagon
comment by Dagon · 2022-07-01T16:35:31.989Z · LW(p) · GW(p)

Expand on that a bit - I think it's more of a meta-DT model, and applies less to CDT (because CDT doesn't even acknowledge the equilibrium) than other DTs that take a more expansive view of decision-contingent outcomes.

comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-06-30T17:52:05.403Z · LW(p) · GW(p)

I'd appreciate more spelling out in detail of the proposed algorithm sometime. E.g.

You each accept unfair splits with diminishing probability as those offers seem more unfair, such that it is always lower EV to offer a more unfair division.

Lower EV for your opponent, presumably. So they are disincentivised from offering more unfair divisions to you. But does that mean to implement this strategy you need to correctly guess your opponent's utility function? If you are gullible and believe they have whatever utility function they say they have, can they exploit you by choosing a utility function that makes you still have a reasonably high probability of accepting a pretty unfair deal, and then proposing said unfair deal?

Replies from: tom-shlomi-1, Dagon, JBlack
comment by Tom Shlomi (tom-shlomi-1) · 2022-06-30T18:42:54.327Z · LW(p) · GW(p)

I believe it's the algorithm from https://www.lesswrong.com/posts/z2YwmzuT7nWx62Kfh/cooperating-with-agents-with-different-ideas-of-fairness. [LW · GW] Basically, if you're offered an unfair deal (and the other trader isn't willing to renegotiate), you should accept the trade with a probability just low enough that the other trader does worse in expectation than if they offered a fair trade. For example, if you think that a fair deal would provide $10 to both players over not trading and the other trader offers a deal where they get $15 and you get $4, then you should accept with probability slightly less than 10/15 (i.e., 2/3), so that in expectation they get less than the $10 they would have gotten by offering a fair trade.
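Spelling that arithmetic out as a quick check (the small margin and variable names here are my own, not part of the comment):

```python
fair_gain_each = 10     # the deal I consider fair gives each of us $10 over not trading
their_unfair_gain = 15  # their proposed deal gives them $15 instead
epsilon = 0.01          # any small margin works

p_accept = fair_gain_each / their_unfair_gain - epsilon  # just under 2/3
their_expected_gain = p_accept * their_unfair_gain       # just under $10
print(p_accept, their_expected_gain)                     # ~0.657 and ~9.85
```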

Any Pareto bargaining method is vulnerable to lying about utility functions, and so to have a chance at bargaining fairly, it's necessary to have some idea of what your partner's utility function is. I don't think that using this method for dealing with unfair trades is especially vulnerable to deception, though possibly there's some better way to deal with uncertainty over your partner's utility function.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-07-01T01:17:23.287Z · LW(p) · GW(p)

Thanks. I know it's that algorithm, I just want a more detailed and comprehensive description of it, so I can look at the whole thing and understand the problems with it that remain.

"Any Pareto bargaining method is vulnerable..." Interesting, thanks! I take it there is a proof somewhere of this? Where can I read about this? What is a pareto bargaining method?

I feel like arguably "My bargaining protocol works great except that it incentivises people to try to fool each other about what their utility function is" .... is sorta like saying "my utopian legal system solves all social problems except for the one where people in power are incentivised to cheat/defect/abuse their authority."

Though maybe that's OK if we are talking about AI bargaining and they have magic mind-reading supertechnology that lets them access each other's original (before strategic modification) utility functions?

Replies from: tom-shlomi-1
comment by Tom Shlomi (tom-shlomi-1) · 2022-07-01T17:15:42.598Z · LW(p) · GW(p)

Thanks. I know it's that algorithm, I just want a more detailed and comprehensive description of it, so I can look at the whole thing and understand the problems with it that remain.

It's really a class of algorithms, depending on how your opponent bargains. If the fair bargain (by your standard of fairness) gives X utility to you and Y utility to your partner, then you refuse to accept any other solution which gives your partner at least Y utility in expectation. So if they give you a take-it-or-leave-it offer which gives you positive utility and them Y' > Y utility, then you accept it with probability Y/Y' - ε, such that their expected value from giving you that offer is Y - εY', slightly less than Y. If they have a different standard of fairness which gives you X' utility and them Y' utility but also use Abadarian bargaining, then you should agree to a bargain which gives you X' - ε utility and them Y - ε utility (this is always possible via randomizing over their bargaining solution, your bargaining solution, and not trading, so long as all the bargaining solutions give positive utility to everyone).
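A rough sketch of those two rules in Python, under the simplifying assumption that the utilities X, Y, X', Y' are known positive numbers; the function names, the ε margin, and the degenerate-case handling are my own illustrative choices, not part of the comment:

```python
def accept_take_it_or_leave_it(their_offer_utility, their_fair_utility, epsilon=0.01):
    """Probability of accepting a take-it-or-leave-it offer.

    If the offer gives the other party Y' > Y (where Y is what my standard
    of fairness would grant them), accepting with probability Y/Y' - epsilon
    caps their expected utility at Y - epsilon*Y', below the fair Y.
    """
    if their_offer_utility <= their_fair_utility:
        return 1.0
    return max(0.0, their_fair_utility / their_offer_utility - epsilon)


def compromise_lottery(my_solution, their_solution, epsilon=0.01):
    """Randomize between two fairness standards and not trading.

    my_solution = (X, Y): my fair bargain gives me X and them Y.
    their_solution = (X', Y'): their fair bargain gives me X' and them Y'.
    Returns probabilities (a, b, c) over (my solution, their solution, no
    trade) whose expected payoff is (X' - epsilon, Y - epsilon): I get just
    under what their standard grants me, they get just under what my
    standard grants them, so neither side profits from having held out
    for its own standard.
    """
    (x, y), (xp, yp) = my_solution, their_solution
    det = x * yp - y * xp
    if det == 0:
        raise ValueError("the two solutions are collinear with the no-trade point")
    # Solve a*x + b*xp = xp - eps and a*y + b*yp = y - eps by Cramer's rule.
    a = ((xp - epsilon) * yp - (y - epsilon) * xp) / det
    b = (x * (y - epsilon) - y * (xp - epsilon)) / det
    c = 1.0 - a - b
    # Feasibility (all weights nonnegative) is claimed to hold whenever both
    # bargains give everyone positive utility; assert it rather than assume it.
    assert min(a, b, c) >= 0, "no feasible lottery for these payoffs"
    return a, b, c

# Example: I think fair is (6, 4); they think fair is (4, 6).
print(compromise_lottery((6, 4), (4, 6)))  # expected payoff ~(3.99, 3.99)
```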

"Any Pareto bargaining method is vulnerable..." Interesting, thanks! I take it there is a proof somewhere of this? Where can I read about this? What is a pareto bargaining method?

Sorry, that should actually be Pareto bargaining solution, which is just a solution which ends up on the Pareto frontier. In The Pareto World Liars Prospers [LW · GW] is a good explainer, and https://www.jstor.org/stable/1914235 shows a general result that every bargaining solution which is invulnerable to strategic dishonesty is equivalent to a lottery over dictatorships (where one person gets to choose their ideal solution) and tuple methods (where the possible outcomes are restricted to a set of two).

I feel like arguably "My bargaining protocol works great except that it incentivises people to try to fool each other about what their utility function is" .... is sorta like saying "my utopian legal system solves all social problems except for the one where people in power are incentivised to cheat/defect/abuse their authority."

I agree with this, but also it would be pretty great to have a legal system which would work if people in power didn't abuse their authority; I don't think any current legal system even has that. Designing methods robust to strategic manipulation is an important part of the problem, but not the only part, and I don't think it's unreasonable to focus on other parts, especially since there are a lot of scenarios where approximating your partner's utility function is possible. In particular, if monetary value can be assigned to everything being bargained over, then approximating utility as money is usually reasonable.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-07-02T10:38:46.489Z · LW(p) · GW(p)

Thanks, this was super helpful!

What would you say are the remaining problems that need to be solved, if we assume everyone has a way to accurately estimate everyone else's utility function? The main one that comes to mind for me is, there are many possible solutions/equilibria/policy-sets that get to pareto-optimal outcomes, but they differ in how good they are for different players, and so it's not enough that players be aware of a solution, and it's also not even enough that there be one solution which stands out as extra salient--because players will be hoping to achieve a solution that is more favorable to them and might do various crazy things to try to achieve that. (This is a vague problem statement though, perhaps you can do better!)

Replies from: tom-shlomi-1
comment by Tom Shlomi (tom-shlomi-1) · 2022-07-27T22:26:04.939Z · LW(p) · GW(p)

You're welcome!

The main one that comes to mind for me is, there are many possible solutions/equilibria/policy-sets that get to pareto-optimal outcomes, but they differ in how good they are for different players, and so it's not enough that players be aware of a solution, and it's also not even enough that there be one solution which stands out as extra salient--because players will be hoping to achieve a solution that is more favorable to them and might do various crazy things to try to achieve that.

This seems like it's solved by just not letting your opponent get more utility than they would under the bargaining system you think is fair, no matter what crazy things they do? If there is a bargaining solution which stands out, then agents which strategize over which solution they propose will choose the salient one, since they expect their partner to do the same. I might be misunderstanding what you're getting at, though.


What would you say are the remaining problems that need to be solved, if we assume everyone has a way to accurately estimate everyone else's utility function?

Finding something like a universally canonical bargaining solution would be great, as it would allow agents with knowledge of each other's utility functions to achieve Pareto optimality. I think it's not fully disentangleable from the question of incentivizing honesty, as I could imagine that there is some otherwise great bargaining solution that turns out to be unfixably vulnerable to dishonesty. That said, I do have an intuition that probably most reasonable bargaining solutions thought up in good faith are similar enough to each other that agents using different ones wouldn't end up too far from the Pareto frontier, and so I'm not sure how important it is.

I think my answer is probably figuring out how to deal with strategic successor agents and dealing with counterfactuals. The successor agent problem is similar to the problem of lying about utility functions: if you're dealing with a successor agent (or an agent which has modified its utility function), you need to bargain with it as though it had the utility function of its creator (or its original utility function), and figuring out how to deal with uncertainty over how an agent's values have been modified by other agents or its past self seems important.

Bargaining solutions usually have the property that you can't naively apply them to subgames, as different agents might value the subgames more or less, and an agent might be happy to accept a locally unfair deal for a better deal in another subgame. This is fine for sequential or simultaneous subgames, but some subgames only happen with some probability. Determining what would happen in counterfactual subgames is important for determining the fair solution in the occurring subgames, but verifying what would counterfactually happen is often quite difficult.

In some sense, these are just subproblems of incentivizing honesty generally. I think the problem of incentivizing honesty is the overwhelming importance-weighted bulk of the remaining open problems in bargaining theory (relative to what I know), and it's hard for me to think of an important problem that isn't entangled with that in some way.

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-07-28T00:47:08.950Z · LW(p) · GW(p)

Huh. I don't worry much about the problem of incentivizing honesty myself, because the cases I'm most worried about are cases where everyone can read everyone else's minds (with some time lag). Do you think there's basically no problem then, in those cases?

Replies from: tom-shlomi-1
comment by Tom Shlomi (tom-shlomi-1) · 2022-07-28T00:57:43.248Z · LW(p) · GW(p)

There's still the problem of successor agents and self-modifying agents, where you need to set up incentives to create successor agents with the same utility functions and to not strategically self-modify, and I think a solution to that would probably also work as a solution to normal dishonesty.

I do expect that in a case where agents can also see each other's histories, we can make bargaining go well with the bargaining theory we know (given that the agents try to bargain well, there are of course possible agents which don't try to cooperate well).

Replies from: daniel-kokotajlo
comment by Daniel Kokotajlo (daniel-kokotajlo) · 2022-07-28T11:50:44.101Z · LW(p) · GW(p)

In the cases I'm thinking about you don't just read their minds now, you read their entire history, including predecessor agents. All is transparent. (Fictional but illustrative example: the French AGI and the Russian AGI are smart like Sherlock Holmes, they can deduce pretty much everything that happened in great detail leading up to and during the creation of each other + also they are still running on human hardware at human institutions and thanks to constant leaking and also the offense/defense balance favoring offense, they can see logs of what each other is and was thinking the entire time, including through various rounds of modification-to-successor agent.)

comment by Dagon · 2022-07-01T02:05:22.391Z · LW(p) · GW(p)

I'd like to understand why, if you think your trade partner is willing and able to change their offer based on your algorithm, you don't set your baseline HIGHER than "fair".  If you have the power to manipulate the offer by being known to reject some, you should use that to get an even better deal, right?

Replies from: tom-shlomi-1
comment by Tom Shlomi (tom-shlomi-1) · 2022-07-02T00:06:48.923Z · LW(p) · GW(p)

It's a combination of evidential reasoning and norm-setting. If you're playing the ultimatum game over $10 with a similarly-reasoning opponent, then deciding to only accept an (8, 2) split mostly won't increase the chance they give in, it will increase the chance that they also only accept an (8, 2) split, and so you'll end up with $2 in expectation. The point of an idea of fairness is that, at least so long as there's common knowledge of no hidden information, both players should agree on the fair split. So if, while bargaining with a similarly-reasoning opponent, you decide to only accept fair offers, this increases the chance that your opponent only accepts fair offers, and so you should end up agreeing, modulo factors which cause disagreement on what is fair.

Similarly, fair bargaining is a good norm to set, as, once it is a norm, it allows people to trade on/close to the Pareto frontier while disincentivizing any attempts to extort unfair bargains.

Replies from: Dagon
comment by Dagon · 2022-07-02T01:38:08.300Z · LW(p) · GW(p)

It's a combination of evidential reasoning and norm-setting.

I see the norm-setting, which is exactly what I'm trying to point out.  Norm-setting is outside the game, and won't actually work with a lot of potential trading partners.  I seem to be missing the evidential reasoning component, other than figuring out who has more power to "win" the race.

with a similarly-reasoning opponent

Again, this requirement weakens the argument greatly.  It's my primary objection - why do we believe that our correspondent is sufficiently similarly-reasoning for this to hold?  If it's set up long in advance that all humans can take or leave an 8,2 split, then those humans who've precommitted to reject that offer just get nothing (as does the offerer, but who knows what motivated that ancient alien)?

comment by JBlack · 2022-07-01T07:38:48.759Z · LW(p) · GW(p)

Yes, if it is known that you accept whatever they tell you before setting your fair price then you will probably get bad information and make deals that are bad for you.

For any given credence distribution you might have for their true utility, there are corresponding rejection functions that both disincentivize their offering unfair deals and get you high utility.

comment by Zolmeister · 2022-06-30T23:27:34.675Z · LW(p) · GW(p)

This concept is introduced in Book 1 as the solution to the Ultimatum Game, and describes fairness as the Shapley value.

When somebody offers you a 7:5 split, instead of the 6:6 split that would be fair, you should accept their offer with slightly less than 6/7 probability. Their expected value from offering you 7:5, in this case, is 7 * slightly less than 6/7, or slightly less than 6.

_

Once you've arrived at a notion of a 'fair price' in some one-time trading situation where the seller sets a price and the buyer decides whether to accept, the seller doesn't have an incentive to say the fair price is higher than that; the buyer will accept with a lower probability that cancels out some of the seller's expected gains from trade. [1]

Replies from: Zolmeister
comment by Zolmeister · 2022-06-30T23:37:37.062Z · LW(p) · GW(p)

I found this section, along with dath ilani Governance and SCIENCE!, particularly brilliant.

comment by Daniel V · 2022-06-30T19:08:46.754Z · LW(p) · GW(p)

Related: probabilistic negotiation (linking to my comment) [LW · GW]. 

Because of asymmetric information about demand schedules in the individual one-off context, either you're guessing or accepting their self-reports (i.e., I agree with Kokotajlo and Shlomi). As nice as probabilistic negotiation is in theory, practically you just hope to converge on splitting the surplus, and giving in happens for whoever tires of the negotiation first. Depends on how much you know about your counterpart.

It's much easier to set market prices where you have repeated transactions across participants, so "market" demand schedules (i.e., multiple unitary reservation prices) can be "learned" and the "market price" that enables value-maximization reveals itself. I appreciate that it's harder at the individual level - bringing in probability allows working with individual demand schedules (i.e., multiple probabilistic reservation prices rather than a single unitary reservation price), but bringing in probability doesn't exactly solve the problem because probabilities can only be learned through being furnished knowledge of the generating mechanism (e.g., Yudkowsky and Kennedy) or through repeated observation, the exact things that we assume we lack in this situation and that make this a problem in the first place.

comment by ChristianKl · 2022-06-30T18:21:59.447Z · LW(p) · GW(p)

Whether or not a trade happens depends on whether the traders expect the trade to be valuable and not on the actual value that the participants receive. 

Replies from: korin43
comment by Brendan Long (korin43) · 2022-06-30T20:09:58.596Z · LW(p) · GW(p)

I'm confused about what distinction you're making. What's the difference between the value of the trade and the actual value the participants expect to receive? Is the distinction whether one trader is misled about the deal?

Replies from: ChristianKl
comment by ChristianKl · 2022-06-30T20:26:24.776Z · LW(p) · GW(p)

The expectation of the value is something that exists in the head of the trader before the trade actually happens. The actual value is something that happens at a different layer of abstraction.

Even without anyone intentionally misleading anybody else, it's often hard to know the value of a service.

If I invent a new medical treatment and offer it to a patient, the traditional approach to estimating the actual value needs clinical trials. Prediction-based Medicine [LW · GW] might not need as much work, but it still doesn't change the fundamental fact that the received value is something different than the expected value. 

This is classic "the map is not the territory", consciousness of abstraction. It's surprising to me that consciousness of abstraction and the difference between expected and received value don't feel important to other people. Maybe Korzybski should be read more?