Some Ways Coordination is Hard

post by Zvi · 2019-06-13T13:00:00.443Z · LW · GW · 11 comments

Contents

  Questions
  Duncan’s Example
  Solutions in Duncan’s Example
  Solutions in Raymond’s Example
  The Facebook Exodus Example

Response to (Raymond Arnold at Less Wrong): The Schelling Choice is Rabbit, Not Stag [LW · GW] and by implication Duncan’s Open Problems in Group Rationality

Stag Hunt is the game in which, if everyone gets together, they can hunt the Stag and win big. Those who do not hunt Stag instead hunt Rabbit, and win small. But if even one person decides to hunt Rabbit rather than Stag, everyone hunting Stag loses rather than wins. This makes hunting Stag risky, which makes it riskier still (since others view it as risky, and thus might not do it, and view others as viewing it as risky, making it even more likely they won’t do it, and so on). Sometimes this can be overcome, and sometimes it can’t.
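
For concreteness, here is a minimal sketch of that payoff structure in Python. The structural facts come from the description above (stag pays big only if everyone chooses it, rabbit pays small regardless, a lone stag hunter loses); the specific numbers are illustrative assumptions of mine, since no exact payoffs are fixed here.

```python
# A minimal sketch of the n-player stag hunt payoff structure.
# The numbers below (STAG_WIN, RABBIT_WIN, STAG_LOSS) are illustrative
# assumptions, not taken from Duncan's or Raymond's setups.

STAG_WIN = 5     # "win big" if every player hunts stag
RABBIT_WIN = 3   # "win small" for anyone hunting rabbit, regardless of others
STAG_LOSS = 0    # a stag hunter loses if even one player hunts rabbit

def payoff(my_choice, all_choices):
    """Payoff for one player, given everyone's choices ('stag' or 'rabbit')."""
    if my_choice == "rabbit":
        return RABBIT_WIN
    # Hunting stag only pays off if every single player chose stag.
    return STAG_WIN if all(c == "stag" for c in all_choices) else STAG_LOSS

print(payoff("stag", ["stag", "stag", "stag"]))      # 5: the hunt succeeds
print(payoff("stag", ["stag", "stag", "rabbit"]))    # 0: one defector sinks it
print(payoff("rabbit", ["stag", "stag", "rabbit"]))  # 3: rabbit is safe
```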

Raymond claims that the Schelling point of this game, by default, is Rabbit, not Stag.

Whether or not this is true depends on the exact rules of the game and the exact game state, what one might call the margin of coordination. 

If you haven’t yet, click through at least to Raymond’s article [LW · GW] and his quote from Duncan’s original description, and consider reading Duncan’s full article.

Questions

  1. What is the risk versus reward on the stag hunt? How often must it work to be worth trying?
  2. Can we afford to fail? Can others? For how long?
  3. Can the stag hunt fail even if everyone chooses stag?
  4. Do we have time to iterate or communicate to establish future coordination, and will our actions now act as a signal?
  5. How many people need to cooperate before the stag hunt is worthwhile? Can we afford to lose anyone? Is there a lesser stag we can go after instead with a smaller group?
  6. Is there a particular player who has reason to worry, or might cause others to worry?
  7. Are the players known to be aware of the stag hunt? If there are multiple possible stag hunts, do we all know which one we’re going for?
  8. Are the players confident others know the payoff and agree it is known to be there for everyone? Does everyone even know what stag is and what rabbit is?
  9. Does this group have a history of going on stag hunts? Is going on the stag hunt praiseworthy or blameworthy if no one follows you, or not enough people do?
  10. Do the rabbit hunts have network effects such that failure to coordinate on them is bad for everyone, not only for the person going stag?
  11. Are there players who value relative success rather than absolute success, and want others to fail?
  12. Do we trust other players to behave in their own best interests and trust them to trust others in this way? Do we trust them to act in the best interest of the group knowing the group rewards that?
  13. Do we trust others to do what they say they are going to do?
  14. Are people capable of overcoming small short-term personal incentives to achieve or stand for something worthwhile, to help the group or world, or to cultivate and encourage virtue? Do they realize that cooperation is a thing and is possible at all? Do they realize that It’s Not the Incentives, It’s You [? · GW]?
  15. Is the stag hunt even worth asking all these questions?

Note that most of these require common knowledge. We need everyone to know, and for everyone to know that everyone knows, and that everyone knows that everyone knows, for however many levels it takes. Alternatively, we need habits of stag hunting that are self-sustaining.

Man, coordination is complicated. And hard. So many questions! Such variation.

Duncan’s Example

In Duncan’s example, we have full common knowledge of the situation, and full agreement on payoffs, which is very good. Alas, we still suffer severe problems.

We have a very bad answer to questions two, five and six. And also fourteen. If we lose even one person, the stag hunt fails, and there is no alternative hunt with fewer people. And we have players who have reason to worry, because they can ill afford a failed stag hunt. One of them will be stranded without the ability to even hunt rabbit, should the stag hunt fail.

It seems that everyone involved is reasoning as if each member is looking out mostly or entirely for themselves and their short-term success, and expecting all others to do so as well, even when this is an obviously bad medium-term plan.

The result will often be cascading loss of faith, resulting in full abandonment of the stag hunt. Certainly the stag hunt is no longer the Schelling point of selection, given the risks everyone sees. You wouldn’t do this implicitly, without everyone agreeing to it first, and you’d only do it with an explicit agreement if you had common knowledge of everyone being trustworthy to follow through.

Or, if there were a long history that, the moment everyone had the resources to do it, they coordinated on the stag hunt. But that still only works with common knowledge of the full game state, so getting it without explicit communication is still super rare.

The obvious thing this group can do, in this situation, is to explicitly agree to go on the stag hunt. 

But they’re explicitly already trying to do that, and finding it won’t work, because these people do not trust each other. They fail question thirteen.

What are some other things this group might try? With so many advantages, it seems a shame to give up.

Solutions in Duncan’s Example

D1. Alexis gives one utility to Elliot (solve question two)

This actually should be enough! Alexis gives one of their 15 banked resources to Elliot. Now everyone has at least 6 banked resources, and will be able to choose rabbit even if the hunt fails. This makes the situation clear to everyone, and removes the worry that Elliot will need to choose rabbit.
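
As a sanity check on the arithmetic, here is a hedged sketch of D1. Only Alexis’s 15 banked resources and the 6-resource buffer come from the example; Elliot starting at 5 is inferred from the fact that a single transfer makes everyone safe, and the rest (the helper names, treating 6 as a hard safety threshold) are my assumptions, not Duncan’s exact rules.

```python
# A hedged sketch of option D1. Alexis's 15 and the 6-resource buffer come
# from the example above; Elliot's starting 5 is inferred, and treating 6 as
# a hard safety threshold is an assumption about Duncan's setup.

BUFFER_NEEDED = 6  # resources needed to still hunt rabbit after a failed stag hunt

def everyone_safe(banked):
    """True if every player could absorb a failed stag hunt."""
    return all(amount >= BUFFER_NEEDED for amount in banked.values())

banked = {"Alexis": 15, "Elliot": 5}  # other players omitted / assumed safe

print(everyone_safe(banked))  # False: Elliot cannot afford a failure

# D1: Alexis transfers one banked resource to Elliot.
banked["Alexis"] -= 1
banked["Elliot"] += 1

print(everyone_safe(banked))  # True: the stag hunt is now safe to attempt
```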

D2. Wait until next hunt (solve question two)

Even simpler than option D1. If everyone hunts rabbit now, everyone goes up in stored resources, and next time has enough buffer to coordinate on a stag hunt.

Both point to the principle of slack that Raymond reminds us about, and extend it to the whole group. Don’t only keep your own slack high; don’t ask anyone else to fully deplete theirs under normal circumstances, even if it means everyone doing less efficient things for a bit.

D3. Build trust (solve question thirteen)

Note that if the group is sufficiently unreliable, that alone will prevent all stag hunts no matter what else is done. If the group could trust each other to follow through, knew that their words had meanings and promises meant something, then they could coordinate reliably despite their other handicaps here. With sufficient lack of trust, the stag hunt isn’t positive expectation to participate in, so there’s no point and everyone hunts rabbits until this is fixed.

D4. Use punishment or other incentives

A solution for any game-theoretic situation is to change the rules of the game, by coordinating to reallocate blame and resources based on actions. This is often the solution within the game, but can also happen by extending the situation outside the game. Raymond’s example shows that the game of Stag Hunt often causes punishment to take place on its own. Using enough of it, reliably enough, predictably enough, should work, at least for the failures in Duncan’s example.

Improving any of the other answers would also help tilt the scales. Stag hunts are threshold effects at their core, so helping the situation in any way improves your chances more than one might think, and any problem causes more issues than you’d naively predict.
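
One way to see the threshold point quantitatively, under a simplifying assumption of mine (not the post’s) that each player independently chooses stag with some probability: the hunt succeeds only if every player does, so each player’s reliability multiplies into the total, and small changes anywhere move the overall odds by more than you’d naively expect.

```python
# A toy illustration of the threshold claim above, under the simplifying
# assumption (mine, not the post's) that each of N players independently
# chooses stag with some probability. Success requires all N to choose stag.

def success_probability(reliabilities):
    """P(everyone hunts stag), given each player's independent probability."""
    prob = 1.0
    for p in reliabilities:
        prob *= p
    return prob

print(round(success_probability([0.9] * 5), 3))          # 0.59

# Nudging every player from 90% to 95% raises the odds by roughly a third...
print(round(success_probability([0.95] * 5), 3))         # 0.774

# ...while a single shaky player (question six) drags everyone down.
print(round(success_probability([0.9] * 4 + [0.5]), 3))  # 0.328
```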

Solutions in Raymond’s Example

Raymond’s scenario faces different problems than Duncan’s. Where Duncan had problems with questions 2, 5, 6, 13 and 14, Raymond faced a problem with questions 3, 7 and especially 8. He thought that staying focused had the big payoff, while his coworker thought that staying narrowly focused was a failure mode.

Coordination is especially hard if some of the people coordinating think that the result of coordination would be bad. What are the solutions now?

R1. Talk things over and create common knowledge (solve seven and eight)

R2. Propose a different coordinated approach designed to appeal to all participants

Once Raymond and his colleague talked things over and had common knowledge of each other’s preferences for conversation types, coordination became possible. It became clear that Raymond’s preferred approach didn’t count as a stag hunt, because it didn’t benefit all parties, and there was a disagreement about whether it was net beneficial slash efficient to do it. Instead, a hybrid approach seemed likely to be better.

In cases where there is a clearly correct approach, making that clear to everyone, and knowing this has happened, makes it much more likely that coordination can successfully take place. In cases where there turns out not to be such an approach, this prevents those who previously thought such an approach existed from having false expectations, and minimizes conflict and frustration.

R3. Bid on what approach to take

Often coordination on any solution is better than failure to coordinate. Some level of meandering versus focus that all parties agree to is better than fighting over that ratio and having the meeting collapse, provided the meeting is worthwhile. Thus, if the participants can’t agree on what’s best, or find a solution that works well enough for everyone to prompt coordination, then a transfer of some kind can fix that.

Doing this with dollars in social situations is usually a terrible idea. That introduces bad dynamics, in ways I won’t go into here. Instead, one should strive to offer similar consideration in other coordination questions or trade-offs in the future. The greater the social trust, the more implicit this can be while still working. This then takes the form of an intentionally poorly specified number of amorphous points, which can then be cashed in at a future time. The points matter. They don’t need to balance, but they can’t be allowed to get too imbalanced.

The Facebook Exodus Example

A while back, I realized I was very much Against Facebook. The problem was that the entire rationalist community was doing most of their discourse and communication there, as was a large portion of my other friend groups. I’d failed to find a good Magic: The Gathering team that didn’t want to do its (if you can still call it that) coordination on Facebook. This was a major factor in ending my Magic comeback.

Many, but far from all, of those I queried agreed that Facebook was terrible and wished for a better alternative. But all of them initially despaired. The problem looked too hard. The network effects were too strong. Even if we could agree Facebook was bad, what was the alternative? What could possibly meet everyone’s needs as well as Facebook was doing, even if it was much better at not ruining lives and wasting time? Even if a good alternative was found, could we get everyone to agree on it?

Look at that list of questions. Consider that success depends to a large extent on common knowledge of the answers to those questions.

  1. What is the risk versus reward on the stag hunt? How often must it work to be worth trying?
  2. Can we afford to fail? Can others? For how long?
  3. Can the stag hunt fail even if everyone chooses stag?
  4. Do we have time to iterate or communicate to establish future coordination, and will our actions now act as a signal?
  5. How many people need to cooperate before the stag hunt is worthwhile? Can we afford to lose anyone? Is there a lesser stag we can go after instead with a smaller group?
  6. Is there a particular player who has reason to worry, or might cause others to worry?
  7. Are the players known to be aware of the stag hunt? If there are multiple possible stag hunts, do we all know which one we’re going for?
  8. Are the players confident others know the payoff and agree it is known to be there for everyone? Does everyone even know what stag is and what rabbit is?
  9. Does this group have a history of going on stag hunts? Is going on the stag hunt praiseworthy or blameworthy if no one follows you, or not enough people do?
  10. Do the rabbit hunts have network effects such that failure to coordinate on them is bad for everyone, not only for the person going stag?
  11. Are there players who value relative success rather than absolute success, and want others to fail?
  12. Do we trust other players to behave in their own best interests and trust them to trust others in this way? Do we trust them to act in the best interest of the group knowing the group rewards that?
  13. Do we trust others to do what they say they are going to do?
  14. Are people capable of overcoming small short-term personal incentives to achieve or stand for something worthwhile, to help the group or world, or to cultivate and encourage virtue? Do they realize that cooperation is a thing and is possible at all? Do they realize that It’s Not the Incentives, It’s You [? · GW]?
  15. Is the stag hunt even worth asking all these questions?

Which ones were problems?

Most of them.

We had (1) uncertain risk versus reward of switching to another platform or set of platforms, (3) even if coordination on the switch was successful, with (2) continuous and potentially painful social consequences and blameworthiness for being ‘out of the loop’ even temporarily. To be better off, often (5) the entire group would have to agree to the new location and method, with (6)(8) some people who would dislike any given proposal, or like staying with Facebook because they didn’t agree with my assessments, or because they’d need to coordinate elsewhere. (10) The attempt would hurt our network effects and cause non-trivial communication interruptions, even if it eventually worked. (7) Getting the word out would be a non-trivial issue, as this would include common knowledge that the word was out and the coordination was on, when it likely wasn’t going to be on at any given time.

(12) Facebook has many addictive qualities, so even many people who would say they ‘should’ quit or even were quitting would often fail. (13) Even when people agreed to switch and announced this intention, they’d often find themselves coming back.

There was a lot of excusing one’s actions because of (14) Network Effects and The Incentives.

A lot of people (15) reasonably didn’t want to even think about it, under these circumstances.

The good news is we had (4) plenty of time to make this work, and (9) even most of those who didn’t think the switch was a good idea understood that it was a noble thing to attempt and would make sense under some scenarios. And no one was (11) thinking of their relative social media position. And of course, the stag hunt wouldn’t automatically or fully fail if one person didn’t cooperate, and if we got enough cooperation, critical mass would take over.

But that’s still 11 out of 15 problems that remained importantly unsolved.

The better news was we had one other important advantage. I hated hunting rabbit. Rabbit hunting was not a productive exercise for me, and I’d rather be off hunting stag unsuccessfully on my own than hunting rabbit. It’s not a great spot, and certainly there would be better uses of time, but it was a great advantage that I didn’t feel too bad about failures. Otherwise, the whole thing would never have had a chance.

It also helped that many others were increasingly making similar cases, for a wide variety of reasons, some of which I don’t even agree with or don’t think are big deals.

The solution I went with was threefold.

F1. First, to explain why I believed Facebook was awful, in order to help create common knowledge, starting with an article, then continuing to make the case.

F2. Second, to go out and start stag hunting on my own, and make it clear I wasn’t going anywhere. This does not work when stag hunts are all-or-nothing with a fixed ‘all’ group, but that is rare. More often, stag hunts work if the people others count on do their jobs, rather than requiring everyone who might show up in theory to show up and do their job. That’s a crucial distinction, and a dramatic drop in difficulty.

F3. Third, to reward those who wanted to make the switch to blogs or other less toxic communication forms with engagement, praise and, when helpful, direct encouragement and assistance. To some extent there was shaming of Facebook participation, but that’s not much use when everyone’s doing it.

Without the effort to first create common knowledge, the attempt would have had no chance of success. And of course, a combination of factors helped out, from the emergence of Less Wrong 2.0 to a variety of others waking up to the Facebook problem at about the same time, for their own reasons.

The solution of ‘be the coordination you want to see in the world even if it doesn’t make sense on its own’ is very powerful. That’s how Jason Murray kick-started the New York rationalist group, and how many other groups have started – show up at the same time and same place, even if you end up eating alone, to ensure others can come. Doing it for a full-on stag hunt with fixed potential participation is more expensive, but it is still a very strong signal that can encourage a cascade. We need to accept that coordination is hard, and therefore solutions that are expensive are worth considering.

Solving coordination problems is not only expensive, it’s also often highly unpleasant and non-intuitively difficult work that superficially doesn’t look like the ‘real work.’ Thus, those who solve coordination problems often end up resented as people who didn’t do the real work and are pulling down the big bucks, as something one should obviously be able to get along without, which lowers the rewards and thus discourages this hard and risky work. Often many people correctly say ‘oh, the problem is people can’t coordinate,’ then go out to solve it and make things worse, because the problems are indeed hard, and competition to solve them makes them harder.

If everyone required to successfully hunt a stag can agree on common knowledge of what the stag is, that the stag would be well worth hunting, and how and when the stag is hunted, one could argue that’s a lot more than half of the battle. The rest is still hard, but the hardest part really, really should be over. Ninety percent of life after that, one could say, is the virtue of reliably showing up.


11 comments


comment by Dagon · 2019-06-13T16:48:35.560Z · LW(p) · GW(p)

I agree with and support your conclusions - "just start, even if the first hunt might fail" is excellent for cases where the problem is mostly one of coordination - there needs to be common knowledge that a stag hunt has any chance of working at all.

I worry that the hunting model is so oversimplified as to obscure the OTHER hard parts about densely-connected interpersonal behaviors (aka "group behaviors"), most notably actual unaligned values (my ears prick up on this one whenever someone uses "utility" as if it were a resource), and akrasia (internal unacknowledged misalignment), and actual capability differences (some people can bag a rabbit, but actually make a stag hunt less likely to succeed).

And it _ALSO_ obscures some solutions. Find ways to make them non-exclusive. I haven't left Facebook, but I'm happy to use other mechanisms for groups which have. Use rabbit guts as stag snares, letting individual research contribute in smaller ways to the overall goal. If you like stags, and your current tribe likes rabbits, change tribes.

comment by Raemon · 2021-01-04T04:57:46.456Z · LW(p) · GW(p)

tl;dr – I'd like to see further work that examines a ton of examples of real coordination problems that rationalists have run into ("stag hunt" shaped and otherwise), and then attempt to extract more general life lessons and ontologies from that. 

...

1.5 years later, this post still seems good for helping to understand the nuances of stag hunts, and I think was worth a re-read. But something that strikes me as I reread it is that it doesn't have any particular takeaway. Or rather, it doesn't compress easily. 

I spent 5 minutes seeing if I could distill it down in some fashion – either distill the 15 questions into a smaller higher-order-cluster, or distill the life-lessons from the various examples. But that ended up feeling like the wrong approach. These three examples were the ones that Zvi had available at the time, but might not be the best to generalize from.

This is maybe all fine – not everything needs to compress easily. Sometimes it's just useful to read through some considerations and examples. I personally am glad to have re-read this post, because I'm thinking a lot about coordination now, and having some meaty examples is useful. But I'm not sure this genre of post makes sense for a Best Of book.

But I think there are two different questions I might ask about this post, re: Longterm Curation.

  • Does this make sense to include in a Best of 2019 book?
  • Does this make sense to include in a dedicated Coordination Book (either comprehensive book of essays, or perhaps explicit textbook?)

For the first question, it depends a lot on what else is up for consideration. My guess is we have enough great posts that this one wouldn't make my personal cut, but I think the post is solid and I'd feel good about including it if it turned out to be in the top 40.

For the second question, I think it'd be net-positive to include this post in a Coordination Sequence. But for the Coordination Textbook, it feels like not-quite-the-right thing and I suspect there's a better version of it out there waiting to be written.

What I'd really like to see is someone who does "Babble Challenge: List 50 coordination failures among rationalists and nearby folk", and then goes through most of those and explores what the shape of the "game" was (was it a stag hunt or something else?) and then tries to draw out some commonalities and life lessons.

Right now I'm real-into-stag-hunts, but I'm not sure that's actually the right frame for most problems. (It feels more one-size-fits-all than "Prisoner's Dilemma", but not sufficiently. I'll also be commenting about this in my self-review of "Schelling Choice is Rabbit.")

Replies from: Zvi
comment by Zvi · 2021-01-15T20:15:08.203Z · LW(p) · GW(p)

I just reviewed the OP this post responds to, and it sounds like we're thinking along similar lines in many ways - I'd like to see a Big Book of Coordination at some point, and hold both posts back until then, or if people like both enough include both.

Here is my offer: Cherry-picking examples is bad, and the worry that someone cherry-picked them is also bad. Appearance of impropriety, register your experiments, other neat stuff like that. So if Raemon or someone else compiles a list of the concrete examples, I'll make at least an ordinary effort to do a post about them, intended to be similar to things like Simple Rules of Law.

comment by Raemon · 2020-12-13T05:28:53.716Z · LW(p) · GW(p)

This felt like one of the important pieces of the "whether/how to stag hunt!?" question, which has been central to my thinking over the past few years.

comment by Mary Chernyshenko (mary-chernyshenko) · 2019-06-28T18:20:28.776Z · LW(p) · GW(p)

There's got to be a name for a "stag hunt, which if successful requires reminding people that a stag has been gained". Seems like the average rabbit doesn't have this problem.

comment by Ben Pace (Benito) · 2020-12-05T03:55:57.223Z · LW(p) · GW(p)

This post helped me understand when and how to coordinate around new solutions, and gave a lot more depth to the ideas of stag and rabbit hunts.

comment by johnswentworth · 2019-06-13T16:25:27.608Z · LW(p) · GW(p)

Is "shilling point" some new thing I've never heard of, or is this just another spelling of "Schelling point"? I assume the latter, but it sounds like a name someone would come up with for a concept similar-to-but-slightly-different-from a Schelling point.

Replies from: gjm
comment by gjm · 2019-06-13T17:07:27.753Z · LW(p) · GW(p)

Zvi has "Shilling" in the title of Raymond's earlier post, which definitely said "Schelling", so I bet it's just a mis-schpelling.

Replies from: Raemon
comment by Raemon · 2019-06-13T18:55:16.622Z · LW(p) · GW(p)

I got confused when I saw the misspelling multiple times and wondered if it were some kind of wordplay.

Replies from: Zvi
comment by Zvi · 2019-06-13T19:23:43.006Z · LW(p) · GW(p)

Edit: Nope, not wordplay, fixing it now.

Feels like an autocorrect problem, but could also be my brain caching it wrong.

Replies from: Vaniver
comment by Vaniver · 2019-06-14T22:26:12.053Z · LW(p) · GW(p)

I fixed it more.