Simulacrum 3 As Stag-Hunt Strategy
post by johnswentworth · 2021-01-26T19:40:42.727Z
Reminder of the rules of Stag Hunt:
- Each player chooses to hunt either Rabbit or Stag
- Players who choose Rabbit receive a small reward regardless of what everyone else chooses
- Players who choose Stag receive a large reward if-and-only-if everyone else chooses Stag. If even a single player chooses Rabbit, then all the Stag-hunters receive zero reward.
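To make the structure concrete, here is a minimal sketch in code. The specific reward values are illustrative assumptions, not part of the game's definition; what matters is that the Stag reward is large, the Rabbit reward is small, and Stag pays nothing unless everyone cooperates:

```python
# Minimal sketch of the Stag Hunt payoffs described above.
# RABBIT_REWARD and STAG_REWARD are illustrative values, not canonical ones.

RABBIT_REWARD = 1   # small, guaranteed
STAG_REWARD = 5     # large, but only if *everyone* hunts Stag

def payoff(my_choice: str, others: list[str]) -> int:
    """Payoff for one player, given everyone else's choices."""
    if my_choice == "rabbit":
        return RABBIT_REWARD
    # Stag pays off only if every other player also chose Stag.
    return STAG_REWARD if all(c == "stag" for c in others) else 0

# Both all-Stag and all-Rabbit are equilibria: no single player gains
# by unilaterally switching away from either.
assert payoff("stag", ["stag"] * 4) > payoff("rabbit", ["stag"] * 4)
assert payoff("rabbit", ["rabbit"] * 4) > payoff("stag", ["rabbit"] * 4)
```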
From the outside, the obvious choice is for everyone to hunt Stag. But in real-world situations, there’s lots of noise and uncertainty, and not everyone sees the game the same way, so the Schelling choice is Rabbit [LW · GW].
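A quick way to see why: if each other player independently hunts Stag with probability p, then Stag only beats Rabbit in expectation when p is quite high. A sketch, using the same illustrative payoffs as above:

```python
# Why noise favors Rabbit: with 5 players and the illustrative payoffs above
# (Rabbit pays 1, Stag pays 5), Stag is only worth it if you're quite
# confident in each of the other players.

def expected_stag_payoff(p: float, n_others: int = 4, stag_reward: float = 5.0) -> float:
    # Stag pays off only if all n_others independently choose Stag.
    return stag_reward * p ** n_others

# Break-even: 5 * p^4 = 1  =>  p = 0.2 ** 0.25 ≈ 0.67.
print(expected_stag_payoff(0.6))  # ≈ 0.65 < 1: Rabbit is the better bet
print(expected_stag_payoff(0.9))  # ≈ 3.28 > 1: Stag is the better bet
```

Anything short of roughly 67% confidence in each individual player makes Rabbit the better bet - so under realistic noise and uncertainty, the group defaults to Rabbit.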
How does one make a Stag hunt happen, rather than a Rabbit hunt, even though the Schelling choice is Rabbit?
If one were utterly unscrupulous, one strategy would be to try to trick everyone into thinking that Stag is the obvious right choice, regardless of what everyone else is doing.
Now, tricking people is usually a risky strategy at best - it’s not something we can expect to work reliably, especially if we need to trick everyone. But this is an unusual case: we’re tricking people in a way which (we expect) will benefit them. Therefore, they have an incentive to play along.
So: we make our case for Stag, try to convince people it’s the obviously-correct choice no matter what. And… they’re not fooled. But they all pretend to be fooled. And they all look around at each other, see everyone else also pretending to be fooled, and deduce that everyone else will therefore choose Stag. And if everyone else is choosing Stag… well then, Stag actually is the obvious choice. Just like that, Stag becomes the new Schelling point.
We can even take it a step further.
If nobody actually needs to be convinced that Stag is the best choice regardless, then we don’t actually need to try to trick them. We can just pretend to try to trick them. Pretend to pretend that Stag is the best choice regardless. That will give everyone else the opportunity to pretend to be fooled by this utterly transparent ploy, and once again we’re off to hunt Stag.
This is simulacrum 3 [? · GW]: we’re not telling the truth about reality (simulacrum 1), or pretending that reality is some other way in order to manipulate people (simulacrum 2). We’re pretending to pretend that reality is some other way, so that everyone else can play along.
In The Wild
We have a model for how-to-win-at-Stag-Hunt. If it actually works, we’d expect to find it in the wild in places where economic selection pressure favors groups which can hunt Stag. More precisely: we want to look for places where the payout increases faster-than-linearly with the number of people buying in. Economics jargon: we’re looking for increasing marginal returns.
Telecoms, for instance, are a textbook example. One telecom network connecting fifty cities is far more valuable than fifty networks which each only work within one city. In terms of marginal returns: the fifty-first city connected to a network contributes more value than the first, since anyone in the first fifty cities can reach a person in the fifty-first. The bigger the network, the more valuable it is to expand it.
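As a toy model, suppose (purely for illustration) that a network's value is proportional to the number of pairs of cities it connects. Then each new city is worth more than the last:

```python
# Toy model: network value proportional to the number of connected pairs.
# This is an illustrative assumption, not a claim about actual telecom economics.

def network_value(n_cities: int) -> int:
    return n_cities * (n_cities - 1) // 2  # number of pairs that can reach each other

def marginal_value(n_cities: int) -> int:
    # Value added by connecting the n-th city: one new pair per existing city.
    return network_value(n_cities) - network_value(n_cities - 1)

print(marginal_value(2))   # 1: the second city adds a single connection
print(marginal_value(51))  # 50: the fifty-first city adds fifty connections
```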
From an investor’s standpoint, this means that a telecom investment is likely to have better returns if more people invest in it. It’s like a Stag Hunt for investors: each investor wants to invest if-and-only-if enough other investors also invest. (Though note that it’s more robust than a true Stag Hunt - we don’t need literally every investor to invest in order to get a big payoff.)
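In code, the investor game looks like a Stag Hunt with a threshold rather than a unanimity requirement - a sketch, with an illustrative threshold and payoffs:

```python
# Softer Stag Hunt for investors: the big payoff needs "enough" participants,
# not literally all of them. Threshold and payoffs are illustrative assumptions.

def investor_payoff(i_invest: bool, n_investors: int, threshold: int = 30) -> float:
    if not i_invest:
        return 1.0  # safe outside option, regardless of what others do
    # Past the threshold, a single holdout no longer ruins the hunt.
    return 5.0 if n_investors >= threshold else 0.0

print(investor_payoff(True, 29))  # 0.0: not enough co-investors yet
print(investor_payoff(True, 42))  # 5.0: past the threshold, the Stag pays off
```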
Which brings us to this graph, from T-mobile’s 2016 annual report (second page):
Fun fact: that is not a graph of those numbers. Some clever person took the numbers, and stuck them as labels on a completely unrelated graph. Those numbers are actually near-perfectly linear, with a tiny amount of downward curvature.
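(The report's actual figures aren't reproduced here, but here's how one could check a claim like that: take second differences of the series. The numbers below are hypothetical placeholders, purely to show the method.)

```python
# Checking "near-linear with slight downward curvature" via second differences.
# These values are hypothetical placeholders, NOT the figures from the report.

values = [29.0, 33.0, 36.8, 40.4, 43.8]

first_diffs = [b - a for a, b in zip(values, values[1:])]
second_diffs = [b - a for a, b in zip(first_diffs, first_diffs[1:])]

print(first_diffs)   # roughly constant => roughly linear growth
print(second_diffs)  # small and negative => slight downward curvature
# A genuinely accelerating graph would show *positive* second differences.
```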
Who is this supposed to fool, and to what end?
This certainly shouldn't fool any serious investment analyst. They'll all have their own spreadsheets and graphs forecasting T-mobile's growth. Unless T-mobile's management deeply and fundamentally disbelieves the efficient markets hypothesis, this isn't going to inflate the stock price.
It could just be that T-mobile's management were themselves morons, or had probably-unrealistic models of just how moronic their investors were. Still, I'd expect competition (both market pressure and investor pressure in shareholder/board meetings) to weed out that level of stupidity.
My current best guess is that this graph is not intended to actually fool anyone - at least not anyone who cares enough to pay attention. This graph is simulacrum 3 behavior: it’s pretending to pretend that growth is accelerating. Individual investors play along, pretending to be fooled so that all the investors will see them pretending to be fooled. The end result is that everyone hunts the Stag: the investors invest in T-mobile, T-mobile uses the investments to continue expansion, and increasing investment yields increasing returns because that’s how telecom networks work.
… Well, that’s almost my model. It still needs one final tweak.
I’ve worded all this as though T-mobile’s managers and investors are actually thinking through this confusing recursive rabbit-hole of a strategy. But that’s not how economics usually works [LW · GW]. This whole strategy works just as well when people accidentally stumble into it. Managers see that companies grow when they “try to look good for investors”. Managers don’t need to have a whole gears-level model of how or why that works, they just see that it works and then do more of it. Likewise with the investors, though to a more limited extent.
And thus, a maze [? · GW] is born: there are real economic incentives for managers to pretend (or at least pretend to pretend) that the company is profitable and growing and all-around deserving of sunglasses emoji, regardless of what’s actually going on. In industries where this sort of behavior results in actually-higher profits/growth/etc, economic pressure will select for managers who play the game, whether intentionally or not. In particular, we should expect this sort of thing in industries with increasing marginal returns on investment.
Takeaway
Reality is that which remains even if you don’t believe it. Simulacrum 3 is that which remains only if enough people pretend (or at least pretend to pretend) to believe it. Sometimes that’s enough to create real value - in particular by solving Stag Hunt problems. In such situations, economic pressure will select for groups which naturally engage in simulacrum-3-style behavior.
Comments
comment by Yoav Ravid · 2021-01-28T07:08:12.680Z · LW(p) · GW(p)
Interestingly, I think the current coordination around the GameStop short squeeze can be explained with this framework.
If WallStreetBets tried to do this coordination with the goal of making money, they would probably fail. But that's not the stated goal. For many, the goal is entertainment, sticking it to the man, or something else unrelated to making money. Many even say they don't care if they lose money on this, because that's not the point.
So in a way, they're pretending to value the stock, and they're coordinating to pretend.
I think the benefit of having a non-monetary goal in this case is that it changes the incentive structure. If the goal were to make money, there would be much more incentive to defect, and not everyone would be able to win. If the point is to bankrupt those who shorted these companies (or some other non-monetary goal), then everyone can "win" (even if they lose money), and it's easier to coordinate.
I don't know if it goes all the way to level 3, but I still find this an interesting connection.
↑ comment by TurnTrout · 2021-01-28T22:56:23.749Z · LW(p) · GW(p)
This is a great point.
I think the WSB situation is fascinating [LW(p) · GW(p)] and I hope that someone does a postmortem once the dust has settled. I think it contains a lot of important lessons with respect to coordination problems. See also Eliezer's Medium post.
↑ comment by Daniel Kokotajlo (daniel-kokotajlo) · 2021-01-29T19:55:42.941Z · LW(p) · GW(p)
Very good point! I think their slogan "we like the stock" is another example of this, one that is more blatant and self-aware.
comment by johnswentworth · 2021-01-31T22:23:48.080Z · LW(p) · GW(p)
I've been drawing more connections the more I think about this.
Here's one: belief-in-belief [LW · GW]. Humans seem hard-wired to sometimes believe that they believe things different from their aliefs (i.e. their "real" beliefs as implied by their actions). Why? The usual answer is handwave politics handwave Machiavellian intelligence hypothesis handwave. But what are the actual concrete mechanisms by which humans are able to gain by being good at "politics" in this particular way? What exactly are these "politics", what is the form of the game, how do "politics" have such large evolutionary rewards?
One possible answer is that belief-in-belief evolved to solve coordination games, via exactly the sort of strategy in this post.
Example: a group is organizing a lion hunt, and the tribe's shaman blesses them with lion-protection. They don't want to actually believe that they're magically protected against lions; that would imply terrible tactics like "casually stroll in and pick a one-on-one fight with a lion". Nor do they want to pretend to believe that they're magically protected; that would at best trick other members of the tribe into thinking they're magically protected against lions, which would just mean other tribe-members get slaughtered by bad tactics. (Yes, evolution is usually happy when competitors die, but in this case I'm assuming that humans in a larger tribe have higher reproductive fitness than in a smaller tribe or alone.) What the hunters really want is to pretend-to-pretend to be protected from lions (so everyone goes on the hunt), while also acting like they are not protected (e.g. for tactical purposes).
If the goal is to pretend-to-pretend, without actually believing or pretending to actually believe, then belief-in-belief is the ideal tool.
↑ comment by Viliam · 2021-02-04T00:48:12.739Z · LW(p) · GW(p)
Seems like, step by step, every irrational human behavior will be explained as a higher form of rationality. At the end, we will realize that the only truly irrational people on this planet are the so-called rationalists.
(Not completely serious, but...)
↑ comment by PatrickDFarley · 2021-02-11T17:08:07.243Z · LW(p) · GW(p)
It's easy to say this if you're surrounded by nerdy types who stubbornly refuse to leave simulacrum 1. But have you looked at other people these days??
Look at the Midwestern mom who just blew another $200 on the latest exercise fad that is definitely not going to give her the body she wants. Look at every business that went under because they failed to measure what really mattered. Look at anyone whose S3 sentiment has been so easily hijacked and commoditized by the social media outrage machines.
Nah, I'm thoroughly convinced that there is still an advantage in knowing what's actually true, which means seeing the S levels for what they are.
↑ comment by Causal Chain (causal-chain) · 2022-12-10T21:45:12.859Z · LW(p) · GW(p)
This seems like a reasonable mechanism, but I thought we already had one: belief-in-belief makes it easier to lie without being caught.
↑ comment by Elizabeth (pktechgirl) · 2022-12-11T06:06:48.502Z · LW(p) · GW(p)
I can't tell if it's related in a way relevant to the post, but the lion anecdote definitely reminds me of this description of CA's vaccine rollout. CA wanted to make sure vaccines were distributed evenly or disproportionately towards disprivileged people. One effort towards this was zipcode restrictions, enforced with ID requirements. Unfortunately ID and proof of address are exactly the kind of things that privilege helps you access. Being poor, moving a lot, working weird hours, lacking legal right to be in the country... all make you less likely to have ID. The article alleges that the (unspoken) plan was to just not enforce those rules against people who looked disprivileged, but not every worker got the memo and lots of people who everyone wanted to have a vaccine were turned away because they didn't have a bill with their address on it.
comment by romeostevensit · 2021-01-26T23:44:21.184Z · LW(p) · GW(p)
I like this as an explanation for babblers, people who stumble on simulacrum level 3 but are clueless and thus are totally disconnected from the original reasons for all these necessary coordination points. They've never seen a stag or rabbit but know that saying "stag!" emphatically to people gets agreement. This happens intergenerationally as parents forget to alert their children to the actual reasons for things. Having observed this happen with millennials, I am scared of what we are all collectively missing because older generations literally just forgot to tell us.
↑ comment by Pee Doom (DonyChristie) · 2021-01-27T02:15:11.235Z · LW(p) · GW(p)
This happens intergenerationally as parents forget to alert their children to the actual reasons for things. Having observed this happen with millennials, I am scared of what we are all collectively missing because older generations literally just forgot to tell us.
What do you think we are missing?
↑ comment by Gunnar_Zarncke · 2021-01-27T13:49:46.905Z · LW(p) · GW(p)
See also Lost Purposes [LW · GW], Old Rules With Forgotten Reasons
comment by Gunnar_Zarncke · 2021-01-26T20:28:48.957Z · LW(p) · GW(p)
This reminded me of a summary of the job of real estate developers by Scott Alexander:
I started the book with the question: what exactly do real estate developers do? ... Why don’t you or I take out a $100 million loan from a bank, hire a company to build a $100 million skyscraper, and then rent it out for somewhat more than $100 million and become rich?
As best I can tell, the developer’s job is coordination. This often means blatant lies. The usual process goes like this: the bank would be happy to lend you the money as long as you have guaranteed renters. The renters would be happy to sign up as long as you show them a design. The architect would be happy to design the building as long as you tell them what the government’s allowing. The government would be happy to give you your permit as long as you have a construction company lined up. And the construction company would be happy to sign on with you as long as you have the money from the bank in your pocket. Or some kind of complicated multi-step catch-22 like that. The solution – or at least Trump’s solution – is to tell everybody that all the other players have agreed and the deal is completely done except for their signature. The trick is to lie to the right people in the right order, so that by the time somebody checks to see whether they’ve been conned, you actually do have the signatures you told them that you had.
Now I wonder whether it is actually "blatant lies" or more like pretending to pretend to develop the real estate.
↑ comment by Ratios · 2021-12-02T16:34:19.984Z · LW(p) · GW(p)
Relevant Joke:
I told my son, “You will marry the girl I choose.”
He said, “NO!”
I told him, “She is Bill Gates’ daughter.”
He said, “OK.”
I called Bill Gates and said, “I want your daughter to marry my son.”
Bill Gates said, “NO.”
I told Bill Gates, “My son is the CEO of World Bank.”
Bill Gates said, “OK.”
I called the President of World Bank and asked him to make my son the CEO.
He said, “NO.”
I told him, “My son is Bill Gates’ son-in-law.”
He said, “OK.”
This is how politics works.
↑ comment by Raemon · 2021-01-26T21:05:32.008Z · LW(p) · GW(p)
Regarding the "tell everyone that everyone else has signed", is this an issue that can be solved with assurance contracts, or is there vague politics involved where you're not even willing to sign your name to a thing if you're not confident it's going to succeed?
I'd expect a fair amount of optimization to have gone into finding assurance contract solutions if they actually worked in this instance.
↑ comment by Gunnar_Zarncke · 2021-01-26T23:18:35.907Z · LW(p) · GW(p)
I have seen this in real life on a small scale: getting a startup off the ground. Investors want to see a business plan, a capable team, and prospective customers. Employees want to work for a company that has a chance to succeed and is able to pay their salary. And you need customers who want a supplier with a track record and great products. Assurance contracts would work even less well than in the real estate developer case, because the volume is too small - and even worse, you often don't know what your product or customers will be in the end. How does this work at all? It is not blatant lies. It is more like coming up with a great future state that appears doable and positive, and getting some tentative commitment. With this, you go to the next person(s) and grow the vision. And include the latest tentative commitments. Repeat until you succeed. This will only work with participants who are sufficiently risk-tolerant (angel investors, students, people open to experience, etc.), but for these, it can create enough positive reinforcement that the endeavor takes off.
So is this simulacrum level 3, or is it something different, or a combination?
↑ comment by johnswentworth · 2021-01-26T21:29:43.829Z · LW(p) · GW(p)
In this case, I'd guess that for many parties, signing an assurance contract is about as expensive up-front as signing a contract, since they need to set aside resources to fulfill their commitments - e.g. banks need to earmark some cash, a contractor needs to not accept alternative jobs, or a renter needs to not sign another lease.
↑ comment by Raemon · 2021-01-26T23:39:54.871Z · LW(p) · GW(p)
Mmm. So maybe part of the thing is the contract needs to be an exploding contract (i.e. gives everyone maybe a week to read and sign, so they don't need to tie up their resources too long), but then also exploding contracts are super annoying and everyone hates them and also institutions literally can't move that fast. So you're back to square one.
↑ comment by romeostevensit · 2021-01-26T23:54:18.506Z · LW(p) · GW(p)
+1 and I like that this provides an intuition for why high energy bluster would be so effective here. Creates a sense of 'things happen around this person, they'll get it done.'
↑ comment by ReverendBayes (vedernikov-andrei) · 2022-05-12T12:34:27.325Z · LW(p) · GW(p)
I think that "blatant lies" case applied for newcomer real estate developers (RED); but well known RED can use their reputation and common knowledge about this reputation as part of simulacrum-3.
If there is common knowledge about many previous successes of this particular RED, then the bank can believe that everyone else will hunt this stag. Thus, the bank will sign the papers.
But newcomer RED can't afford just to pretend to pretend that everyone else is hunting the stag. The bank will not believe that RED's intention to build the skyscraper will be sufficient to convince other parties to hunt the stag. The RED must pretend that other parties already signed the papers to convince the bank to hunt the stag.
So, without good reputation, you must use blatant lies (S2) to convince people to hunt the stag. But having such reputation (and common knowledge about it) you can just pretend to pretend that you stag is the best choice (S3) - people will gladly join your hunting.
comment by Vanessa Kosoy (vanessa-kosoy) · 2021-01-27T00:27:47.527Z · LW(p) · GW(p)
I think that all of ethics works like this: we pretend to be more altruistic / intrinsically pro-social than we actually are, even to ourselves [LW · GW]. And then there are situations like battle of the sexes, where we negotiate the Nash equilibrium while pretending it is a debate about something objective that we call "morality".
comment by Raemon · 2022-12-10T22:16:21.519Z · LW(p) · GW(p)
This gave a satisfying "click" of how the Simulacra and Staghunt concepts fit together.
Things I would consider changing:
1. Lion Parable. In the comments, John expands on this post with a parable about lion-hunters who believe in "magical protection against lions." That parable is actually what I normally think of when I think of this post, and I was sad to learn it wasn't actually in the post. I'd add it in, maybe as the opening example.
2. Do we actually need the word "simulacrum 3"? Something on my mind since last year's review is: how much work are the words "simulacra" doing for us? I feel vaguely like I learned something from Simulacra Levels and their Interactions [LW · GW], but the concept still feels overly complicated as a dependency to explain new concepts. If I read this post in the wild without having spent a while grokking Simulacra, I think I'd find it pretty confusing.
But, meanwhile, the original sequences talked about "belief in belief [LW · GW]". I think that's still a required dependency here, but a) Belief in Belief is a shorter post, and b) I think this post plus the literal words "belief in belief" help grok the concept in the first place.
On the flipside, I think the Simulacra concept does help point towards an overall worldview about what's going on in society, in a gnarlier way than belief-in-belief communicates. I'm confused here.
Important Context
A background thing in my mind whenever I read one of these coordination posts is an older John post: From Personal to Prison Gangs [LW · GW]. We've got Belief-in-Belief/Simulacra3 as Stag Hunt strategies. Cool. They still involve... like, falsehoods and confusion and self-deception. Surely we shouldn't have to rely on that?
My hope is yes, someday. But I don't know how to reliably do it at scale yet. I want to just quote the end of the prison gangs piece:
Of course, all of these examples share one critical positive feature: they scale. That’s the whole reason things changed in the first place - we needed systems which could scale up beyond personal relationships and reputation.
This brings us to the takeaway: what should you do if you want to change these things? Perhaps you want a society with less credentialism, regulation, stereotyping, tribalism, etc. Maybe you like some of these things but not others. Regardless, surely there’s something somewhere on that list you’re less than happy about.
The first takeaway is that these are not primarily political issues. The changes were driven by technology and economics, which created a broader social graph with fewer repeated interactions. Political action is unlikely to reverse any of these changes; the equilibrium has shifted, and any policy change would be fighting gravity. Even if employers were outlawed from making hiring decisions based on college degree, they’d find some work-around which amounted to the same thing. Even if the entire federal register disappeared overnight, de-facto industry regulatory bodies would pop up. And so forth.
So if we want to e.g. reduce regulation, we should first focus on the underlying socioeconomic problem: fewer interactions. A world of Amazon and Walmart, where every consumer faces decisions between a million different products, is inevitably a world where consumers do not know producers very well. There’s just too many products and companies to keep track of the reputation of each. To reduce regulation, first focus on solving that problem, scalably. Think amazon reviews - it’s an imperfect system, but it’s far more flexible and efficient than formal regulation, and it scales.
Now for the real problem: online reviews are literally the only example I could come up with where technology offers a way to scale-up reputation-based systems, and maybe someday roll back centralized control structures or group identities. How can we solve these sorts of problems more generally? Please comment if you have ideas.
comment by Elizabeth (pktechgirl) · 2023-01-16T03:35:33.734Z · LW(p) · GW(p)
Most of the writing on simulacrum levels has left me feeling less able to reason about them, as though they were too evil to contemplate. This post engaged with them as one fact in the world among many, which was already an improvement. I've found myself referring to this idea several times over the last two years, and it left me more alert to looking for other explanations in this class.
comment by Yoav Ravid · 2021-01-27T12:13:00.790Z · LW(p) · GW(p)
Thanks, this is a fantastic post. It takes a problem in game theory, shows an unusual way of solving it in the real world, shows an example of it, and then explains away the added complexity, demonstrating that it all adds up to normality [? · GW] by showing that it doesn't require the actors to understand that this is what they're doing - thus giving a full Gears-Level [? · GW] understanding of the issue. Strongly upvoted.
comment by Gordon Seidoh Worley (gworley) · 2021-01-27T01:53:25.681Z · LW(p) · GW(p)
As a society, we've actually codified some of these ideas in advice idioms like "fake it till you make it". We've also codified the equal and opposite advice, e.g. "you can fool all the people some of the time and some of the people all the time, but you cannot fool all the people all the time".
This means we can not only engage in this kind of coordination behavior without understanding the mechanism - we have generalized the coordination mechanism to apply across domains, such that no deeper understanding is needed. You don't even need to notice that the strategy works; you only need to have adopted a norm, already applied in multiple domains, that allows you to coordinate without realizing it.
comment by TurnTrout · 2021-01-29T21:03:01.731Z · LW(p) · GW(p)
So: we make our case for Stag, try to convince people it’s the obviously-correct choice no matter what. And… they’re not fooled. But they all pretend to be fooled. And they all look around at each other, see everyone else also pretending to be fooled, and deduce that everyone else will therefore choose Stag. And if everyone else is choosing Stag… well then, Stag actually is the obvious choice. Just like that, Stag becomes the new Schelling point.
This seems like it could be easier for certain kinds of people than for others. One might want to insist to the group that yes, I know that Stag isn't always right, that's silly, and before you know it, you've shattered any hope of reaching Stag-Schelling via this method.
↑ comment by johnswentworth · 2021-01-29T22:30:35.848Z · LW(p) · GW(p)
See also: Why Our Kind Can't Cooperate [LW · GW].
If one of the main coordination mechanisms used by humans in practice is this simulacrum-3 pretend-to-pretend trick, and rationalists generally stick to simulacrum-1 literal truth and even proactively avoid any hints of simulacrum-3, then a priori we'd expect rationalists to be unusually bad at cooperating.
If we want to close that coordination gap, our kind are left with two choices:
- play the simulacrum-3 game (at the cost of probably losing some of our largest relative advantages)
- find some other way to coordinate (which is liable to be Hard)
I think it ultimately has to be the latter - ancestral human coordination mechanisms are already breaking down as they scale up (see e.g. Personal to Prison Gangs [LW · GW], Mazes [? · GW]), and the failure modes are largely due directly to the costs of simulacrum-3 (i.e. losing entanglement with reality), so it's a problem which needs to be solved one way or the other.
(Also, it's a problem essentially isomorphic to various technical AI alignment problems.)
comment by agc · 2021-01-27T09:53:47.790Z · LW(p) · GW(p)
I haven't seen a strong argument that "stag hunt" is a good model for reality. If you need seven people to hunt stag, the answer isn't to have seven totally committed people, who never get ill, have other things to do, or just don't feel like it. I'd rather have ten people who are 90% committed, and be ready to switch to rabbit the few days when only six show up.
↑ comment by abramdemski · 2021-01-27T18:47:46.838Z · LW(p) · GW(p)
I agree with the thrust of John's response, i.e., the stag hunt is a stand-in for a more general class of coordination problems, pointing at the property that there are multiple equilibria, and some are Pareto improvements over others. The stag hunt is kind of the minimal example of this, in that it only has 2 equilibria. I generally agree that using Stag Hunt unfortunately may connote other properties, such as this only-2-equilibria property.
However, it seems to me that you think real coordination problems almost never have this all-or-nothing flavor. I disagree. Yes, it's rarely going to be literally two equilibria. However, there aren't always going to be solutions of the sort you mention.
For example, if your family is having a day out and deciding where to eat, and Uncle Mortimer categorically refuses to go to the Italian place everyone else wants to go to, often the family would prefer to follow Mortimer to some other place, rather than letting him eat on his own. Everyone but Mortimer eating at the Italian place is seen as a "failed family day" -- each person individually could go to the Italian place on any day (this is like hunting rabbit), but they wanted to do something special today: a family meal.
Many cases where people take a vote are like this; particularly in cases where you vote yes/no to pass a resolution / go forward with a plan (rather than vote between a number of alternatives). An organization might be able to handle one or two solid refusals to go along with the vote outcome, but a sizable minority would be a serious problem. The reason people are willing to go along with the outcome of a vote they don't agree with is because the organization remaining coherent is like stag, and everyone doing their own thing is like rabbit.
↑ comment by johnswentworth · 2021-01-27T17:18:51.266Z · LW(p) · GW(p)
On paper, I basically agree with this. In practice, people (at least in this community) mostly seem to use stag hunt as a toy-model stand-in for games with increasing returns on number of players "hunting stag", which is quite a bit more general than the pure stag hunt reward function. For that purpose, it is a useful model; the key qualitative insights do generalize.
comment by abramdemski · 2021-01-27T18:11:24.244Z · LW(p) · GW(p)
In particular, we should expect this sort of thing in industries with increasing marginal returns on investment.
My model of Zvi doesn't think this is a precondition at all (for moral maze development, at least).
I think this is because management is always involved in a stag hunt of one sort or another, regardless of whether the industry overall is stag-hunty, because producing anything is a delicate coordination problem.
↑ comment by johnswentworth · 2021-01-27T19:25:48.547Z · LW(p) · GW(p)
I agree with this - Gunnar's real estate developer example is a case in point. Increasing marginal returns on investment are a sufficient condition, not a necessary one.
It does seem like maziness occurs to greater or lesser extent in different industries, and I'd guess that extent-to-which-industries-are-bottlenecked-on-Schelling/stag-hunt-style-coordination is the main predictor of that. On the other hand, coordination bottlenecks are often tough to see from the outside, so I'd really like a more-easily-testable criterion which predicts when coordination bottlenecks are likely to dominate.
comment by Drake Morrison (Leviad) · 2022-12-10T21:16:40.771Z · LW(p) · GW(p)
A great explanation of something I've felt, but not been able to articulate. Connecting the ideas of Stag-Hunt, Coordination problems, and simulacrum levels is a great insight that has paid dividends as an explanatory tool.
comment by mruwnik · 2022-12-10T14:54:16.950Z · LW(p) · GW(p)
This seems like a good description of what late-stage communism looked like in Poland. Everyone knew that communism was rubbish - the best anticommunist jokes came from party members - but "ambitious" people kept on professing how much they believed in how well it worked and how much they were helping the poor oppressed workers, since that was the way to advance.
An additional bonus is that those on other simulacrum levels are actively selected against - honestly believing in communism would result in you getting shunned, as you're then playing a different game.