Norm Innovation and Theory of Mind
post by Raemon · 2021-09-18T21:38:04.379Z · LW · GW · 15 comments
Disclaimer: this was the first concept that led to me thinking about the coordination frontier. But I think something on the frame here feels subtly off. I decided to go ahead and post it – I'm pretty sure I believe all the words here. But not 100% sure this is the best way to think about the norm-negotiation problems.
Last post was about coordination schemes [? · GW]. Today’s post is about a subset of coordination schemes: norms, and norm enforcement.
The internet is full of people unilaterally enforcing new norms on each other, often based on completely different worldviews. Many people have (rightly, IMO) developed a defensiveness about being accused of things they don’t think are wrong.
Nonetheless, if society is to improve, it may be useful to invent (and enforce) new norms. What's a good way to go about that?
Ideally, I think people discuss new norms with each other before starting to enforce them. Bring them up at town hall. Write a thoughtful essay and get people to critique it or discuss potential improvements.
But often, norm-conflict comes up suddenly and confusingly. Someone violates what you thought was a foundational norm of your social circle, and you casually say “hey, you just did X”. And they’re like “yeah?” and you’re flabbergasted that they’re just casually violating what you assumed was an obvious pillar of society.
This is tricky even in the best of circumstances. You thought you could rely on a group following Norm X, and then it turns out if you want Norm X you have to advocate it yourself.
It’s even more tricky when multiple people are trying to introduce new norms at once.
Multiplayer Norm Innovation
Imagine you have Alice, Bob, Charlie and Doofus, who all agree that you shouldn’t steal from or lie to the ingroup, and you shouldn’t murder anyone, ingroup or outgroup.
(Note the distinction between ingroups and outgroups [LW · GW], which matters quite a bit).
Alice, Bob, and Charlie also all agree that you should (ideally) aim to have a robust set of coordination meta-principles. But, they don’t know much about what that means. (Doofus has no such aspirations. Sorry about your name, Doofus, this essay is opinionated)
One day Alice comes to believe: “Not only should you not lie to the ingroup, you also shouldn’t use misleading arguments or cherry picked statistics to manipulate the ingroup.”
Around the same time, Bob comes to believe: “Not only should you not steal from the ingroup, you also shouldn’t steal from the outgroup.” Trade is much more valuable than stealing cattle. Bob begins trying to convince people of this using misleading arguments and bad statistics.
Alice tells Bob “Hey, you shouldn’t use misleading arguments to persuade the ingroup of things because it harms our ability to coordinate.”
This argument makes perfect sense to Alice.
The next day, Bob makes another misleading argument to the ingroup.
Alice says “What the hell, Bob?”
The day after that, Bob catches Alice stealing cattle from their rivals across the river, and says “What the hell, Alice, didn’t you read my blogpost on why outgroup-theft is bad?”
Someday, I would like to have a principled answer to the question “What is the best way for all of these characters to interact?” In this post, I’d like to focus on one aspect of why the problem is hard.
Disclaimer: This example probably doesn't represent a coherent world. Clean examples be hard, yo.
Theory of Mind
The Sally–Anne test is a psychological tool for studying how children develop theory of mind. A child is told a story about Sally and Anne. Sally has a marble. She puts it in her basket, and then leaves. While she’s away, her friend Anne takes the marble and hides it in another basket.
The child is asked “When Sally returns, where does she think her marble is?”
Very young children incorrectly answer “Sally will think the marble is in Anne’s basket.” The child-subject knows that Anne took the marble, and they don’t yet have the ability to model that Sally has different beliefs than they do.
Older children correctly answer the question. They have developed theory of mind.
“What the hell, Bob?”
When Alice says “what the hell, Bob?”, I think she's (sometimes) failing a more advanced theory of mind test.
Alice knows she told Bob “Hey, you shouldn’t use misleading arguments to persuade the ingroup of things because it harms our ability to coordinate.” This seemed like a complete explanation. But she is mismodeling a) how many assumptions she swept under the rug, and b) how hard it is to learn a new concept in the first place.
Sometimes the failure is even worse than that. Maybe Alice told Bob the argument. But then she runs into Bob’s friend, Charlie, who is also making misleading arguments, and she doesn’t even think to check if Charlie has been exposed to the argument at all. And she gets mad at Charlie, and then Charlie gets frustrated for getting called out on a behavior he’s never even thought of before.
I’ve personally been the guy getting frustrated that nobody else is following “the obvious norms”, when I had never even told anyone the norm, let alone argued for it. It just seemed to obviously follow from my background information.
Assuming Logical Omniscience
There are several problems all feeding into each other here. The first several problems are variations on “Inferential distance [LW · GW] is a way bigger deal than you think”, like:
- Alice expects she can explain something once in 5 minutes and it should basically work. But, if you’re introducing a new way of thinking, it might take years [LW · GW] to resolve a disagreement, because…
- Alice’s claims are obvious to her within her model of the world. But, her frame might have lots of assumptions [LW · GW] that aren’t obvious to others.
- Alice may have initially explained her idea poorly, and Bob wrote her off as not-worth-listening to. (Idea Inoculation + Inferential Distance)
- Alice has spent tons of time thinking about how bad it is to make misleading arguments, to the point where it feels obviously wrong and distasteful to her [LW · GW]. Bob has not done that, and Alice is having a hard time modeling Bob. She keeps expecting that aesthetic distaste to be present, and relying on it to do some rhetorical work that it doesn’t do.
- Much of this is also present in the other direction. Bob is really preoccupied with getting people to stop stealing things; it seems obviously important to him, since right now there’s an equilibrium where everyone is getting stolen from all the time. When Alice argues about being extra careful with arguments, Bob feels like she has a missing mood, like she doesn’t understand why the equilibrium of theft is urgent. And that is downstream of Bob similarly underestimating the inferential gulf about why stealing your rival’s cattle is limiting economic growth.
This all gets more complex when things have been going on for a while. Alice and Bob both come to a (plausibly) reasonable belief that “Surely, I have made the case well enough that outgroup-theft/misleading-arguments are bad.” They might even have reasonable evidence for this, because people are making statements like “Theft is bad!” and “Misleading arguments are bad!”.
But, nonetheless, Alice has thought about Misleading Arguments a lot. She is very attuned to them, whereas everyone else has just started paying attention. She has begun thinking multiple steps beyond that – building entire edifices that take the initial claims as basic axioms, exploring deep into the coordination frontier, along different directions. Bob is having a similar experience re: Theft.
So they constantly see people take actions that look to them like straightforward defections, of the kind they believe others have opted into being called out on, when in fact those judgments require additional inferential steps that are not yet common knowledge, let alone consensus.
Attention, Mistrust, and Stag Hunts
Meanwhile, another problem here is that, even if Bob and Alice take each other’s claims seriously, they might live in a world where lots of people are proposing norms.
Some of those norms are actively bad.
Some people are wielding norm-pushing as a weapon to gain social status or win political fights. (Even the people pushing good norms).
Some of the norms are good, but you can only prioritize so many new norms at once. Even people nominally on the same side may have different conceptions of what ingroup boundaries they are trying to draw, what standards they are trying to uphold, and whether a given degree of virtue is positive or negative [LW · GW] for their ingroup.
People often model new norms as a stag hunt – if only we all pitched in to create a new societal expectation, we'd reap benefits from our collective action. Unfortunately, most stag hunts are actually Schelling coordination games [LW · GW] – the question is not "stag or no?", it's "which of the millions of stags are we even trying to kill?"
This all adds up to the unfortunate fact that the Schelling choice is rabbit, not stag [LW · GW].
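(To make the payoff structure concrete, here's a toy sketch in Python. The specific payoffs and the one-million-norms count are illustrative assumptions of mine, not numbers from anywhere; the point is just that once "stag" fragments into many incompatible candidates, rabbit wins in expectation.)

```python
import random

# Toy stag hunt: hunting stag together beats rabbit, but hunting a stag
# alone is worse than rabbit. Payoffs are illustrative: stag+stag = 4,
# rabbit = 2 no matter what, lone stag = 0.

NUM_NORMS = 1_000_000  # "millions of stags": candidate norms you might coordinate on

def payoff(my_choice, other_choice):
    """My payoff; a choice is either "rabbit" or a norm id (an int)."""
    if my_choice == "rabbit":
        return 2
    # A stag only pays off if the other player picked the *same* stag/norm.
    return 4 if my_choice == other_choice else 0

# With a single shared stag, stag/stag (4) beats rabbit (2), and coordination
# is plausible. With a million candidate stags picked independently, the
# expected stag payoff is roughly 4 / NUM_NORMS, far below rabbit's sure 2.
me = random.randrange(NUM_NORMS)
other = random.randrange(NUM_NORMS)
print("my stag attempt:", payoff(me, other), "| rabbit:", payoff("rabbit", other))
```

On these (made-up) numbers, rabbit isn't timidity; it's the rational response to not knowing which stag anyone else is hunting, which is why narrowing the candidate set matters more than exhorting people to hunt.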
Attention resources are scarce. Not many people are paying attention to any given Overton-window fight. People get exhausted by having too many Overton fights in a row. Within a single dispute, people have limited bandwidth before figuring out the optimal choice stops seeming worth the cost.
So when someone shows up promoting a new norm, there's a lot of genuine reason to be skeptical and react defensively.
Takeaways
This essay may seem kinda pessimistic about establishing new norms. But overall I think new norms are pretty important.
Once upon a time, we didn't have norms against stealing from the outgroup. Over time, we somehow got that norm, and it allowed us to reap massive gains through trade. The real story was obviously nowhere near as simple as Bob's. Maybe people started with some incidental trade, and the norm developed in fits and starts after-the-fact. Maybe merchants (who stood to benefit from the norm) actively promoted it in a self-interested fashion. Or, maybe ancient civilizations handled this largely by redefining ingroups. But somehow or other we got from there to here.
Once upon a time, we didn't even have statistics, let alone norms against misusing them to mislead people. Much of society is still statistically illiterate, so it's a hard norm to apply in all contexts. Shared use of statistics is a coordination scheme [LW · GW], which civilization is still in the process of capitally-investing-in.
Part of the point of having intellectual communities is to get on the same page about novel ways we can defect on the epistemic commons. So that we can learn not to. So we can push the coordination frontier forward.
(Or, with a more positive spin: part of the point of dedicated communities is to develop new positive skills and habits we can gain, where we can benefit tremendously if lots of people in a network share those skills.)
But this is tricky, because people might have conceptual disagreements about what the norm even means. (Among people who care about statistics, there are disagreements about how to use them properly. I recently observed an honest-to-goodness fight between a frequentist and a Bayesian that drove this point home.)
Multiplayer Norm Pioneering is legitimately hard
If you're the sort of person who's proactively looking for better societal norms, you should expect to constantly be running into people not understanding you. The more steps you are beyond the coordination baseline [LW · GW], the less agreement with your policies you should expect.
If you're in a community of people who are collectively trying to push the coordination frontier forward via new norms, you should expect to constantly be pushing it in different directions, resulting in misunderstandings. This can be a significant source of friction even when everyone involved is well intentioned, trying to cooperate. Part of that friction stems from the fact that we can't reliably tell who is trying to cooperate in improving the culture, and who is trying to get away with stuff.
I have some sense that there are good practices norm-pioneers can adopt that make it easier to interact with each other. Ideally, when people who are trying to push society forward run into conflict with each other, they'd have a set of tools for resolving that conflict as efficiently as possible.
I have some thoughts on how to navigate all this. But each of my thoughts ended up looking suspiciously like "here's a new norm", and I was wary of muddling this meta-level post with object level arguments.
For now, I just want to leave people with the point that developing new norms creates inferential gaps. Efficient coordination generally requires people to be on the same page about what they're coordinating on. It feels tractable to me to get on some meta-level cooperation among norm-pioneers, but exactly how to go about it feels like an unsolved problem.
15 comments
comment by lsusr · 2021-09-19T06:47:39.964Z · LW(p) · GW(p)
[Raemon is] not 100% sure this is the best way to think about the norm-negotiation problems.
I think about norms very differently. I try not to think about them as abstractions [LW · GW] too much. I put them into a historical and geographical context [LW · GW] whenever possible.
Once upon a time, we didn't have norms against stealing from the outgroup. Over time, we somehow got that norm, and it allowed us to reap massive gains through trade.
What makes you think the causation went this direction? To me, the Shimonoseki campaign of 1863 and 1864 (and Western imperial mercantilism in general) is evidence that the massive gains through trade happened before norms against stealing from the outgroup. The Unequal Treaties (created to promote trade) were such blatant theft that they're called "the Unequal Treaties" for a reason. If you're unfamiliar with the history of the Meiji Restoration then more well-known historical examples include the Atlantic Slave Trade and the Opium Wars [LW · GW].
In other words, I think of social norms as strategies downstream of technological, economic, social and political forces. This doesn't mean small groups of innovators can't make a difference. But I think they're like entrepreneurs surfing a wave of change. Someone was going to harness the potential energy eventually. The people who get credit for establishing norms just happened to do it first. They sided with Moloch.
Small adjustments within the Overton window can sometimes be applied to existing institutions. However, I would be surprised if radically new norms could be established by modifying existing institutions, by anyone other than a founder. The way to establish radically new norms is to create small, brand-new institutions [LW · GW]. If the norms are good (in the Darwinian sense) then they will find a niche (or even outcompete [LW · GW] existing institutions). If the norms are ineffective then survival of the fittest kills them with minimum damage to the rest of society. Without small-scale empirical testing, the norms that win are determined by the random political fashions of the day.
What's the shortest joke in history?
Communism.
What's the longest joke in history?
The Five-Year-Plan.
↑ comment by Raemon · 2021-09-19T21:08:39.032Z · LW(p) · GW(p)
To reply more substantively on "what's the point of this sequence?"
It's mostly not to explain how norms evolve over millennia.
It's to look at "okay, we in the rationality community where we have some shared structures for thinking and deciding, how can we do better here?".
A lot of my posts here are a response to what we seem to be doing by default, which is mostly an elaborate way of saying "please stop". In some cases I'm saying "I think we can actually do well here using our rationalist frameworks", but there are specific, constrained ways I think it is possible to do well, and people weren't thinking about the constraints.
i.e. by default, I observe people fighting over norms in vaguely defined social clusters, which seems ill-advised. Vaguely defined social clusters are porous; people come and go. For norm experimentation, I think it's really important to have small groups where you can heavily filter people, and have high-fidelity communication. (which I think means you and I are on the same page)
I think there are other non-norm coordination systems that can scale in other ways. Developing microcovid.org is an example, as is the LessWrong Review system, as is developing new grantmaking-body-procedures.
↑ comment by Raemon · 2021-09-19T19:41:18.320Z · LW(p) · GW(p)
Addressing object level:
What makes you think the causation went this direction? To me, the Shimonoseki campaign of 1863 and 1864 (and Western imperial mercantalism in general) is evidence that the massive gains through trade happened before norms against stealing from the outgroup. If you're unfamiliar with the history of the Meiji Restoration then more well-known historical examples include the Atlantic Slave Trade and the Opium Wars.
My actual guess is that this happened incrementally over millennia.
I'm not super informed on the history here (feel free to correct or add nuance). But I assume by the time you've gotten to the Meiji Restoration, the Western Imperialists have already gone through several layers of "don't steal the outgroup" expansion, probably starting with small tribes that sometimes traded incidentally, growing into the first cities, and larger nations. And part of the reason the West is able to bring overwhelming force to bear is because they've already gotten into an equilibrium where they can reap massive gains from internal trade (between groups that once were outgroups to be stolen from)
I also vaguely recall (citation needed) that Western European nations sort of carved up various third world countries among themselves with some degree of diplomacy, where each European nation was still mostly an "outgroup" to the others, but they had some incremental gentleman's agreements that allowed them to be internally coordinated enough to avoid some conflict.
(How much of this to attribute to coordination vs technological happenstance vs disease, etc, is still debated a bunch)
↑ comment by lsusr · 2021-09-19T20:30:09.848Z · LW(p) · GW(p)
I think the conversion of France into a nation-state is representative of the Western imperial process in general. (Conquest is fractal.) Initially the ingroup was Paris and the outgroup was the French countryside. The government in Paris forced the outgroup to speak Parisian French. Only after the systematic extermination of their native culture and languages did the French bumpkins get acknowledged as ingroup by the Parisians. In other words, the outgroup was forcibly converted into more ingroup (and lower-class ingroup at that). This process was not unlike the forced education of Native Americans in the United States.
It is true that the expansion of polities from small villages to globe-spanning empires happened over millennia. But I think it's a mistake to treat this process as having anything to do with recognizing the rights of the outgroup. There was never a taboo against stealing from the outgroup. Rather, the process was all about forcibly erasing the outgroup's culture to turn them into additional ingroup. Only after the people of an outgroup were digested into the ingroup were you forbidden from stealing from them. The reason the process took thousands of years is because that's how long it took to develop the technology (writing, ships, roads, horses, bullets, schools, telephones) necessary to manage a large empire.
There's a big difference between recognizing the rights of Christians before versus after you force them to convert to Islam—or the rights of savages before versus after they learn English.
I also vaguely recall (citation needed) that Western European nations sort of carved up various third world countries among themselves with some degree of diplomacy, where each European nation was still mostly an "outgroup" to the others, but they had some incremental gentleman's agreements that allowed them to be internally coordinated enough to avoid some conflict.
It is true that the outgroup was sometimes respected, such as the French not wanting to provoke a conflict with the British, but the gentlemen's agreements between European powers were not rooted in universal human values. They existed because the outgroup had a powerful army and navy. The European empires enthusiastically stole from each other when they could. [LW · GW]
Another tool the Western imperial powers used to coordinate against weaker countries was Most Favored Nation status, which was part of the Unequal Treaties.
↑ comment by Raemon · 2021-09-19T07:09:56.979Z · LW(p) · GW(p)
What makes you think the causation went this direction?
I meant your point here to be implied by:
Maybe people started with some incidental trade, and the norm developed in fits and starts after-the-fact.
But, you are noticing something like "I started writing this post like 3 years ago. I crystalized much of the current draft 9 months ago. I noticed as I tried to put the finishing touches on it that something felt subtly off, but then decided 'screw it, ship it', rather than letting it sit in limbo forever." My attempt to tack on a slightly more realistic understanding in the concluding section is indeed inharmonious with the rest of it.
I probably have two different replies addressing your object level point, and the broader point about how this overall sequence fits together.
comment by Emrik (Emrik North) · 2021-09-23T11:02:44.244Z · LW(p) · GW(p)
I'm loving this Sequence so far. I'd really like to see a list of all the concrete norm innovations you can think of that you'd like to see tried in the community. I realise some norms aren't very concrete and easy to put down on paper, but I'd like an as-comprehensive-as-possible list anyway.
One for the list: Impact certificates [? · GW].
comment by Elizabeth (pktechgirl) · 2022-12-14T09:37:02.934Z · LW(p) · GW(p)
I cite the ideas in this piece (especially "you're trying to coordinate unilaterally and that's gonna fail") a lot. I do think Raemon's thoughts have clarified since then and ideally he'd do a substantial edit, but I'm glad the thoughts got written down at all.
comment by jimrandomh · 2021-09-19T06:11:13.901Z · LW(p) · GW(p)
People often model new norms as a stag hunt – if only we all pitched in to create a new societal expectation, we'd reap benefits from our collective action.
I think this is wrong, because it restricts the scope of what counts as a "norm" to only cover things that affect misaligned components of people's utility functions. If a norm is the claim that some category of behavior is better than some other category of behavior according to a shared utility function with no game-theoretic flavor to it, then anyone who fully understands the situation is already incentivized to follow the norm unilaterally, so it isn't a stag hunt.
↑ comment by Raemon · 2021-09-19T21:17:35.405Z · LW(p) · GW(p)
Seems probably true that there's non-game-theoretic-flavored norms, and this post is mostly not looking at those. (I'm not 100% sure whether I'd call those norms, but that seems more like a semantic discussion and I don't have a strong opinion about it)
Even for game-theoretic-flavored-norms, I actually do think the solution to some of the problems this post is hinting at (and previously discussed in The Schelling Choice is Rabbit [LW · GW]), is to look for norms/habits/actions that are locally beneficial in single-player mode, but happen to have nice flow-through effects when multiple people are doing them and can start to build into something greater than the sum of their parts.
That said:
I think this is wrong
Fwiw I think "this is wrong" doesn't feel quite right as a characterization of my sentence, since it comes with the hedge "people 'often' model new norms as a stag hunt", which neither claims that this happens all the time (or even a majority of the time), nor that it's necessary.
comment by davetolen · 2024-12-08T15:54:08.654Z · LW(p) · GW(p)
As I neophyte to all of this, I may be missing something. But I immediately saw a real world example of this in the recent controversy over DEI initiatives at the University of Michigan. As a case study it seems to include examples of every concept in this article. New norms are being created, people begin to have an implicit understanding of them, those implicit understandings get violated leading to unresolved conflict. Conflict about DEI is then being coopted for political gain.