Transitive Tolerance Means Intolerance

post by orthonormal · 2021-08-14T17:52:26.849Z · LW · GW · 13 comments

Our society is pretty messed up around arguments about whose ideas we should and shouldn't tolerate. Some of this is inevitable: even without censorship, there are cases where group X can choose to actively show respect to person Y, and members of X will argue about that, and people with any influence over members of X may try to sway the decision too.

Of course, the actual kinds of conflicts in our world are... less tame than the above example. Troublingly, people lose jobs* for saying things that a supermajority of Americans find inoffensive, both on the left and the right. 

You don't need me to tell you that things are bad. I do think I can point out how some of this is a consequence of the natural impulse to judge people by their friends, turned corrosive by the assumption of transitivity.


Transitivity is the property where if A relates in a certain way to B, and B relates in that same way to C, then A relates in that same way to C. For instance, if Alex is shorter than Beth, and Beth is shorter than Chris, then Alex is shorter than Chris.

Not all relations are transitive. If Alex is Beth's cousin, and Beth is Chris's cousin, it doesn't follow that Alex and Chris are cousins: Beth could share one set of grandparents with Alex, and the other set with Chris.
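
In symbols (my notation, not the post's), a relation R on a set S is transitive when:

```latex
\forall\, a, b, c \in S:\quad R(a,b) \,\wedge\, R(b,c) \implies R(a,c)
% "is shorter than" satisfies this; "is a cousin of" does not.
```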

Pivoting back to toleration, we begin with the idea of guilt by association, which we rightly exclude from legal consideration [LW · GW], but which is still pretty good Bayesian evidence. A person who chums around with the Mafia might not be a mafioso themselves, but they're more likely to be one than a random person is.

Similarly for people who proclaim ideas: a person who associates with an X-sayer is more likely to believe X than a random person.

Where this goes horribly wrong is when toleration is assumed to be transitive.

In reality, if X associates with Y who associates with Z, that doesn't mean X associates with Z, or knows of/cares about/approves of Z. Y could be in a D&D group with X while volunteering with Z, or whatever.

But if our social rules treat toleration as fully transitive, then as soon as Z says something awful, X is contaminated by it. In fact, X needs to quickly ditch/denounce Y in order to avoid the contagion, and sometimes even that won't work.
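
A tiny sketch of that contagion (my own illustration, using the hypothetical names from the post): take the transitive closure of a sparse "tolerates" relation, and X ends up linked to Z.

```python
# A small illustration (mine, not the author's) of how assumed transitivity
# spreads contamination: closing a sparse "tolerates" relation under
# transitivity links X to Z even though they never associated directly.
from itertools import product

def transitive_closure(edges):
    """Naive transitive closure of a binary relation given as (a, b) pairs."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b), (c, d) in product(tuple(closure), repeat=2):
            if b == c and (a, d) not in closure:
                closure.add((a, d))
                changed = True
    return closure

tolerates = {("X", "Y"), ("Y", "Z")}  # X tolerates Y; Y tolerates Z
print(sorted(transitive_closure(tolerates)))
# [('X', 'Y'), ('X', 'Z'), ('Y', 'Z')] -- under assumed transitivity,
# X now "tolerates" Z and is contaminated by whatever Z says.
```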

(I may be getting the details wrong, but I recall a case where A was denounced for saying nice things about B... who had once appeared on C's podcast... which also had D on at some point... who years after that podcast had started saying absolutely reprehensible things.)

At its most extreme level, this contagion spreads to all of society except for the few people who agree completely with you (or are scared enough to tell you they agree completely). Anyone who defends a witch must themselves be a witch, and by the transitive property, everyone else is a witch. The principle of assumed transitive toleration has left you in a bitter, disconnected molecule drifting in a sea of total intolerance.


Now, there is some level of Bayesian evidence you get from multi-step toleration. But it diminishes sharply the further out you get.

If X explicitly talks about wanting the USA to be invaded and taken over by Canada, and Y tolerates X, then Y is probably at least a Canada-conquest sympathizer/apologist. But if Z tolerates Y, then I wouldn't be as sure about Z's politics. I find it unlikely that Z has a high opinion of American self-rule (why would they then tolerate Y?), but it's not that likely that they're as enthusiastic about Alberta-annexation as X. And so on.
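
Here's a toy decay model of that intuition (my own, purely illustrative; both numbers are made-up assumptions, not anything from the post): suppose each toleration link transmits only a fraction of the "excess" probability of believing X.

```python
# A toy model of how the evidence from multi-step toleration decays.
# Assume a population base rate of believing X, and that each toleration
# hop transmits only a fraction K of the excess probability of belief.
BASE = 0.01  # assumed base rate of believing X in the general population
K = 0.3      # assumed fraction of excess belief probability per hop

def p_believes(distance: int) -> float:
    """P(believes X) for someone `distance` toleration-hops from an X-sayer."""
    return BASE + (K ** distance) * (1.0 - BASE)

for d in range(5):
    print(f"{d} hops: P(believes X) ~ {p_believes(d):.3f}")
# 0 hops: 1.000 (the X-sayer), 1 hop: ~0.307, 2 hops: ~0.099,
# 3 hops: ~0.037, 4 hops: ~0.018 -- nearly back down to the base rate.
```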

In a healthy society there's a six-degrees-of-toleration connection between people with very different politics. I worry that these chains have been growing longer and more fragile, for many reasons. This strains the bonds of liberalism, which (contra Hobbes) has been our best tool for averting the worst forms of violence humanity has to offer.

And the assumed transitivity of tolerance is perhaps both a cause and a symptom of that.

 

* I'm not giving my examples because I don't want to start a political row, but yes it's both. More of it happens behind closed doors than publicly on Twitter, although the latter does have a chilling effect. Please don't focus on this.

13 comments


comment by Wei Dai (Wei_Dai) · 2021-08-14T21:59:05.146Z · LW(p) · GW(p)

What scares me is the realization that moral change mostly doesn't happen via "deliberation" or "reflection" but instead through this kind of tolerance/intolerance, social pressure, implicit/explicit threats, physical coercion, up to war. I guess the way it works is that some small vanguard gets convinced of a new morality through "reason" (in quotes because the reasoning that convinces them is often quite terrible, and I think they're also often motivated by implicit considerations of the benefits of being a moral vanguard), and by being more coordinated than their (initially more numerous) opponents, they can apply pressure/coercion to change some people's minds (their minds respond to the pressure by becoming true believers) and silence others or force them to mouth the party line. The end game is to indoctrinate everyone's kids with little resistance, and the old morality eventually dies off.

It seems to me like liberalism shifted the dynamics towards the softer side (withholding of association/cooperation as opposed to physical coercion/war, tolerance/intolerance instead of hard censorship), but the overall dynamic really isn't that different, in that reason/deliberation/reflection still plays only a minor role in how moral change happens. In other words, life under liberalism is more pleasant in the short run, but it doesn't really do much to ensure long-term moral progress, which I think explains why we're seeing a lot of apparent regress recently.

ETA: Also, to the extent that longtermists and people like me (who think that it's normative to have high moral uncertainty) are not willing to spread our views through these methods, it probably means our views will stay unpopular for a long time.

Replies from: orthonormal, Dagon, Wei_Dai, Jan_Kulveit
comment by orthonormal · 2021-08-15T17:02:07.741Z · LW(p) · GW(p)

I like Scott Alexander's discussion of symmetric vs asymmetric weapons. Symmetric weapons lead to an unceasing battle, which as you said has at least become less directly violent, but whose outcomes are more or less a random walk. But asymmetric weapons pull ever so slightly toward, well, a weakly extrapolated volition of the players on both sides.

Brownian motion plus a small drift term looks just like Brownian motion until you look a long way back and notice the unlikeliness of the trend. The arc of the moral universe is long, etc.
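
One way to put numbers behind that (a standard drift-versus-noise fact, in my notation, not orthonormal's): model the trajectory as Brownian motion with a small drift.

```latex
X_t = \mu t + \sigma W_t
\quad\Longrightarrow\quad
\mathbb{E}[X_t] = \mu t,\qquad
\operatorname{sd}(X_t) = \sigma\sqrt{t},\qquad
\frac{\mathbb{E}[X_t]}{\operatorname{sd}(X_t)} = \frac{\mu}{\sigma}\sqrt{t}.
% The signal-to-noise ratio grows like sqrt(t), so the drift is invisible
% for t << (sigma/mu)^2 and only shows up over a long arc.
```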

(Of course, in this century we very probably don't have the luxury of a long arc...)

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2021-08-15T19:12:29.981Z · LW(p) · GW(p)

That's a good point, but aside from not having the luxury of a long arc, I'm also worried about asymmetric weapons coming online soon that will work in favor of bad ideas instead of good ones, namely AI assisted persuasion and value lock-in. Basically, good ideas should keep their hosts uncertain and probably unwilling to lock in their own values and beliefs or use superintelligent AI to essentially hack other people's minds, but people under the influence of bad ideas probably won't have such compunctions.

ETA: Also, some of the existing weapons are already asymmetric in favor of bad ideas. Namely the more moral certainty you have, the more you're willing to use social pressure / physical coercion to spread your views. This could partly explain why moral uncertainty is so rare.

comment by Dagon · 2021-08-15T00:16:23.802Z · LW(p) · GW(p)

The very concept of moral uncertainty is pretty foreign to the vast majority of humanity.  Tolerance and forbearance wax and wane in popularity, but actually acknowledging that you don't know what's best just doesn't happen. 

Replies from: Kenny
comment by Kenny · 2021-08-15T00:45:18.677Z · LW(p) · GW(p)

Transitivity of association is applied at the level of individuals rather than at the level of their ideas. Most individuals' beliefs and thoughts remain almost static through time, and maybe that is why individual-level transitivity became the default.

As you said, actually being uncertain really doesn't happen, because developing a concrete worldview is important for survival as a person grows up from childhood. Schools certainly don't focus much on uncertainty itself; it has to come from an individual's own willingness to seek out alternatives and develop the habit of an uncertain mindset by reading a bit too much.

Society at large doesn't encourage uncertainty, mainly because it is inefficient to apply on a massive scale: it would lead to too much chaos and misunderstanding, and people wouldn't be able to communicate effectively. Having the luxury to be uncertain is not something most people can afford, and a society where they could would have a very different structure and way of interoperating.

As a result, we apply the association transitivity on the individuals because ideas themselves are too ephemeral.

comment by Wei Dai (Wei_Dai) · 2021-08-15T04:16:52.201Z · LW(p) · GW(p)

As an example of the reasoning of moral vanguards, a few days ago I became curious how the Age of Enlightenment (BTW, did those people know how to market themselves or what?) came about. How did the Enlightenment philosophers conclude (and convince others) that values like individualism, freedom, and equality would be good, given what they knew at the time? Well, judge for yourself. From https://plato.stanford.edu/entries/enlightenment:

However, John Locke’s Second Treatise of Government (1690) is the classical source of modern liberal political theory. In his First Treatise of Government, Locke attacks Robert Filmer’s Patriarcha (1680), which epitomizes the sort of political theory the Enlightenment opposes. Filmer defends the right of kings to exercise absolute authority over their subjects on the basis of the claim that they inherit the authority God vested in Adam at creation. Though Locke’s assertion of the natural freedom and equality of human beings in the Second Treatise is starkly and explicitly opposed to Filmer’s view, it is striking that the cosmology underlying Locke’s assertions is closer to Filmer’s than to Spinoza’s. According to Locke, in order to understand the nature and source of legitimate political authority, we have to understand our relations in the state of nature. Drawing upon the natural law tradition, Locke argues that it is evident to our natural reason that we are all absolutely subject to our Lord and Creator, but that, in relation to each other, we exist naturally in a state of equality “wherein all the power and jurisdiction is reciprocal, no one having more than another” (Second Treatise, §4). We also exist naturally in a condition of freedom, insofar as we may do with ourselves and our possessions as we please, within the constraints of the fundamental law of nature. The law of nature “teaches all mankind … that, being all equal and independent, no one ought to harm another in his life, health, liberty, or possessions” (§6). That we are governed in our natural condition by such a substantive moral law, legislated by God and known to us through our natural reason, implies that the state of nature is not Hobbes’ war of all against all. However, since there is lacking any human authority over all to judge of disputes and enforce the law, it is a condition marred by “inconveniencies”, in which possession of natural freedom, equality and possessions is insecure. According to Locke, we rationally quit this natural condition by contracting together to set over ourselves a political authority, charged with promulgating and enforcing a single, clear set of laws, for the sake of guaranteeing our natural rights, liberties and possessions. The civil, political law, founded ultimately upon the consent of the governed, does not cancel the natural law, according to Locke, but merely serves to draw that law closer. “[T]he law of nature stands as an eternal rule to all men” (§135). Consequently, when established political power violates that law, the people are justified in overthrowing it. Locke’s argument for the right to revolt against a government that opposes the purposes for which legitimate government is founded is taken by some to justify the political revolution in the context of which he writes (the English revolution) and, almost a hundred years later, by others to justify the American revolution as well.

Replies from: cousin_it
comment by cousin_it · 2021-09-20T19:41:11.275Z · LW(p) · GW(p)

I'm pretty happy that we no longer have divine right of kings, though. For most of history, god-monarchies were very prevalent. Somehow Locke and his friends found an attack that worked; it wasn't a small task.

comment by Jan_Kulveit · 2021-08-15T10:16:42.855Z · LW(p) · GW(p)

I often think about such changes as phase transitions on a network.

If we assume that these processes (nucleation of cliques of the new phase, changes in energy at edge boundaries, ...) are independent of the content of the moral change, we can expect the emergence of "fluctuations" of new moral phases. Then the question is which of these fluctuations grow to eventually take over the whole network; from an optimistic perspective, this is where relatively small differences between moral phases, caused by some phases being "actually better", break the symmetry and lead to gradual moral progress.

Stated in other words: if you look at the micro-dynamics of individual edges and nodes, the main terms you see are social pressure, coercion, etc., but the third-order terms representing something like "the goodness of the moral system in the abstract" act as a symmetry-breaking term and have large macroscopic consequences.

Turning to longtermism: network-wise, it seems advantageous for the initial bubble of the new phase to spread to central nodes in the network, which seems broadly in line with what EA is doing. Plausibly, in this phase, reasoning plays a larger role and coercion a smaller one, which is what you see. On the other hand, if longtermism becomes sufficiently large/dominant, I would expect it to become more coercive.
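
A minimal sketch of the kind of dynamic Jan describes (my own toy model, not his actual one): a voter model on a random graph where a small bias `epsilon` stands in for the "actually better" symmetry-breaking term. All names and parameters here are illustrative assumptions.

```python
# Biased voter model on a random graph: each step, a random node copies a
# random neighbor's "moral phase", except that reverting to the incumbent
# phase 0 occasionally fails, with probability epsilon (the symmetry-
# breaking term). Parameters are illustrative, not calibrated to anything.
import random

def random_graph(n, p, rng):
    """Erdos-Renyi G(n, p) as an adjacency list."""
    nbrs = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].append(j)
                nbrs[j].append(i)
    return nbrs

def run(n=200, p=0.05, seed_frac=0.05, epsilon=0.02, max_steps=500_000, seed=0):
    rng = random.Random(seed)
    nbrs = random_graph(n, p, rng)
    # Phase 1 starts as a small "fluctuation"; phase 0 is the incumbent.
    state = [1 if rng.random() < seed_frac else 0 for _ in range(n)]
    count = sum(state)  # number of nodes currently in phase 1
    for _ in range(max_steps):
        if count in (0, n):
            break  # one phase has taken over the whole network
        i = rng.randrange(n)
        if not nbrs[i]:
            continue
        new = state[rng.choice(nbrs[i])]  # copy a random neighbor
        if new == 0 and rng.random() < epsilon:
            continue  # symmetry breaking: reversion to phase 0 sometimes fails
        count += new - state[i]
        state[i] = new
    return count / n

# With epsilon = 0, phase 1 would take over in roughly seed_frac of runs;
# even a small bias raises that substantially.
wins = sum(run(seed=s) == 1.0 for s in range(20))
print(f"phase 1 took over the network in {wins}/20 runs")
```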

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2021-08-15T21:03:41.796Z · LW(p) · GW(p)

I think this is a good way to think about the issues. My main concerns, put into these terms, are

  1. The network could fall into some super-stable moral phase that's wrong or far from best. The stability could be enabled by upcoming tech like AI-enabled value lock-in, persuasion, surveillance.
  2. People will get other powers, like being able to create an astronomical number of minds, while the network is still far from the phase that it will eventually settle down to, and use those powers to do things that will turn out to be atrocities when viewed from the right moral philosophy or according to people's real values.
  3. The random effects overwhelm the directional ones and the network keeps transitioning through various phases far from the best one. (I think this is a less likely outcome though, because it seems like sooner or later it will hit upon one of the super-stable phases mentioned in 1.)

Have you written more about "moral phase transitions" somewhere, or have specific thoughts about these concerns?

comment by ADifferentAnonymous · 2021-08-14T22:28:23.387Z · LW(p) · GW(p)

There's a self-fulfilling prophecy aspect to this. If you expect to be judged for your transitive associations, you'll choose them carefully. If you choose your transitive associations carefully, they'll provide more Bayesian evidence about your values, making it more rational for others to judge you by them.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2021-08-15T03:05:48.215Z · LW(p) · GW(p)

If you expect to be judged for your transitive associations, then all that your transitive associations tell me about your character is that you have reason to care about your social status. I learn nothing else about your actual values in this way.

comment by frontier64 · 2021-08-15T09:00:11.639Z · LW(p) · GW(p)

Maybe you can solve this by just not caring about what Z believes in the first place? If you think his views are reprehensible and so support him being fired, you're already in a failure state. This whole discussion of whether it's really ok to cancel A because he's friends with B, and B is a Jew, is actually a net negative: it just cements the idea that it's ok to cancel B in the first place. I picture the Soviet Politburo arguing about whether or not it's ok to send friends of political dissidents to the gulags, or whether the KGB is going a little bit too far.

These sorts of discussions move the Schelling point and never actually work as pushback against the problem they're discussing.

People don't behave very differently based on their stated beliefs. There's White Nationalist programmers and there's Black Power programmers and they program about the same. Maybe they hang out with different people on the weekends and play different board games, but that doesn't matter because their job is programming. Nor does it quite make sense to fire either of them because some mentally handicapped people who spend all day on twitter decided to gang up on a programmer today.

Replies from: orthonormal
comment by orthonormal · 2021-08-15T17:07:03.912Z · LW(p) · GW(p)

You can choose to draw your bounds of tolerance as broadly as you like!

On a prescriptive level, I'm offering a coherent alternative that's between "tolerate everybody" and "tolerate nobody".

On a descriptive level, I'm pointing out why you encounter consequences when you damn the torpedoes and try to personally tolerate every fringe believer.