Discursive Warfare and Faction Formation
post by Benquo · 2025-01-09T16:47:31.824Z
This is a link post for https://benjaminrosshoffman.com/discursive-warfare-and-faction-formation/
Response to Discursive Games, Discursive Warfare
The discursive distortions you discuss serve two functions:
1. Narratives can only serve as effective group identifiers by containing fixed elements that deviate from what naive reason would conclude. In other words, something about the shared story has to be a costly signal of loyalty, and therefore a sign of a distorted map. An undistorted map is advantageous for anyone regardless of group membership; a distorted map is advantageous only for people using it as an identifying trait. Commercial mapmakers will sometimes include phantom towns so that they (and courts) can distinguish competitors who plagiarized their work from competitors who independently mapped the same terrain. "Point deer, make horse" can catalyze the formation of a faction because it reduces motive ambiguity in a way that "point deer, make deer" could not.
"Not Invented Here" dynamics are part of this. To occupy territory, an intellectual faction has to exclude alternative sources of information. I think you're talking about this when you write:
LessWrong rationalism might be able to incorporate ideas from analytic into its own framework, but the possibility of folding LessWrong rationalism into analytic, and in some sense dissolving its discursive boundaries, transforms the social and epistemic position of rationalist writers, to being more minor players in a larger field, on whose desks a large pile of homework has suddenly been dumped (briefing on the history of their new discursive game).
2. Individuals and factions can rise to prominence by fighting others. You can make a debate seem higher-stakes, and therefore more attractive to spectators, by exaggerating the scope of disagreement.
The opposition to postmodernist thought on LessWrong is enacting this sort of strategy. Analytic philosophy attracts attention in part by its opposition to Continental philosophy, and vice versa. LessWrong is broadly factionally aligned with the Analytic party, in favor of Modernism and therefore against its critics, in ways that don't necessarily correspond to propositional beliefs that would change in the face of contrary evidence. Eliezer can personally notice when Steven Pinker is acting in bad faith against him, but the LessWrong community is mood-affiliated with Steven Pinker, and therefore implicitly against people like Taleb and Graeber.
These two functions can reinforce each other.
You can force a disagreement to persist by arguing for claims that are in your opponent's group-identity blind spot and preferentially arguing against the people with the most exaggerated blind spots. (There's a tradeoff, though. You get more attention by arguing against people who won't try to learn from you, but you also get more attention by arguing against people who are more prestigious because their arguments make more sense. We see a variety of niches at different levels of prestige.) You can attract more attention by exaggerating those claims. And you can form an identity around this (and thus gain narrative control over followers) by forming a reciprocal blind spot around your exaggerations.
This is the essence of the Hegelian dialectic. It is a conflict strategy that expropriates not from its nominal enemy, but from people who mistake the kayfabe for either a genuine disagreement or a true conflict. The movie Battle of Wits (AKA Battle of Warriors) is the best representation I've seen of this dynamic: a Mohist (a Chinese utilitarian) is invited to help defend a city, but gradually discovers that the belligerents on both sides are not actually acting on self-interest or trying to win the conflict; they are instead committed to playing out their roles, even when this kills them. They interpret his constructive attempts to save lives as power grabs, and the regime he's trying to help repeatedly acts to thwart him. His attempts to save the lives of the enemy soldiers and leaders are also thwarted, partly by their own actions. By the end of the movie the city has been burnt to the ground by the armies supposedly fighting over it, and the Mohist hero is leading away the local children, who aren't old enough to have been initiated into a Hegelian death cult.
You bring up Marx as an example of someone who tried and failed to control the reception of his own ideas. But such "control" only makes sense in the context of brand management. Marx didn't only write the Communist Manifesto, which defined his factional brand; he also wrote Capital, an explanation of class dynamics within a basically Ricardian frame.
Capital won Marx a lot of prestige because it seemed intellectually credible: it could account for itself in Ricardian terms, and Ricardo was widely regarded as intellectually credible. This is related to the fact that there is no Ricardian faction; Ricardo is tacitly accepted on the right as well as the left, because he didn't also try to catalyze an adversarial political movement; he simply advanced an explanatory theory. Marx modeled his strategy on that of Hegel (he explicitly described his materialist dialectic as "Hegel turned on his head," a perfectly Hegelian move), and Hegel identified as a Spinozan (another foundational figure who, like Ricardo, was widely accepted but not identifiable with any major political faction).
What's not wrong on purpose is persuasive but does not become a factional identity. What becomes a factional identity is wrong on purpose.
Applying this to LessWrong: Plenty of people read the Sequences, improved their self-models and epistemic standards, and went on to do interesting things not particularly identified with LessWrong. Also, people formed an identity around Eliezer, the Sequences, and MIRI, which means that the community clustered around LessWrong is - aside from a few very confused people who until recently still thought it was about applying the lessons of the Sequences - committed not to Eliezer's insights but to exaggerated versions of his blind spots.
The people who aren't doing that mostly aren't participating in the LessWrong identity. But while factions like that are hostile to the confused people who behave as though they're part of a community trying to become less wrong, they are also parasitic on such people, claiming credit for their intellectual contributions. When such participation is fully extinguished, the group begins to decay, having nothing distinctive to offer, unless it has become too big to fail, in which case it's just another component of one political faction or another.
3 comments
comment by Chris_Leong · 2025-01-10T13:02:58.155Z
"Committed not to Eliezer's insights but to exaggerated versions of his blind spots"
My guess would be that this is an attempt to apply a general critique of what tends to happen in communities to the LW community, without accounting for its specifics.
Most people in the LW community would say that Eliezer is overconfident or even arrogant (sorry, Eliezer!).
The incentive gradient for status-hungry folk is not to double down on Eliezer's views, but to double down on your idiosyncratic version of rationalism, different enough from the community's to be interesting, but similar enough to be legible.
(Also, I strongly recommend the post this is replying to. I was already aware that discourse functioned in the way described, but it helped me crystallise some of the phenomena much more clearly.)
↑ comment by Benquo · 2025-01-10T17:24:58.455Z
I'm thinking of cases like Eliezer's Politics is the Mind-Killer, which makes the relatively narrow claim that politically loaded examples are bad for illustrating principles of rationality in the context of learning and teaching those principles, and so should be avoided when a less politicized alternative is available. I think this falsely assumes that, under current circumstances, it's feasible for some facts to be apolitical in the absence of an active political defense of the possibility of apolitical speech. But that's a basically reasonable and sane mistake to make. Then I see LessWrongers proceed as though Politics is the Mind-Killer established canonically that it is bad to mention when someone is saying or doing something politically loaded, or to discuss recognized-as-political precedents, which interferes with the sort of defense that Politics is the Mind-Killer implicitly assumed was a solved problem.
Or how Eliezer both explicitly wrote at length against treating intellectual authorities as specially entitled to opinions AND played with themes of being an incomprehensibly powerful optimization process, but the LessWrong community ended up crystallizing around an exaggerated version of the latter while mostly ignoring his explicit warnings against authority-based reasoning. Eliezer has personally commented on this:
"How dare you think that you're better at meta-rationality than Eliezer Yudkowsky, do you think you're special" - is somebody trolling? Have they never read anything I've written in my entire life? Do they have no sense, even, of irony? Yeah, sure, it's harder to be better at some things than me, sure, somebody might be skeptical about that, but then you ask for evidence or say "Good luck proving that to us all eventually!" You don't be like, "Do you think you're special?" What kind of bystander-killing argumentative superweapon is that? What else would it prove?
I really don't know how I could make this any clearer. I wrote a small book whose second half was about not doing exactly this. I am left with a sense that I really went to some lengths to prevent this, I did what society demands of a person plus over 10,000% (most people never write any extended arguments against bad epistemology at all, and society doesn't hold that against them), I was not subtle. At some point I have to acknowledge that other human beings are their own people and I cannot control everything they do - and I hope that others will also acknowledge that I cannot avert all the wrong thoughts that other people think, even if I try, because I sure did try. A lot. Over many years. Aimed at that specific exact way of thinking. People have their own wills, they are not my puppets, they are still not my puppets even if they have read some blog posts of mine or heard summaries from somebody else who once did; I have put in at least one hundred times the amount of effort that would be required, if any effort were required at all, to wash my hands of this way of thinking.
Or how Eliezer wrote about how modern knowledge work has become harmfully disembodied and dissociated from physical reality - going into detail about how running from a tiger engages your whole sensorimotor system in a way that staring at a computer screen doesn't - but lots of LessWrongers seem to endorse and even celebrate this very dissociation from physical reality in practice.
↑ comment by Viliam · 2025-01-20T15:48:59.990Z
"The incentive gradient for status-hungry folk is not to double down on Eliezer's views, but to double down on your idiosyncratic version of rationalism, different enough from the community's to be interesting, but similar enough to be legible."
The easiest way to do that is to add something that is considered high-status in the mainstream society, such as religion. (And claim that misunderstanding the true value of religion is Eliezer's blind spot.)
Kinda like the Median Voter Theorem -- draw a scale with Eliezer on one end and mainstream society on the other, and find a convenient position in between, attracting people who find the direction towards LessWrong fascinating (lots of new ideas) but also limiting (because they would have to give up some of their existing ideas).