Rules for Epistemic Warfare?
post by Gentzel · 2021-06-05T13:30:00.896Z · LW · GW · 14 comments
In partisan contests of various forms, dishonesty, polarization, and groupthink are widespread. Political warfare creates societal collateral damage: it makes it harder for individuals to arrive at true beliefs on many subjects, because their social networks provide strong incentives to promote false beliefs. Improving social norms and technology may help us escape this situation; however, if only one side of a conflict becomes more honest, the other side may exploit that as a weakness, just as conquerors could exploit countries that were less violent. Coming up with rules analogous to the rules of war may help ratchet partisan contests toward higher levels of honesty and integrity over time, enabling more honest coalitions to become more competitive. What follows is a naïve shot at an ethos of what such rules and norms might look like within a community:
Do not lie, deceive, or otherwise speak dishonestly, unseriously, or insincerely. Do not tolerate others who do. Beyond the context of silly banter, comedy, games, pranks, and jokes, where such communication is permissible, deception is harmful and should only be directed at valid enemies.
When justified, lies should be told to enemies, *not* about enemies. When everyone lies about their enemies, one can only trust information about potential enemies gained from direct experience. This leaves the faithful isolated from friends they should have had, and the skeptical vulnerable to attack by those they had been warned about.
Many of the worst lies are those that drive innocent people to action in harmful ways: lies to the masses, lies to well-intentioned decision makers, and especially lies about matters of life and death. The reputation of those who converse unseriously and deceitfully on such matters should be utterly and unmercifully destroyed should they not attempt to seriously evaluate the truth of their own statements and to right the wrongs caused by their deceit. Disproportionate and indiscriminate deception is an epistemic war crime.
Though it should be avoided when possible, sometimes when it is necessary to deceive adversaries, it may also be necessary to deceive the innocent. In such circumstances, proportionality must be applied, and distortions imposed on the innocent must be justified by concrete, harm-mitigating gains. Harms imposed on the innocent must be compensated by the imposer, and lies to the innocent should never be allowed to go unaddressed and become permanent. Epistemic aggressors must take extraordinary efforts to mitigate collateral damage.
People should not be punished for what they believe; they should be punished for harmful actions and the intentions that their actions reflect. Punishing beliefs may sometimes enable good, but it makes it harder for groups to arrive at the truth. If you want to do good, you must be capable of finding out what is true; otherwise you only do good by *good luck.* Overall, societies should push toward higher levels of integrity and align the interests of their participants to make this possible. In virtuous competition, groups compete to achieve extreme credibility and treat their own members as allies, not useful idiots to be won over and pitted against each other. In the equilibrium of unvirtuous competition, where dishonest attacks are tolerated and launched with impunity, group members increasingly become useful idiots, and are treated as such.
14 comments
comment by Raemon · 2021-06-05T19:40:39.505Z · LW(p) · GW(p)
I haven't yet thought in detail about whether this particular set of suggestions is good, but I think dealing with the reality of "conflict incentivizes deception", and figuring out what sort of rules regarding deception can become stable Schelling points, seems really important.
comment by Dagon · 2021-06-05T14:45:57.629Z · LW(p) · GW(p)
There are no enemies. More importantly, there are no pure allies. There are only changing and varied relationships, which include both cooperation and conflict.
There is a difference of degree, not of type, between your closest friend and your bitter enemy. And it changes over time, at a different rate than information exchange does. In the same sense that you don't want to literally torture and kill someone you're currently at war against (since you could be allies in the next conflict, AND since you don't want them to take the same attitude toward you), you don't want perfect transparency and honesty with your current family or friends (knowing you will be opposing them on some topics in the future).
Heck, most people aren't well-served by putting most of their effort into self-truth. Many truths are unhelpful in day-to-day activities, and some may be actually harmful. I suspect (but it's hard to measure) there's a kind of uncanny valley of truth - for some topics, a marginal increase in true knowledge is actually harmful, and it takes a whole lot more knowledge to get back above the utility one had before starting.
For some topics, of course, lies and misdirection are wasteful and harmful, to friends as well as enemies.
↑ comment by Gordon Seidoh Worley (gworley) · 2021-06-06T04:40:02.156Z · LW(p) · GW(p)
I'm kinda surprised this comment is so controversial. I'm curious what people are objecting to that's resulting in downvotes.
↑ comment by Raemon · 2021-06-06T07:58:19.734Z · LW(p) · GW(p)
I'm surprised by the degree of controversialness of the OP and... all the comments so far?
↑ comment by Dagon · 2021-06-06T15:41:59.784Z · LW(p) · GW(p)
I downvoted the OP - it didn't have anything new, and didn't really make any logical connections between things, just stating a final position on something that's nowhere near as simple as presented. Oh, and because it's completely unworkable for a consequentialist who doesn't have a reliable "permanent enemy detector", which is the point of my comment.
I didn't expect the mixed reaction for my comment, but I kind of didn't expect many votes in either direction. To some extent I perpetrated the same thing as the OP - not a lot of novelty, and no logical connection between concepts. I think it was on-topic and did point out some issues with the OP, so I'm taking the downvotes as disagreement rather than annoyance over presentation.
edit: strong votes really make it hard to get a good signal from this. Currently, this comment has ONE vote for +10 karma, and my ancestor comment responding to the post itself has 11 votes for +6 karma. I've removed my default 2-point vote from both. But what in heck am I supposed to take from those numbers?
↑ comment by Gentzel · 2021-06-06T17:44:44.093Z · LW(p) · GW(p)
I don't think you are fully getting what I am saying, though that's understandable because I haven't added any info on what makes a valid enemy.
I agree there are rarely absolute enemies and allies. There are however allies and enemies with respect to particular mutually contradictory objectives.
Not all war is absolute: wars have at times been deliberately bounded in space, and having rules of war in the first place is evidence of partial cooperation between enemies. You may have adversarial conflicts of interest with close friends on some issues; if you can't align those interests, it isn't the end of the world. The big problem is lies and sloppy reasoning that go beyond defending one's own interests into causing unnecessary collateral damage for large groups. The entire framework here is premised on the same distinction you seem to think I don't have in mind... which is fair because it was unstated. XD
The big focus is a form of cooperation between enemies to reduce large scale indiscriminate collateral damage of dishonesty. It is easier to start this cooperation between actors that are relatively more aligned, before scaling to actors that are relatively less aligned with each other. Do you sense any floating disagreements remaining?
↑ comment by Dagon · 2021-06-06T19:50:52.511Z · LW(p) · GW(p)
I think if you frame it as "every transaction and relationship has elements of cooperation and competition, so every communication has a need for truth and deception", and then explore the specific types of trust and conflict, and how they impact the dimensions of communication, we'd be in excellent-post territory.
The bounds of understanding in humans mean that we simply don't know the right balance of cooperation and competition. So we have, at best, some wild guesses as to what's collateral damage vs what's productive advantage over our opponents. I'd argue that there's an amazing amount of self-deception in humans, and I take a Schelling Fence approach to that - I don't understand the protection and benefit others get from self-deception and maintained internal inconsistency, so I hesitate to unilaterally decry it. In myself, I strive to keep self-talk and internal models as accurate as possible, and that includes permission to lie without hesitation when I think it's to my advantage.
comment by Raemon · 2022-12-09T01:51:44.514Z · LW(p) · GW(p)
This might be the lowest karma post that I've given a significant review vote for. (I'm currently giving it a 4). I'd highly encourage folk to give it A Think.
This post seems to be asking an important question of how to integrate truthseeking and conflict theory. I think this is probably one of the most important questions in the world. Conflict is inevitable. Truthseeking is really important. They are in tension. What do we do about that?
I think this is an important civilizational question. Most people don't care nearly enough about truthseeking in the first place. The people who do care a lot about truthseeking tend to prefer avoiding conflict, i.e. tend to be "mistake theory" types.
Regular warfare is costly/terrible and should be avoided at all costs... but, "never" is just not an actually workable answer. Similarly, deception is very costly, in ways both obvious and subtle. One of my updates during the 2019 Review was that it is plausible that "don't lie" is actually even more important than "don't kill" (despite those normally being reversed in my commonsense morality). But, like violent warfare, the answer of "never" feels like an overly simplified answer to "when is it acceptable to lie?"
Eliezer's discussion of meta-honesty explores one subset of how to firm up honesty around the edges [LW · GW]. I like Gentzel's post here for pointing in a broader direction.
This is not necessarily an endorsement of any particular point made here, only that I think the question is important. I think people who gravitate towards "truthseeking above all else" have a distaste for conflict theory. Unfortunately, I trust them more than most conflict theorists on how to develop norms around truthtelling that hold up under extreme conflict.
comment by ChristianKl · 2021-06-05T15:45:23.655Z · LW(p) · GW(p)
I think the model of a war between two sides is fundamentally flawed for epistemic warfare. For most players with power, internal struggles within their community matter more for their personal success than whether their community wins against other communities. See all the posts about the problems of moral mazes.
↑ comment by Gentzel · 2021-06-06T17:56:05.063Z · LW(p) · GW(p)
Why would multi-party conflict change the utility of the rules? It does change the ease of enforcement, but that's the reason to start small and scale until the advantages of cooperating exceed the advantages of defecting. That's how lots of good things develop where cooperation is hard.
The dominance of in-group competition seems like the sort of thing that is true until it isn't. Group selection is sometimes slow, but that doesn't mean it doesn't exist. Monopolies have internal competition problems, while companies in a competitive market do get forced to develop better internal norms for cooperation, or they risk going out of business against competitors that have achieved higher internal alignment via suppressing internal zero-sum competition (or re-aligning it in a positive-sum manner for the company).
↑ comment by ChristianKl · 2021-06-06T19:30:21.447Z · LW(p) · GW(p)
> The dominance of in-group competition seems like the sort of thing that is true until it isn't.
If you look at Facebook, you see a lot of in-group competition over likes, comments, and generally getting attention. Dating markets are generally about in-group competition, and dating is something people care about a lot. Promotion decisions are about in-group competition.
> Monopolies have internal competition problems, while companies in a competitive market do get forced to develop better internal norms for cooperation, or they risk going out of business against competitors that have achieved higher internal alignment via suppressing internal zero-sum competition
While it's true that companies in competitive markets have more pressure to reduce internal zero-sum competition, we still have moral mazes [? · GW] in a lot of the large and powerful corporations.
On the other end of the spectrum you have, for example, Tucker Max's PR team that wants to promote a movie, and feminists who want to signal their values by demonstrating. Then you have them working together to make a demonstration against the film of Tucker Max. The alliance is more conscious on the side of Tucker Max's PR team than on the other side, but both "sides" are making profits and playing a game that's positive-sum for them while negative-sum for a lot of other people in society.
comment by Vanilla_cabs · 2021-06-06T08:22:31.051Z · LW(p) · GW(p)
If you keep the analogy of war, historically in-group rules have not been the driving force behind de-escalation; inter-group treaties have. So I would be much more interested in a draft of rules that would be applied simultaneously to all sides, with the goal of de-escalation in mind.
↑ comment by Gentzel · 2021-06-06T18:09:29.793Z · LW(p) · GW(p)
I am not sure that is actually true. There are many escalatory situations, border clashes, and mini-conflicts that could easily lead to far larger scale war, but don't, due to the rules and norms that military forces impose on themselves and that lead to de-escalation. Once there is broader conflict between large organizations, though, then yes, you often do need a treaty to end it.
Treaties don't work on decentralized insurgencies, though, hence forever wars: agreements can't be credibly enforced when each fighter has their own incentives and veto power. This is an area where norm spread can be helpful, and I do think online discourse is currently far more like warring groups of insurgents than warring armies.